diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 9fcb69e2e9..dba39ac36d 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,47 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.17.3-nightly.1 + - 3.17.2 + - 3.17.2-nightly.4 + - 3.17.2-nightly.3 + - 3.17.2-nightly.2 + - 3.17.2-nightly.1 + - 3.17.1 + - 3.17.1-nightly.3 + - 3.17.1-nightly.2 + - 3.17.1-nightly.1 + - 3.17.0 + - 3.16.7 + - 3.16.7-nightly.2 + - 3.16.7-nightly.1 + - 3.16.6 + - 3.16.6-nightly.1 + - 3.16.5 + - 3.16.5-nightly.5 + - 3.16.5-nightly.4 + - 3.16.5-nightly.3 + - 3.16.5-nightly.2 + - 3.16.5-nightly.1 + - 3.16.4 + - 3.16.4-nightly.3 + - 3.16.4-nightly.2 + - 3.16.4-nightly.1 + - 3.16.3 + - 3.16.3-nightly.5 + - 3.16.3-nightly.4 + - 3.16.3-nightly.3 + - 3.16.3-nightly.2 + - 3.16.3-nightly.1 + - 3.16.2 + - 3.16.2-nightly.2 + - 3.16.2-nightly.1 + - 3.16.1 + - 3.16.0 + - 3.16.0-nightly.2 + - 3.16.0-nightly.1 + - 3.15.12 + - 3.15.12-nightly.4 - 3.15.12-nightly.3 - 3.15.12-nightly.2 - 3.15.12-nightly.1 @@ -94,47 +135,6 @@ body: - 3.14.11-nightly.3 - 3.14.11-nightly.2 - 3.14.11-nightly.1 - - 3.14.10 - - 3.14.10-nightly.9 - - 3.14.10-nightly.8 - - 3.14.10-nightly.7 - - 3.14.10-nightly.6 - - 3.14.10-nightly.5 - - 3.14.10-nightly.4 - - 3.14.10-nightly.3 - - 3.14.10-nightly.2 - - 3.14.10-nightly.1 - - 3.14.9 - - 3.14.9-nightly.5 - - 3.14.9-nightly.4 - - 3.14.9-nightly.3 - - 3.14.9-nightly.2 - - 3.14.9-nightly.1 - - 3.14.8 - - 3.14.8-nightly.4 - - 3.14.8-nightly.3 - - 3.14.8-nightly.2 - - 3.14.8-nightly.1 - - 3.14.7 - - 3.14.7-nightly.8 - - 3.14.7-nightly.7 - - 3.14.7-nightly.6 - - 3.14.7-nightly.5 - - 3.14.7-nightly.4 - - 3.14.7-nightly.3 - - 3.14.7-nightly.2 - - 3.14.7-nightly.1 - - 3.14.6 - - 3.14.6-nightly.3 - - 3.14.6-nightly.2 - - 3.14.6-nightly.1 - - 3.14.5 - - 3.14.5-nightly.3 - - 3.14.5-nightly.2 - - 3.14.5-nightly.1 - - 3.14.4 - - 3.14.4-nightly.4 - - 3.14.4-nightly.3 validations: required: true - type: dropdown diff --git a/.github/workflows/miletone_release_trigger.yml b/.github/workflows/miletone_release_trigger.yml index 4a031be7f9..d755f7eb9f 100644 --- a/.github/workflows/miletone_release_trigger.yml +++ b/.github/workflows/miletone_release_trigger.yml @@ -5,12 +5,6 @@ on: inputs: milestone: required: true - release-type: - type: choice - description: What release should be created - options: - - release - - pre-release milestone: types: closed diff --git a/.gitignore b/.gitignore index 50f52f65a3..622d55fb88 100644 --- a/.gitignore +++ b/.gitignore @@ -37,6 +37,7 @@ Temporary Items ########### /build /dist/ +/server_addon/packages/* /vendor/bin/* /vendor/python/* diff --git a/CHANGELOG.md b/CHANGELOG.md index 095e0d96e4..7d5cf2c4d2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,4947 @@ # Changelog +## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.1...3.17.2) + +### **🆕 New features** + + +
+Maya: Add MayaPy application. #5705 + +This adds mayapy to the application to be launched from a task. + + +___ + +
+ + +
+Feature: Copy resources when downloading last workfile #4944 + +When the last published workfile is downloaded as a prelaunch hook, all resource files referenced in the workfile representation are copied to the `resources` folder, which is inside the local workfile folder. + + +___ + +
+ + +
+Blender: Deadline support #5438 + +Add Deadline support for Blender. + + +___ + +
+ + +
+Fusion: implement toggle to use Deadline plugin FusionCmd #5678

Fusion 17 doesn't work in Deadline 10.3, but FusionCmd does, and FusionCmd is arguably the better option as a headless variant: the Fusion plugin keeps closing and reopening the application when a worker runs on an artist machine, which FusionCmd avoids. Added a Project Settings option for admins to select the appropriate Deadline plugin.


___

+ + +
+Loader tool: Refactor loader tool (for AYON) #5729 + +Refactored loader tool to new tool. Separated backend and frontend logic. Refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. The tool is also replacing library loader. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: implement matchmove publishing #5445

Adds the possibility to export multiple cameras in a single `matchmove` family instance, both as `abc` and `ma`. Exposes a 'Keep image planes' flag to control the export of image planes.


___

+ + +
+Maya: Add optional Fbx extractors in Rig and Animation family #5589

This PR allows users to optionally export control rigs (optionally with mesh) and animated rigs as FBX by attaching the rig objects to two newly introduced sets.


___

+ + +
+Maya: Optional Resolution Validator for Render #5693

Adds an optional resolution validator for the render family in Maya, similar to the one in Max. It checks whether the resolution in the render settings matches the one set in the database.


___

+ + +
+Use host's node uniqueness for instance id in new publisher #5490

Instead of writing `instance_id` as a parm or attribute on publish instances we can, for some hosts, rely on a unique name or path within the scene to refer to that particular instance. This fixes #4820: when such a publish instance is duplicated using the host's (DCC) own functionality, uniqueness of the duplicate is already ensured, whereas duplicated attributes would keep the exact same value they were duplicated from, making `instance_id` non-unique.

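A minimal sketch of the idea (a hypothetical helper, not the actual OpenPype implementation): deriving the id from the node's unique scene path means a duplicated node automatically gets a new id.

```python
# Hypothetical sketch: derive the instance id from the node's unique
# scene path instead of storing it as a node attribute.
def get_instance_id(scene_path):
    """Return an id that is unique per scene node.

    DCCs enforce unique node paths, so a duplicated node gets a
    different path and therefore a different id, unlike a duplicated
    attribute which would keep the original value.
    """
    return "instance-{}".format(scene_path.strip("|").replace("|", "."))


print(get_instance_id("|group1|pSphere1"))  # instance-group1.pSphere1
print(get_instance_id("|group1|pSphere2"))  # instance-group1.pSphere2
```

___
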
+ + +
+Max: Implementation of OCIO configuration #5499

Resolves #5473. Implementation of OCIO configuration for Max 2024, following the color management update in Max 2024.


___

+ + +
+Nuke: Multiple format support for ExtractReviewDataMov #5623

This PR fixes the plugin `ExtractReviewDataMov` not being able to support extensions other than `mov`. The plugin is also renamed to `ExtractReviewDataBakingStreams` as it provides multiple format support.


___

+ + +
+Bugfix: houdini switching context doesn't update variables #5651

Allows admins to define a list of vars (e.g. JOB) with (dynamic) values that will be updated on context changes, e.g. when switching to another asset or task. Using template keys is supported, but formatting key capitalization variants is not, e.g. {Asset} and {ASSET} won't work. Disabling the "Update Houdini vars on context change" feature leaves all Houdini vars unmanaged, so no context update changes will occur. This PR also adds a new menu button to update vars on demand.

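A minimal sketch of the var-update idea, assuming it runs inside Houdini where the `hou` module is available; the `var_templates` mapping and `context` values are hypothetical examples, not the plugin's actual settings format:

```python
import hou

# Hypothetical admin-defined mapping of var names to templates.
var_templates = {
    "JOB": "{root}/{project}/{asset}/work",
}


def update_houdini_vars(context):
    """Set global Houdini variables from templates on context change."""
    for name, template in var_templates.items():
        value = template.format(**context)
        # 'set -g' makes the variable global so dependent nodes pick
        # up the new value.
        hou.hscript('set -g {} = "{}"'.format(name, value))


update_houdini_vars({"root": "P:/projects", "project": "demo", "asset": "sh010"})
```

___
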
+ + +
+Publisher: Fix report maker memory leak + optimize lookups using set #5667

Fixes a memory leak where resetting the publisher did not clear the stored plugins for the Publish Report Maker. Also changes the stored plugins to a `set` to optimize lookup speed.


___

+ + +
+Add openpype_mongo command flag for testing. #5676 + +Instead of changing the environment, this command flag allows for changing the database. + + +___ + +
+ + +
+Nuke: minor docstring and code tweaks for ExtractReviewMov #5695 + +Code and docstring tweaks on https://github.com/ynput/OpenPype/pull/5623 + + +___ + +
+ + +
+AYON: Small settings fixes #5699

Small changes/fixes related to AYON settings. All Foundry app variants `13-0` have the label `13.0`. The key `"ExtractReviewIntermediates"` is not mandatory in settings.


___

+ + +
+Blender: Alembic Animation loader #5711 + +Implemented loading Alembic Animations in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Missing "data" field and enabling of audio #5618 + +When updating audio containers, the field "data" was missing and the audio node was not enabled on the timeline. + + +___ + +
+ + +
+Maya: Bug in validate Plug-in Path Attribute #5687

Overwriting the list with a string caused `TypeError: string indices must be integers` in subsequent iterations, crashing the validator plugin.


___

+ + +
+General: Avoid fallback if value is 0 for handle start/end #5652

There's a bug in `pyblish_functions.get_time_data_from_instance_or_context`: if `handleStart` or `handleEnd` on the instance is set to 0, the code falls back to grabbing the handles from the instance context. Instead, it should only fall back to `instance.context` if the key doesn't exist. This change only affected me on `handleStart`/`handleEnd`, and it's unlikely to cause issues for `frameStart`, `frameEnd` or `fps`, but the `get` logic is wrong regardless.

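A minimal sketch of the wrong versus correct fallback, using hypothetical stand-in dicts for the instance and context data:

```python
instance_data = {"handleStart": 0}   # handles explicitly set to zero
context_data = {"handleStart": 10}

# Buggy: `or` treats a legitimate 0 as "missing" and falls back.
handle_start = instance_data.get("handleStart") or context_data["handleStart"]
print(handle_start)  # 10 -- wrong, the instance said 0

# Correct: fall back only when the key truly does not exist.
if "handleStart" in instance_data:
    handle_start = instance_data["handleStart"]
else:
    handle_start = context_data["handleStart"]
print(handle_start)  # 0
```

___
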
+ + +
+Fusion: added missing env vars to Deadline submission #5659

The environment variables discerning the type of job were missing. Without them, the injection of environment variables won't start.


___

+ + +
+Nuke: workfile version synchronization settings fixed #5662

The settings for synchronizing the workfile version with published products are fixed.


___

+ + +
+AYON Workfiles Tool: Open workfile changes context #5671 + +Change context when workfile is opened. + + +___ + +
+ + +
+Blender: Fix remove/update in new layout instance #5679 + +Fixes an error that occurs when removing or updating an asset in a new layout instance. + + +___ + +
+ + +
+AYON Launcher tool: Fix refresh btn #5685

The refresh button now propagates refreshed content properly. Folders and tasks are cached for 60 seconds instead of 10. Auto-refresh in the launcher refreshes only actions and their related data, that is, the project and project settings.


___

+ + +
+Deadline: handle all valid paths in RenderExecutable #5694

This commit enhances the path resolution mechanism in the RenderExecutable function of the AYON Deadline plugin. Previously, the function only considered paths starting with a tilde (~), ignoring other valid paths listed in exe_list. This limitation led to an empty expanded_paths list when none of the paths in exe_list started with a tilde, causing the function to fail to find the AYON executable. With this fix, RenderExecutable correctly processes and includes all valid paths from exe_list, improving its reliability and preventing unnecessary errors related to locating the AYON executable.

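A minimal sketch of the fixed expansion logic (a hypothetical helper, not the plugin's exact code), assuming `exe_list` is a list of candidate executable paths:

```python
import os


def expand_paths(exe_list):
    """Expand '~' in paths while keeping every candidate."""
    # os.path.expanduser is a no-op for paths without '~', so plain
    # absolute paths survive instead of being silently dropped.
    return [os.path.expanduser(path) for path in exe_list]


candidates = ["~/ayon/ayon", "C:/Program Files/Ynput/AYON/ayon.exe"]
print(expand_paths(candidates))
```

___
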
+ + +
+AYON Launcher tool: Fix skip last workfile boolean #5700 + +Skip last workfile boolean works as expected. + + +___ + +
+ + +
+Chore: Explore here action can work without task #5703

The Explore here action no longer crashes when no task is selected, and the error message was changed a little.


___

+ + +
+Testing: Inject mongo_url argument earlier #5706

Fix for https://github.com/ynput/OpenPype/pull/5676. The Mongo URL is used earlier in the execution.


___

+ + +
+Blender: Add support to auto-install PySide2 in blender 4 #5723 + +Change version regex to support blender 4 subfolder. + + +___ + +
+ + +
+Fix: Hardcoded main site and wrongly copied workfile #5733 + +Fixing these two issues: +- Hardcoded main site -> Replaced by `anatomy.fill_root`. +- Workfiles can sometimes be copied while they shouldn't. + + +___ + +
+ + +
+Bugfix: ServerDeleteOperation asset -> folder conversion typo #5735 + +Fix ServerDeleteOperation asset -> folder conversion typo + + +___ + +
+ + +
+Nuke: loaders are filtering correctly #5739

The variable name for filtering by extensions was not correct; it is supposed to be plural. It is fixed now and filtering works as expected.


___

+ + +
+Nuke: failing multiple thumbnails integration #5741

This handles the situation where `ExtractReviewIntermediates` (previously `ExtractReviewDataMov`) has multiple outputs, including thumbnails that need to be integrated. Previously, integrating the thumbnail representation broke the integration process; this is resolved by no longer integrating thumbnails as loadable representations. The new default is that thumbnail representations are NOT integrated (they will not show up in the DB and cannot be loaded in the Loader) and no `_thumb.jpg` is left in the (most likely `render`) publish folder. If you need to override this behavior, use `project_settings/global/publish/PreIntegrateThumbnails`.


___

+ + +
+AYON Settings: Fix global overrides #5745 + +The `output` dictionary that gets passed into `ayon_settings._convert_global_project_settings` gets replaced when converting the settings for `ExtractOIIOTranscode`. This results in `global` not being in the output dictionary and thus the defaults being used and not the project overrides. + + +___ + +
+ + +
+Chore: AYON query functions arguments #5752 + +Fixed how `archived` argument is handled in get subsets/assets function. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Publisher: Refactor Report Maker plugin data storage to be a dict by plugin.id #5668

Refactors the Report Maker plugin data storage to be a dict keyed by `plugin.id`. Also fixes the `_current_plugin_data` type in `__init__`.


___

+ + +
+Chore: Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost #5701 + +Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost + + +___ + +
+ +### **Merged pull requests** + + +
+Chore: Maya reduce get project settings calls #5669 + +Re-use system settings / project settings where we can instead of requerying. + + +___ + +
+ + +
+Extended error message when getting subset name #5649

Each Creator uses the `get_subset_name` function, which collects context data and fills the configured template's placeholders. If any key is missing in the template data, a non-descriptive error is thrown. This change provides a more verbose message.

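A minimal sketch of what such a verbose message could look like (a hypothetical helper, not the exact implementation):

```python
def fill_subset_template(template, data):
    """Fill the subset name template with a descriptive error on failure."""
    try:
        return template.format(**data)
    except KeyError as exc:
        raise KeyError(
            "Missing key {} in subset name template '{}'. "
            "Available keys: {}".format(exc, template, ", ".join(sorted(data)))
        )


print(fill_subset_template("{family}{Task}", {"family": "model", "Task": "Rig"}))
# A missing {Variant} key would now report the template and the
# available keys instead of a bare KeyError.
```

___
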
+ + +
+Tests: Remove checks for env var #5696

The env var will be filled in the `env_var` fixture; at this point it is too early to check it.


___

+ + + + +## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.0...3.17.1) + +### **🆕 New features** + + +
+Unreal: Yeti support #5643 + +Implemented Yeti support for Unreal. + + +___ + +
+ + +
+Houdini: Add Static Mesh product-type (family) #5481

This PR adds support for publishing Unreal Static Meshes in Houdini as FBX. Quick recap:
- [x] Add UE Static Mesh Creator
- [x] Dynamic subset name like in Maya
- [x] Collect Static Mesh Type
- [x] Update collect output node
- [x] Validate FBX output node
- [x] Validate mesh is static
- [x] Validate Unreal Static Mesh Name
- [x] Validate Subset Name
- [x] FBX Extractor
- [x] FBX Loader
- [x] Update OP Settings
- [x] Update AYON Settings


___

+ + +
+Launcher tool: Refactor launcher tool (for AYON) #5612 + +Refactored launcher tool to new tool. Separated backend and frontend logic. Refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: Use custom staging dir function for Maya renders - OP-5265 #5186 + +Check for custom staging dir when setting the renders output folder in Maya. + + +___ + +
+ + +
+Colorspace: updating file path detection methods #5273 + +Support for OCIO v2 file rules integrated into the available color management API + + +___ + +
+ + +
+Chore: add default isort config #5572 + +Add default configuration for isort tool + + +___ + +
+ + +
+Deadline: set PATH environment in deadline jobs by GlobalJobPreLoad #5622

This PR makes `GlobalJobPreLoad` set the `PATH` environment variable in Deadline jobs so that we don't have to use the full executable path for Deadline to launch the DCC app. This trick saves us from adding logic to pass the Houdini patch version and from modifying the Houdini Deadline plugin, and it should work with other DCCs as well.

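A minimal sketch of the idea with a hypothetical job environment dict (the real `GlobalJobPreLoad` runs inside Deadline's Python plugin API):

```python
import os


def inject_executable_dir(job_env, exe_path):
    """Prepend the DCC executable's directory to the job PATH.

    With the directory on PATH, the render command can be just the
    executable name instead of a full, version-specific path.
    """
    exe_dir = os.path.dirname(exe_path)
    current = job_env.get("PATH", "")
    job_env["PATH"] = os.pathsep.join([exe_dir, current]) if current else exe_dir
    return job_env


env = inject_executable_dir({}, "/opt/hfs19.5.605/bin/houdini")
print(env["PATH"])  # /opt/hfs19.5.605/bin
```

___
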
+ + +
+nuke: extract review data mov read node with expression #5635

Some productions set default values for Read nodes; those settings no longer collide.


___

+ +### **🐛 Bug fixes** + + +
+Maya: Support new publisher for colorsets validation. #5630

Fixes `validate_color_sets` for the new publisher. In current `develop` the repair option does not appear due to wrong error raising.


___

+ + +
+Houdini: Camera Loader fix mismatch for Maya cameras #5584 + +This PR adds +- A workaround to match Maya render mask in Houdini +- `SetCameraResolution` inventory action +- set camera resolution when loading or updating camera + + +___ + +
+ + +
+Nuke: fix set colorspace on writes #5634 + +Colorspace is set correctly to any write node created from publisher. + + +___ + +
+ + +
+TVPaint: Fix review family extraction #5637 + +Extractor marks representation of review instance with review tag. + + +___ + +
+ + +
+AYON settings: Extract OIIO transcode settings #5639 + +Output definitions of Extract OIIO transcode have name to match OpenPype settings, and the settings are converted to dictionary in settings conversion. + + +___ + +
+ + +
+AYON: Fix task type short name conversion #5641 + +Convert AYON task type short name for OpenPype correctly. + + +___ + +
+ + +
+colorspace: missing `allowed_exts` fix #5646

The colorspace module no longer fails due to a missing `allowed_exts` attribute.


___

+ + +
+Photoshop: remove trailing underscore in subset name #5647

If the {layer} placeholder is at the end of the subset name template and not used (for example in `auto_image`, where separating by layer doesn't make sense), a trailing '_' was kept. This updates the cleaning logic and extracts it into a shared place, as the regular `image` instance might need the same.

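A minimal sketch of such cleanup logic, assuming '_' separates template parts (a hypothetical helper):

```python
import re


def clean_subset_name(subset_name):
    """Strip separator leftovers from unused template placeholders."""
    # Collapse doubled separators left by an empty {layer} in the middle
    # of the template, then strip a leading/trailing one.
    return re.sub(r"_{2,}", "_", subset_name).strip("_")


print(clean_subset_name("imageMain_"))     # imageMain
print(clean_subset_name("image__beauty"))  # image_beauty
```

___
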
+ + +
+traypublisher: missing `assetEntity` in context data #5648

The missing `assetEntity` key in context data is no longer a problem.


___

+ + +
+AYON: Workfiles tool save button works #5653

Fixes the Save As button in the workfiles tool. (It is a mystery why this stopped working.)


___

+ + +
+Max: bug fix delete items from container #5658

Fixes the bug shown when clicking "Delete Items from Container", selecting nothing and pressing OK.


___

+ +### **🔀 Refactored code** + + +
+Chore: Remove unused functions from Fusion integration #5617 + +Cleanup unused code from Fusion integration + + +___ + +
+ +### **Merged pull requests** + + +
+Increase timeout for deadline test #5654

Deadline picks up jobs quite slowly, so bump up the delay.


___

+ + + + +## [3.17.0](https://github.com/ynput/OpenPype/tree/3.17.0) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.7...3.17.0) + +### **🚀 Enhancements** + + +
+Chore: Remove schema from OpenPype root #5355 + +Remove unused schema directory in root of repository which was moved inside openpype/pipeline/schema. + + +___ + +
+ + +
+Igniter: Allow custom Qt scale factor rounding policy #5554 + +Do not force `PassThrough` rounding policy if different policy is defined via env variable. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Chore: Lower urllib3 to support older OpenSSL #5538 + +Lowered `urllib3` to `1.26.16` to support older OpenSSL. + + +___ + +
+ + +
+Chore: Do not try to add schema to zip files #5557

Do not add the `schema` folder to the zip file. This fixes an issue caused by https://github.com/ynput/OpenPype/pull/5355 .


___

+ + +
+Chore: Lower click dependency version #5629 + +Lower click version to support older versions of python. + + +___ + +
+ +### **Merged pull requests** + + +
+Bump certifi from 2023.5.7 to 2023.7.22 #5351 + +Bumps [certifi](https://github.com/certifi/python-certifi) from 2023.5.7 to 2023.7.22. +
___

+ + + + +## [3.16.7](https://github.com/ynput/OpenPype/tree/3.16.7) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.6...3.16.7) + +### **🆕 New features** + + +
+Maya: Extract active view as thumbnail when no thumbnail set #5426 + +This sets the Maya instance's thumbnail to the current active view if no thumbnail was set yet. + + +___ + +
+ + +
+Maya: Implement USD publish and load using native `mayaUsdPlugin` #5573

Implement Creator and Loaders for extraction and loading of USD files using Maya's own `mayaUsdPlugin`. Also adds support for loading a `usd` file into an Arnold Standin (`aiStandin`) and assigning looks to it.


___

+ + +
+AYON: Ignore separated modules #5619 + +Do not load already separated modules from default directory. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: Reduce amount of code for Collect Looks #5253

- Refactor `get_file_node_files` because popping from `paths` by index should have been done in reversed order anyway. It's now changed to not need popping at all.
- Removed unused `RENDERER_NODE_TYPES` and the if-branch which collected the unused `node_attrs` list and collected members, which was also done outside of the if-branch and thus generated no extra data.
- Collected all materials from look set attributes at once instead of with multiple queries
- Collected all file nodes in history with a single query instead of per type
- Restructured assignment of `instance.data["resources"]` to be more readable
- Cached `PXR_NODES` only once (Note: plugin load is checked on discovery of the collect look plugin) instead of querying plugin load and its nodes per file node per attribute
- Removed some debug logs and combined some messages


___

+ + +
+AYON: Mark deprecated settings in Maya #5627

Added deprecation info to the docstrings of Maya color management settings. Resolves: https://github.com/ynput/OpenPype/issues/5556


___

+ + +
+Max: switching versions of maxScene maintain parentage/links with the loaders #5424

When using the scene inventory to manage or update versions of loaded objects, the linked modifiers and parentage of the objects are kept. Meanwhile, loaded objects from all loaders are no longer parented to the container with OP data.


___

+ + +
+3ds max: small tweaks to obj extractor and model publishing flow #5605

There might be situations where the OBJ Extractor passes without failure but no obj file is produced. This adds a simple check directly into the extractor to catch that earlier than in the integration phase. Also switched `Validate USD Plugin` to optional, because it always ran no matter whether Extract USD was enabled, hindering testing (and publishing).


___

+ + +
+TVPaint: Plugin can be reopened #5610 + +TVPaint plugin can be reopened. + + +___ + +
+ + +
+Maya: Remove context prompt #5632 + +More of a plea than a PR, but could we please remove the context prompt in Maya when switching tasks? + + +___ + +
+ + +
+General: Create a desktop icon is checked #5636 + +In OP Installer `Create a desktop icon` is checked by default. +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Extract look is not AYON compatible - OP-5375 #5341

The textures that would use hardlinking go through texture processors, and currently all texture processors are hardcoded to copy the texture instead of respecting the force-copy setting. The texture processors were last modified 4 months ago, so effectively all clients on a pipeline updated in the last 4 months won't have been utilizing hardlinking at all, since the hardcoded texture processors copy textures no matter the OS. This opts for completely disabling the hardlinking feature while we figure out what to do about it.


___

+ + +
+Maya: Multiverse USD Override inherit from correct new style creator #5566 + +Fix Creator for Multiverse USD Override by inheriting from correct new style creator class type + + +___ + +
+ + +
+Max: Bug Fix Alembic Loaders with Ornatrix #5434

Fixes the Alembic loader to support both Ornatrix alembics and Max alembics. Adds Ornatrix alembic loaders for loading alembics with Ornatrix-related modifiers.


___

+ + +
+AYON: Avoid creation of duplicated links #5593

Handle cases when an existing link should be recreated, and do not create the same link multiple times during a single publish.


___

+ + +
+Extract Review: Multilayer specification for ffmpeg #5613

Extract Review now specifies the layer name when the exr is multilayer.


___

+ + +
+Fusion: added support for Fusion 17 #5614

Fusion 17 still uses Python 3.6, which causes issues with some of our delivered libraries. Vendorized the necessary set for Python 3.6.


___

+ + +
+Publisher: Fix screenshot widget #5615

Use the correct super method name. EDITED: Removed the fade animation, which was not triggered in some cases; e.g. in Nuke the animation does not start. I expect that is caused by `exec_` on the dialog blocking event processing for the animation; even after adding the window as parent it still didn't trigger the registered callback. Modified how the "empty" space is filled by using paths instead of the painter's clear mode. Added render hints for antialiasing.


___

+ + +
+Photoshop: auto_images without alpha will not fail #5620

ExtractReview caused an issue on `auto_image` instances without an alpha channel; this fixes it.


___

+ + +
+Fix - _id key used instead of id in get_last_version_by_subset_name #5626

Plain 'id' is not returned because of the value in `fields`, which caused a KeyError.


___

+ + +
+Bugfix: create symlinks for ssl libs on Centos 7 #5633 + +Docker build was missing `libssl.1.1.so` and `libcrypto.1.1.so` symlinks needed by the executable itself, because Python is now explicitly built with OpenSSL 1.1.1 + + +___ + +
+ +### **📃 Documentation** + + +
+Documentation/local settings #5102 + +I completed the "Working with local settings" page. I updated the screenshot, wrote an explanation for each empty category, and if available, linked the more detailed pages already existing. I also added the "Environments" category. + + +___ + +
+ + + + +## [3.16.6](https://github.com/ynput/OpenPype/tree/3.16.6) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.5...3.16.6) + +### **🆕 New features** + + +
+Workfiles tool: Refactor workfiles tool (for AYON) #5550 + +Refactored workfiles tool to new tool. Separated backend and frontend logic. Refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. + + +___ + +
+ + +
+AfterEffects: added validator for missing files in FootageItems #5590

A published composition in AE can contain multiple FootageItems as layers. If a FootageItem contains an imported file that doesn't exist, the render triggered by the publish process fails silently and no output is generated. This could cause a failure later in the process (in `ExtractReview`) with an unclear reason. This PR adds validation to protect against this.


___

+ +### **🚀 Enhancements** + + +
+Maya: Yeti Cache Include viewport preview settings from source #5561

When publishing and loading Yeti caches, persist the display output and preview colors + settings to ensure consistency in the viewport.


___

+ + +
+Houdini: validate colorspace in review rop #5322

Adds a validator that checks whether the 'OCIO Colorspace' parameter on the review ROP is set to a valid value, a step towards managing colorspace in the review ROP. Valid values are the ones in the dropdown menu. The validator also provides some helper actions. This PR is related to #4836 and #4833.


___

+ + +
+Colorspace: adding abstraction of publishing related functions #5497 + +The functionality of Colorspace has been abstracted for greater usability. + + +___ + +
+ + +
+Nuke: removing redundant workfile colorspace attributes #5580

The Nuke root workfile colorspace knobs have long been configured automatically via config roles, and the default values also work well. Therefore there is no need for pipeline-managed knobs.


___

+ + +
+Ftrack: Less verbose logs for Ftrack integration in artist facing logs #5596 + +- Reduce artist-facing logs for component integration for Ftrack +- Avoid "Comment is not set" log in artist facing report for Kitsu and Ftrack +- Remove info log about `ffprobe` inspecting a file (changed to debug log) +- interesting to see however that it ffprobes the same jpeg twice - but maybe once for thumbnail? + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Fix rig validators for new out_SET and controls_SET names #5595 + +Fix usage of `out_SET` and `controls_SET` since #5310 because they can now be prefixed by the subset name. + + +___ + +
+ + +
+TrayPublisher: set default frame values to sequential data #5530

We inherit default frame, handle and fps data either from the project or set them to 0. This is for the case where a production decides not to ingest sequential representations with asset-based metadata.


___

+ + +
+Publisher: Screenshot opacity value fix #5576 + +Fix opacity value. + + +___ + +
+ + +
+AfterEffects: fix imports of image sequences #5581 + +#4602 broke imports of image sequences. + + +___ + +
+ + +
+AYON: Fix representation context conversion #5591 + +Do not fix `"folder"` key in representation context until it is needed. + + +___ + +
+ + +
+ayon-nuke: default factory to lists #5594

Default factories were missing in the settings schemas for complex objects like lists, which caused saving of settings to fail.


___

+ + +
+Maya: Fix look assigner showing no asset if 'not found' representations are present #5597

Fix the Maya Look assigner failing to show any content if it finds an invalid container for which it can't find the asset in the current project. (This can happen when e.g. loading something from a library project.) There was already logic to avoid this, but it had a bug: it used a variable `_id` which did not exist and likely had to be `asset_id`. I've fixed that and improved the logged message a bit, e.g.:
```
// Warning: openpype.hosts.maya.tools.mayalookassigner.commands : Id found on 22 nodes for which no asset is found database, skipping '641d78ec85c3c5b102e836b0'
```
The issue isn't necessarily related to NOT FOUND representations; in essence it boils down to finding nodes with asset ids that do not exist in the current project, which could very well just be local meshes in your scene.

**Note:** I've excluded logging the nodes themselves because that tends to be a very long list of nodes. The only downside is that it's unclear which nodes are related to that `id`. If there are any ideas on how to still provide a concise informational message about that, that'd be great so I could add it. Things I had considered:
- Report the containers; the issue here is that asset ids are on nodes which don't HAVE to be in containers - it could be local geometry
- Report the namespaces; the issue here is that nodes could be without namespaces (plus it is potentially not about ALL nodes in a namespace)
- Report the short names of the nodes; it's shorter and readable but still likely a lot of nodes.

@tokejepsen @LiborBatek any other ideas?


___

+ + +
+Photoshop: fixed blank Flatten image #5600

Flatten image is a simplified publishing approach where all visible layers are "flattened" and published together; such an image can be used as a reference etc. It is implemented by an auto creator which wasn't updated after the first publish, which resulted in newly created layers missing after the `auto_image` instance was created.


___

+ + +
+Blender: Remove Hardcoded Subset Name for Reviews #5603 + +Fixes hardcoded subset name for Reviews in Blender. + + +___ + +
+ + +
+TVPaint: Fix tool callbacks #5608 + +Do not wait for callback to finish. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Chore: Remove unused variables and cleanup #5588

Removes some unused variables. In some cases the unused variables _seemed like they should've been used - maybe?_ so please **double check whether the code doesn't hint at an already existing bug**. Also tweaked some other small bugs in code and tweaked logging levels.


___

+ +### **Merged pull requests** + + +
+Chore: Loader log deprecation warning for 'fname' attribute #5587 + +Since https://github.com/ynput/OpenPype/pull/4602 the `fname` attribute on the `LoaderPlugin` should've been deprecated and set for removal over time. However, no deprecation warning was logged whatsoever and thus one usage appears to have sneaked in (fixed with this PR) and a new one tried to sneak in with a recent PR + + +___ + +
+ + + + +## [3.16.5](https://github.com/ynput/OpenPype/tree/3.16.5) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.4...3.16.5) + +### **🆕 New features** + + +
+Attribute Definitions: Multiselection enum def #5547 + +Added `multiselection` option to `EnumDef`. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Farm: adding target collector #5494 + +Enhancing farm publishing workflow. + + +___ + +
+ + +
+Maya: Optimize validate plug-in path attributes #5522 + +- Optimize query (use `cmds.ls` once) +- Add Select Invalid action +- Improve validation report +- Avoid "Unknown object type" errors + + +___ + +
+ + +
+Maya: Remove Validate Instance Attributes plug-in #5525 + +Remove Validate Instance Attributes plug-in. + + +___ + +
+ + +
+Enhancement: Tweak logging for artist facing reports #5537 + +Tweak the logging of publishing for global, deadline, maya and a fusion plugin to have a cleaner artist-facing report. +- Fix context being reported correctly from CollectContext +- Fix ValidateMeshArnoldAttributes: fix when arnold is not loaded, fix applying settings, fix for when ai attributes do not exist + + +___ + +
+ + +
+AYON: Update settings #5544 + +Updated settings in AYON addons and conversion of AYON settings in OpenPype. + + +___ + +
+ + +
+Chore: Removed Ass export script #5560 + +Removed Arnold render script, which was obsolete and unused. + + +___ + +
+ + +
+Nuke: Allow for knob values to be validated against multiple values. #5042 + +Knob values can now be validated against multiple values, so you can allow write nodes to be `exr` and `png`, or `16-bit` and `32-bit`. + + +___ + +
+ + +
+Enhancement: Cosmetics for Higher version of publish already exists validation error #5190

Fixes double spaces in the message. Example output **after** the PR:


___

+ + +
+Nuke: publish existing frames on farm #5409

This PR adds a fourth option to Nuke render publishing called "Use Existing Frames - Farm". It is useful when the farm is busy or when the artist lacks enough farm licenses; additionally, some artists prefer rendering on the farm but still want to check frames before publishing. The option gives artists more flexibility and control over the render publishing process.


___

+ + +
+Unreal: Create project in temp location and move to final when done #5476 + +Create Unreal project in local temporary folder and when done, move it to final destination. + + +___ + +
+ + +
+TrayPublisher: adding audio product type into default presets #5489 + +Adding Audio product type into default presets so anybody can publish audio to their shots. + + +___ + +
+ + +
+Global: avoiding cleanup of flagged representation #5502 + +Publishing folder can be flagged as persistent at representation level. + + +___ + +
+ + +
+General: missing tag could raise error #5511

Avoids a potential situation where a missing Tag key could raise an error.


___

+ + +
+Chore: Queued event system #5514

Implemented an event system with more predictable behavior: if an event is triggered during another event's callback, it is not processed immediately but waits until all callbacks of the previous event are done. The event system also allows not triggering events immediately when `emit_event` is called, which gives the option to process events in custom loops.


___

+ + +
+Publisher: Tweak log message to provide plugin name after "Plugin" #5521 + +Fix logged message for settings automatically applied to plugin attributes + + +___ + +
+ + +
+Houdini: Improve VDB Selection #5523

Improves VDB selection:
- if the selection is a `SopNode`: return the selected SOP node
- if the selection is an `ObjNode`: get the output node with the minimum 'outputidx', or the node with the display flag

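A rough sketch of that selection logic using the standard `hou` API (simplified, not the plugin's exact code):

```python
import hou


def get_output_sop(node):
    """Return the SOP node to cache from a user selection."""
    if isinstance(node, hou.SopNode):
        # The selection already is a SOP: use it directly.
        return node

    # The selection is an ObjNode: prefer the Output SOP with the
    # lowest 'outputidx', otherwise fall back to the display node.
    outputs = [
        child for child in node.children()
        if child.type().name() == "output"
    ]
    if outputs:
        return min(outputs, key=lambda n: n.evalParm("outputidx"))
    return node.displayNode()
```

___
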
+ + +
+Maya: Refactor/tweak Validate Instance In same Context plug-in #5526 + +- Chore/Refactor: Re-use existing select invalid and repair actions +- Enhancement: provide more elaborate PublishValidationError report +- Bugfix: fix "optional" support by using `OptionalPyblishPluginMixin` base class. + + +___ + +
+ + +
+Enhancement: Update houdini main menu #5527 + +This PR adds two updates: +- dynamic main menu +- dynamic asset name and task + + +___ + +
+ + +
+Houdini: Reset FPS when clicking Set Frame Range #5528

_Similar to Maya,_ make `Set Frame Range` reset the FPS. Issue: https://github.com/ynput/OpenPype/issues/5516


___

+ + +
+Enhancement: Deadline plugins optimize, cleanup and fix optional support for validate deadline pools #5531 + +- Fix optional support of validate deadline pools +- Query deadline webservice only once per URL for verification, and once for available deadline pools instead of for every instance +- Use `deadlineUrl` in `instance.data` when validating pools if it is set. +- Code cleanup: Re-use existing `requests_get` implementation + + +___ + +
+ + +
+Chore: PowerShell script for docker build #5535 + +Added PowerShell script to run docker build. + + +___ + +
+ + +
+AYON: Deadline expand userpaths in executables list #5540

Expand `~` paths in the executables list.


___

+ + +
+Chore: Use correct git url #5542 + +Fixed github url in README.md. + + +___ + +
+ + +
+Chore: Create plugin does not expect system settings #5553

System settings are no longer passed to create plugin initialization (and `apply_settings`).


___

+ + +
+Chore: Allow custom Qt scale factor rounding policy #5555 + +Do not force `PassThrough` rounding policy if different policy is defined via env variable. + + +___ + +
+ + +
+Houdini: Fix outdated containers pop-up on opening last workfile on launch #5567 + +Fix Houdini not showing outdated containers pop-up on scene open when launching with last workfile argument + + +___ + +
+ + +
+Houdini: Improve errors e.g. raise PublishValidationError or cosmetics #5568

Improve errors, e.g. raise PublishValidationError, plus cosmetics. This also fixes the Increment Current File plug-in, which was previously broken due to an invalid import.


___

+ + +
+Fusion: Code updates #5569

Updates Fusion code that contained obsolete parts. Removed the `switch_ui.py` script from Fusion along with its related script in scripts.


___

+ +### **🐛 Bug fixes** + + +
+Maya: Validate Shape Zero fix repair action + provide informational artist-facing report #5524 + +Refactor to PublishValidationError to allow the RepairAction to work + provide informational report message + + +___ + +
+ + +
+Maya: Fix attribute definitions for `CreateYetiCache` #5574 + +Fix attribute definitions for `CreateYetiCache` + + +___ + +
+ + +
+Max: Optional Renderable Camera Validator for Render Instance #5286

Optional validation to check that renderable cameras are set up correctly for Deadline submission. If not set up correctly, the validation fails and the user can perform repair actions.


___

+ + +
+Max: Adding custom modifiers back to the loaded objects #5378

The custom parameters (OpenpypeData) didn't show in the loaded container when it was loaded through the loader; they are now added back.


___

+ + +
+Houdini: Use default_variant to Houdini Node TAB Creator #5421 + +Use the default variant of the creator plugins on the interactive creator from the TAB node search instead of hard-coding it to `Main`. + + +___ + +
+ + +
+Nuke: adding inherited colorspace from instance #5454 + +Thumbnails are extracted with inherited colorspace collected from rendering write node. + + +___ + +
+ + +
+Add kitsu credentials to deadline publish job #5455 + +This PR hopefully fixes this issue #5440 + + +___ + +
+ + +
+AYON: Fill entities during editorial #5475 + +Fill entities and update template data on instances during extract AYON hierarchy. + + +___ + +
+ + +
+Ftrack: Fix version 0 when integrating to Ftrack - OP-6595 #5477 + +Fix publishing version 0 to Ftrack. + + +___ + +
+ + +
+OCIO: windows unc path support in Nuke and Hiero #5479

Hiero and Nuke were not supporting Windows UNC path formatting in the OCIO environment variable.


___

+ + +
+Deadline: Added super call to init #5480

Deadline 10.3 requires plugins inheriting from DeadlinePlugin to call the superclass `__init__` explicitly.


___

+ + +
+Nuke: fixing thumbnail and monitor out root attributes #5483

The Nuke root colorspace settings schema for Thumbnail and Monitor Out changed gradually between versions 12, 13 and 14, and we needed to address those changes individually for each version.


___

+ + +
+Nuke: fixing missing `instance_id` error #5484

Workfiles with instances created in the old publisher workflow raised errors during the conversion method because they were missing the `instance_id` key introduced in the new publisher workflow.


___

+ + +
+Nuke: existing frames validator is repairing render target #5486 + +Nuke is now correctly repairing render target after the existing frames validator finds missing frames and repair action is used. + + +___ + +
+ + +
+added UE to extract burnins families #5487 + +This PR fixes missing burnins in reviewables when rendering from UE. +___ + +
+ + +
+Harmony: refresh code for current Deadline #5493 + +- Added support in Deadline Plug-in for new versions of Harmony, in particular version 21 and 22. +- Remove review=False flag on render instance +- Add farm=True flag on render instance +- Fix is_in_tests function call in Harmony Deadline submission plugin +- Force HarmonyOpenPype.py Deadline Python plug-in to py3 +- Fix cosmetics/hound in HarmonyOpenPype.py Deadline Python plug-in + + +___ + +
+ + +
+Publisher: Fix multiselection value #5505

Selecting multiple instances in the Publisher no longer causes all instances to change all publish attributes to the same value.


___

+ + +
+Publisher: Avoid warnings on thumbnails if source image also has alpha channel #5510 + +Avoids the following warning from `ExtractThumbnailFromSource`: +``` +// pyblish.ExtractThumbnailFromSource : oiiotool WARNING: -o : Can't save 4 channels to jpeg... saving only R,G,B +``` + + + +___ + +
+ + +
+Update ayon-python-api #5512 + +Update ayon python api and related callbacks. + + +___ + +
+ + +
+Max: Fixing the bug of falling back to use workfile for Arnold or any renderers except Redshift #5520

Fixes the bug of falling back to using the workfile for Arnold (and any renderer except Redshift).


___

+ + +
+General: Fix Validate Publish Dir Validator #5534

A nonsensical "family" key was used instead of the real value (such as 'render'), which resulted in wrong translation of intermediate family names. Updated the docstring.


___

+ + +
+have the addons loading respect a custom AYON_ADDONS_DIR #5539

When using a custom AYON_ADDONS_DIR environment variable, the launcher uses it correctly and downloads and extracts addons there; however, when running, AYON did not respect this environment variable for addon loading. It does now.


___

+ + +
+Deadline: files on representation cannot be single item list #5545

Downstream logic expects that a single-item 'files' value is a plain string, not a list (e.g. repre["files"] = "abc.exr", not repre["files"] = ["abc.exr"]). Violating this caused an issue in ExtractReview later. It could happen when Deadline rendered a single-frame file with a different frame value.

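A minimal sketch of the normalization (a hypothetical helper):

```python
def normalize_repre_files(repre):
    """Ensure a single-item 'files' value is a plain string."""
    files = repre["files"]
    if isinstance(files, list) and len(files) == 1:
        repre["files"] = files[0]
    return repre


print(normalize_repre_files({"files": ["abc.exr"]}))         # {'files': 'abc.exr'}
print(normalize_repre_files({"files": ["a.exr", "b.exr"]}))  # unchanged
```

___
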
+ + +
+Webpublisher: better encode list values for click #5546

Targets can be a list; the original implementation pushed them as separate items, but they must be added as repeated flags: `--targets webpublish --targets filepublish`. `webpublish_routes` handles triggering from the UI; changes in `publish_functions` handle triggering from the command line (for tests and API access).

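For reference, a minimal standalone click command that accepts repeated `--targets` flags (a sketch, not the webpublisher CLI itself):

```python
import click


@click.command()
@click.option("--targets", multiple=True, help="Publish target; repeatable.")
def publish(targets):
    # click collects repeated options into a tuple, so
    # `publish --targets webpublish --targets filepublish`
    # yields ('webpublish', 'filepublish').
    click.echo("Targets: {}".format(", ".join(targets)))


if __name__ == "__main__":
    publish()
```

___
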
+ + +
+Houdini: Introduce imprint function for correct version in hda loader #5548 + +Resolve #5478 + + +___ + +
+ + +
+AYON: Fill entities during editorial (2) #5549 + +Fix changes made in https://github.com/ynput/OpenPype/pull/5475. + + +___ + +
+ + +
+Max: OP Data updates in Loaders #5563

Fixes the bug of loaders not being able to load objects when iterating keys and values of the dict. 3ds Max prefers a plain list over a list inside a dict.


___

+ + +
+Create Plugins: Better check of overriden '__init__' method #5571

Create plugins no longer log warning messages about each create plugin, which happened because of a wrong `__init__` method override check.


___

+ +### **Merged pull requests** + + +
+Tests: fix unit tests #5533

Fixed failing tests. Updated Unreal's validator to match the removed general one, which had a couple of issues fixed.


___

+ + + + +## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.3...3.16.4) + +### **🆕 New features** + + +
+Feature: Download last published workfile specify version #4998

Setting the `workfile_version` key in a hook's `self.launch_context.data` allows you to specify the workfile version you want the sync service to download if none is matched locally. This is helpful if the last version hasn't been correctly published/synchronized and you want to recover the previous one (or any other you'd like). The version can be set in two ways:
- OP's absolute version, matching the `version` index in the DB.
- A relative version in reverse order from the last one: `-2`, `-3`, ...

I don't know where I should write documentation about that. A sketch of such a hook follows below.

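A minimal sketch of a prelaunch hook setting the key; the class name is illustrative and the import path is assumed from OpenPype's application hooks:

```python
# Illustrative sketch; the hook class name is hypothetical.
from openpype.lib.applications import PreLaunchHook


class ForcePreviousWorkfileVersion(PreLaunchHook):
    """Ask the sync service for the second-to-last workfile version."""

    def execute(self):
        # Either an absolute version (e.g. 12, matching the `version`
        # index in the DB) or a negative relative index: -2 means
        # "the one before the last".
        self.launch_context.data["workfile_version"] = -2
```

___
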
+ +### **🚀 Enhancements** + + +
+Maya: allow not creation of group for Import loaders #5427

This PR enhances the previous one. All Reference loaders can now avoid wrapping imported products in an explicit group, and `Import` loaders get the same options. The control is separate in Settings, e.g. Reference might wrap loaded items in a group while `Import` might not.


___

+ + +
+3dsMax: Settings for Ayon #5388 + +Max Addon Setting for Ayon + + +___ + +
+ + +
+General: Navigation to Folder from Launcher #5404 + +Adds an action in launcher to open the directory of the asset. + + +___ + +
+ + +
+Chore: Default variant in create plugin #5429

The `default_variant` attribute on create plugins now always returns a string, and if the default variant is not filled, other ways of getting one are implemented.


___

+ + +
+Publisher: Thumbnail widget enhancements #5439

The thumbnail widget in the Publisher has 3 new options: Paste (from clipboard), Take screenshot and Browse. The Clear button and the new options are not visible by default; the user must expand the options button to show them.


___

+ + +
+AYON: Update ayon api to '0.3.5' #5460 + +Updated ayon-python-api to 0.3.5. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+AYON: Apply unknown ayon settings first #5435 + +Settings of custom addons are available in converted settings. + + +___ + +
+ + +
+Maya: Fix wrong subset name of render family in deadline #5442

The new publisher creates different subset names than before, which resulted in duplication of the `render` string in the final subset name of the `render` family published on Deadline. This PR solves that; it also fixes issues with legacy instances from the old publisher, matching the subset name as it was before. It solves the same issue in the Max implementation.


___

+ + +
+Maya: Fix setting of version to workfile instance #5452

If multiple renderlayer instances were published, the previous logic resulted in an unpredictable rewrite of the instance family to 'workfile' when `Sync render version with workfile` was on.


___

+ + +
+Maya: Context plugin shouldn't be tied to family #5464

The `Maya Current File` collector was unnecessarily tied to `workfile`. It should run even if the `workfile` instance is not being published.


___

+ + +
+Unreal: Fix loading hero version for static and skeletal meshes #5393

Fixed a problem with loading hero versions for static and skeletal meshes.


___

+ + +
+TVPaint: Fix 'repeat' behavior #5412 + +Calculation of frames for repeat behavior is working correctly. + + +___ + +
+ + +
+AYON: Thumbnails cache and api prep #5437 + +Moved thumbnails cache from ayon python api to OpenPype and prepare AYON thumbnail resolver for new api functions. Current implementation should work with old and new ayon-python-api. + + +___ + +
+ + +
+Nuke: Name of the Read Node should be updated correctly when switching versions or assets. #5444

Fixes the Read node's name not being updated correctly when setting the version or switching assets.


___

+ + +
+Farm publishing: asymmetric handles fixed #5446

Handles are now set correctly on farm-published product versions if asymmetric handles were set on shot attributes.


___

+ + +
+Scene Inventory: Provider icons fix #5450 + +Fix how provider icons are accessed in scene inventory. + + +___ + +
+ + +
+Fix typo on Deadline OP plugin name #5453 + +Surprised that no one has hit this bug yet... but it seems like there was a typo on the name of the OP Deadline plugin when submitting jobs to it. + + +___ + +
+ + +
+AYON: Fix version attributes update #5472 + +Fixed updates of attribs in AYON mode. + + +___ + +
+ +### **Merged pull requests** + + +
+Added missing defaults for import_loader #5447 + + +___ + +
+ + +
+Bug: Local settings don't open on 3.14.7 #5220 + +### Before posting a new ticket, have you looked through the documentation to find an answer? + +Yes I have + +### Have you looked through the existing tickets to find any related issues ? + +Not yet + +### Author of the bug + +@FadyFS + +### Version + +3.15.11-nightly.3 + +### What platform you are running OpenPype on? + +Linux / Centos + +### Current Behavior: + +the previous behavior (bug) : +![image](https://github.com/quadproduction/OpenPype/assets/135602303/09bff9d5-3f8b-4339-a1e5-30c04ade828c) + + +### Expected Behavior: + +![image](https://github.com/quadproduction/OpenPype/assets/135602303/c505a103-7965-4796-bcdf-73bcc48a469b) + + +### What type of bug is it ? + +Happened only once in a particular configuration + +### Which project / workfile / asset / ... + +open settings with 3.14.7 + +### Steps To Reproduce: + +1. Run openpype on the 3.15.11-nightly.3 version +2. Open settings in 3.14.7 version + +### Relevant log output: + +_No response_ + +### Additional context: + +_No response_ + +___ + +
+ + +
+Tests: Add automated targets for tests #5443 + +Without it plugins with 'automated' targets won't be triggered (eg `CloseAE` etc.) + + +___ + +
+ + + + +## [3.16.3](https://github.com/ynput/OpenPype/tree/3.16.3) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.2...3.16.3) + +### **🆕 New features** + + +
+AYON: 3rd party addon usage #5300

Prepares the OpenPype code to use the `ayon-third-party` addon, which supplies ffmpeg and OpenImageIO executables. Because both can define custom arguments (more than one), new functions were needed: `get_ffmpeg_tool_args` and `get_oiio_tool_args`. They work like the previous functions but return a list of strings instead of a single string. All places using the previous `get_ffmpeg_tool_path` and `get_oiio_tool_path` now use the new ones. They should be backwards compatible, even with an addon that returns a single argument.


___

+ + +
+AYON: Addon settings in OpenPype #5347 + +Moved settings addons to OpenPype server addon. Modified create package to create zip files for server for each settings addon and for openpype addon. + + +___ + +
+ + +
+AYON: Add folder to template data #5417 + +Added `folder` to template data, so `{folder[name]}` can be used in templates. + + +___ + +
+ + +
+Option to start versioning from 0 #5262

This PR adds a settings option to start all versioning from 0. It replaces #4455.


___

+ + +
+Ayon: deadline implementation #5321 + +Quick implementation of deadline in Ayon. New Ayon plugin added for Deadline repository + + +___ + +
+ + +
+AYON: Remove AYON launch logic from OpenPype #5348 + +Removed AYON launch logic from OpenPype. The logic is outdated at this moment and is replaced by `ayon-launcher`. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Bug: Error on multiple instance rig with maya #5310

Changed the `endswith` check to `startswith` because the sets are automatically named out_SET, out_SET1, out_SET2, ...

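A minimal sketch of the change with hypothetical set names:

```python
set_names = ["out_SET", "out_SET1", "out_SET2", "controls_SET"]

# Old check missed Maya's auto-renamed duplicates:
print([n for n in set_names if n.endswith("out_SET")])
# ['out_SET']

# New check matches all of them:
print([n for n in set_names if n.startswith("out_SET")])
# ['out_SET', 'out_SET1', 'out_SET2']
```

___
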
+ + +
+Applications: Use prelaunch hooks to extract environments #5387

Environment variable preparation is based on prelaunch hooks. This should allow passing OCIO environment variables to farm jobs.


___

+ + +
+Applications: Launch hooks cleanup #5395 + +Use `set` instead of `list` for filtering attributes in launch hooks. Celaction hooks dir does not contain `__init__.py`. Celaction prelaunch hook is reusing `CELACTION_ROOT_DIR`. Launch hooks are using full import from `openpype.lib.applications`. + + +___ + +
+ + +
+Applications: Environment variables order #5245

Changed the order in which environment variables are set: context environment variables first, then project environment overrides. Asset and task environment variables are also optional now.


___

+ + +
+Autosave preferences can be read after Nuke opens the script #5295

It looks like the script needs to be opened in Nuke before the autosave preferences can be loaded correctly. This PR reads the Nuke script in context and offers overwriting the current script with the autosaved one if an autosave exists.


___

+ + +
+Resolve: Update with compatible resolve version and latest docs #5317

Adds the missing information about compatible Resolve versions and the latest docs to https://github.com/ynput/OpenPype/tree/develop/openpype/hosts/resolve


___

+ + +
+Chore: Remove deprecated functions #5323 + +Removed functions/classes that are deprecated and marked to be removed. + + +___ + +
+ + +
+Nuke Render and Prerender nodes Process Order - OP-3555 #5332

This PR exposes control over the processing order of instances by sorting the created instances. The sorting happens on `render_order` and subset name: if the `render_order` knob is found on the instance, we sort by that first before sorting by subset name, and `render_order` instances are processed before nodes without `render_order`. This could be extended in the future by querying other knobs, but I don't know of a use case. Also hardcoded the creator `order` attribute of the `prerender` class to be before `render`; this could be exposed to the user/studio, but I don't know of a use case for that either.


___

+ + +
+Unreal: Python Environment Improvements #5344 + +Automatically set `UE_PYTHONPATH` as `PYTHONPATH` when launching Unreal. + + +___ + +
+ + +
+Unreal: Custom location for Unreal Ayon Plugin #5346 + +Added a new environment variable `AYON_BUILT_UNREAL_PLUGIN` to set an already existing and built Ayon Plugin for Unreal. + + +___ + +
+ + +
+Unreal: Better handling of Exceptions in UE Worker threads #5349 + +Implemented a new `UEWorker` base class to handle exception during the execution of UE Workers. + + +___ + +
+ + +
+Houdini: Add farm toggle on creation menu #5350
+
+Deadline farm publishing and rendering for Houdini was made possible by PR #4825, where farm publishing is enabled by default on some ROP nodes, which may surprise new users (like me). Adding a toggle (on by default) to the creation UI is better, so that users are aware that there's a farm option for the publish instance (see the sketch after this list). ROPs modified:
+- [x] Mantra ROP
+- [x] Karma ROP
+- [x] Arnold ROP
+- [x] Redshift ROP
+- [x] Vray ROP
+
+
+___
+
+&#13;
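+
+A minimal sketch of what such a toggle looks like as a creator attribute definition; the attribute name, label and placement on the creator class are assumptions based on OpenPype's attribute definitions, not necessarily the PR's exact code:
+
+```python
+from openpype.lib import BoolDef
+
+
+# A method like this on the Houdini creator class exposes the toggle
+# in the publisher's creation UI, enabled by default.
+def get_instance_attr_defs(self):
+    return [
+        BoolDef("farm", label="Submitting to Farm", default=True),
+    ]
+```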
+ + +
+Ftrack: Sync to avalon settings #5353 + +Added roles settings for sync to avalon action. + + +___ + +
+ + +
+Chore: Schemas inside OpenPype #5354 + +Moved/copied schemas from repository root inside openpype/pipeline. + + +___ + +
+ + +
+AYON: Addons creation enhancements #5356
+
+Enhanced AYON addon creation. Fixed an issue with the `Pattern` typehint. Zip filenames now contain the version. The OpenPype package skips modules that are already separated in AYON. Updated settings of addons.
+
+
+___
+
+&#13;
+ + +
+AYON: Update staging icons #5372 + +Updated staging icons for staging mode. + + +___ + +
+ + +
+Enhancement: Houdini Update pointcache labels #5373
+
+To me it's logical to find pointcache types listed one after another, but they were named differently. So, I made this PR to update their labels.
+
+
+___
+
+&#13;
+ + +
+nuke: split write node product instance features #5389 + +Improving Write node product instances by allowing precise activation of specific features. + + +___ + +
+ + +
+Max: Use the empty modifiers in container to store AYON Parameter #5396
+
+Instead of adding the AYON/OP Parameter along with other attributes inside the container, empty modifiers are now created to store the AYON/OP custom attributes.
+
+
+___
+
+&#13;
+ + +
+AfterEffects: Removed unused imports #5397 + +Removed unused import from extract local render plugin file. + + +___ + +
+ + +
+Nuke: adding BBox knob type to settings #5405
+
+Nuke knob types in settings have a new `Box` type for reposition nodes like Crop or Reformat.
+
+
+___
+
+&#13;
+ + +
+SyncServer: Existence of module is optional #5413
+
+Existence of the SyncServer module is optional and not required. Added the `sync_server` module back to ignored modules when the openpype addon is created for AYON. The `syncserver` command is marked as deprecated and redirected to the sync server CLI.
+
+
+___
+
+&#13;
+ + +
+Webpublisher: Self contain test publish logic #5414 + +Moved test logic of publishing to webpublisher. Simplified `remote_publish` to remove webpublisher specific logic. + + +___ + +
+ + +
+Webpublisher: Cleanup targets #5418 + +Removed `remote` target from webpublisher and replaced it with 2 targets `webpublisher` and `automated`. + + +___ + +
+ + +
+nuke: update server addon settings with box #5419
+
+Updating Nuke AYON server settings for the Box option in knob types.
+
+
+___
+
+&#13;
+ +### **🐛 Bug fixes** + + +
+Maya: fix validate frame range on review attached to other instances #5296
+
+Fixes a situation where the frame range validator can't be turned off on models if they are attached to a reviewable camera in Maya.
+
+
+___
+
+&#13;
+ + +
+Maya: Apply project settings to creators #5303 + +Project settings were not applied to the creators. + + +___ + +
+ + +
+Maya: Validate Model Content #5336
+
+`assemblies` in `cmds.ls` does not seem to work:
+```python
+from maya import cmds
+
+content_instance = ['|group2|pSphere1_GEO', '|group2|pSphere1_GEO|pSphere1_GEOShape', '|group1|pSphere1_GEO', '|group1|pSphere1_GEO|pSphere1_GEOShape']
+# Expected to return the root assemblies ('|group1', '|group2') for these
+# paths, but does not behave as expected.
+assemblies = cmds.ls(content_instance, assemblies=True, long=True)
+print(assemblies)
+```
+
+Fixing with string splitting instead (see the sketch below).
+
+
+___
+
+&#13;
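+
+A rough sketch of the string-splitting approach in plain Python (not necessarily the exact code from the PR): the root assembly is simply the first component of each full DAG path.
+
+```python
+content_instance = [
+    "|group2|pSphere1_GEO",
+    "|group2|pSphere1_GEO|pSphere1_GEOShape",
+    "|group1|pSphere1_GEO",
+    "|group1|pSphere1_GEO|pSphere1_GEOShape",
+]
+
+# "|group2|pSphere1_GEO".split("|") -> ["", "group2", "pSphere1_GEO"]
+assemblies = {"|" + path.split("|")[1] for path in content_instance}
+print(assemblies)  # {'|group1', '|group2'}
+```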
+ + +
+Bugfix: Maya update defaults variable #5368
+
+Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was renamed to `default_variants` in `NewCreator`, so setting `defaults` to any value has no effect! This update affects:
+- [x] Model
+- [x] Set Dress
+
+
+___
+
+&#13;
+ + +
+Chore: Python 2 support fix #5375
+
+Fix Python 2 support by adding `click` to the Python 2 dependencies and removing an f-string from Maya.
+
+
+___
+
+&#13;
+ + +
+Maya: do not create top level group on reference #5402
+
+This PR makes it possible to not wrap loaded referenced assets in a top-level group, either explicitly per artist or by configuration in Settings. Artists can control group creation in the ReferenceLoader options. No group creation by default can be set by emptying `Group Name` in `project_settings/maya/load/reference_loader`.
+
+
+___
+
+&#13;
+ + +
+Settings: Houdini & Maya create plugin settings #5436 + +Fixes related to Maya and Houdini settings. Renamed `defaults` to `default_variants` in plugin settings to match attribute name on create plugin in both OpenPype and AYON settings. Fixed Houdini AYON settings where were missing settings for defautlt varaints and fixed Maya AYON settings where default factory had wrong assignment. + + +___ + +
+ + +
+Maya: Hide CreateAnimation #5297
+
+When converting the `animation` family or loading a `rig` family, we need to include the `animation` creator but hide it in the creator context.
+
+
+___
+
+&#13;
+ + +
+Nuke Anamorphic slate - Read pixel aspect from input #5304
+
+When the asset pixel aspect differs from the rendered pixel aspect, the Nuke slate pixel aspect is no longer taken from the asset, but is read via ffprobe.
+
+
+___
+
+&#13;
+ + +
+Nuke - Allow ExtractReviewDataMov with no timecode knob #5305
+
+ExtractReviewDataMov allows specifying the file type. Trying to write an extension other than mov failed in generate_mov, which assumed that the mov64_write_timecode knob exists.
+
+
+___
+
+&#13;
+ + +
+Nuke: removing settings schema with defaults for OpenPype #5306 + +continuation of https://github.com/ynput/OpenPype/pull/5275 + + +___ + +
+ + +
+Bugfix: Dependency without 'inputLinks' not downloaded #5337 + +Remove condition that avoids downloading dependency without `inputLinks`. + + +___ + +
+ + +
+Bugfix: Houdini Creator use selection even if it was toggled off #5359
+
+When creating many product types (families) one after another without manually refreshing the creator window, once you toggled `Use selection` on, all later product types would use the selection even if it was toggled off, unless you refreshed the window manually.
+
+Before (keeps using the selection even after it was toggled off): https://github.com/ynput/OpenPype/assets/20871534/8b890122-5b53-4c6b-897d-6a2f3aa3388a
+
+After (works as expected): https://github.com/ynput/OpenPype/assets/20871534/6b1db990-de1b-428e-8828-04ab59a44e28
+
+
+___
+
+&#13;
+ + +
+Houdini: Correct camera selection for karma renderer when using selected node #5360
+
+When a user created the Karma ROP with a camera selected via "use selection", it gave the error message "no render camera found in selection". This PR fixes the bug of creating a Karma ROP from a selected camera node in Houdini.
+
+
+___
+
+&#13;
+ + +
+AYON: Environment variables and functions #5361
+
+Prepare code for ayon-launcher compatibility. Fixed ayon launcher subprocess calls, added more checks for `AYON_SERVER_ENABLED`, used ayon-launcher-suitable environment variables in AYON mode and changed outputs of some functions. Replaced usages of the `OPENPYPE_REPOS_ROOT` environment variable with the `PACKAGE_DIR` variable, so correct paths are used.
+
+
+___
+
+&#13;
+ + +
+Nuke: farm rendering of prerender ignore roots in nuke #5366
+
+The `prerender` family was using the wrong subset, the same one as `render`, when it should be different.
+
+
+___
+
+&#13;
+ + +
+Bugfix: Houdini update defaults variable #5367
+
+Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was renamed to `default_variants` in `NewCreator`, so setting `defaults` to any value has no effect! This update affects:
+- [x] Arnold ASS
+- [x] Arnold ROP
+- [x] Karma ROP
+- [x] Mantra ROP
+- [x] Redshift ROP
+- [x] VRay ROP
+
+
+___
+
+&#13;
+ + +
+Publisher: Fix create/publish animation #5369 + +Use geometry movement instead of changing min/max width. + + +___ + +
+ + +
+Unreal: Move unreal splash screen to unreal #5370 + +Moved splash screen code to unreal integration and removed import from Igniter. + + +___ + +
+ + +
+Nuke: returned not cleaning of renders folder on the farm #5374
+
+A previous PR enabled explicit cleanup of the `renders` folder after farm publishing. This does not match customers' workflows: customers want access to the files in the `renders` folder, to potentially redo some frames of long frame sequences. This PR extends the logic to mark rendered files for deletion only if the instance doesn't have `stagingDir_persistent`. For backwards compatibility, all Nuke instances have `stagingDir_persistent` set to True, i.e. the `renders` folder won't be cleaned after a farm publish.
+
+
+___
+
+&#13;
+ + +
+Nuke: loading sequences is working #5376
+
+Loading image sequences was broken after the latest release, version 3.16. It is now functioning as expected.
+
+
+___
+
+&#13;
+ + +
+AYON: Fix settings conversion for ayon addons #5377
+
+AYON addon settings are available in system settings and do not have the same values available under the `"modules"` subkey.
+
+
+___
+
+&#13;
+ + +
+Nuke: OCIO env var workflow #5379 + +The OCIO environment variable needs to be consistently handled across all platforms. Nuke resolves the custom OCIO config path differently depending on the platform, so we included the ocio config path in the workfile with a partial replacement using an environment variable. Additionally, for Windows sessions, we replaced backward slashes with a TCL expression. + + +___ + +
+ + +
+Unreal: Fix Unreal build script #5381 + +Define 'AYON_UNREAL_ROOT' environment variable in unreal addon. + + +___ + +
+ + +
+3dsMax: Use relative path to MAX_HOST_DIR #5382
+
+Use `MAX_HOST_DIR` to calculate the startup script path instead of using a path relative to the `OPENPYPE_ROOT` environment variable.
+
+
+___
+
+&#13;
+ + +
+Bugfix: Houdini abc validator error message #5386
+
+When the ABC path validator failed, it printed node objects, not node paths or names. This bug happened because the `get_invalid` method was updated to return nodes instead of node paths.
+
+
+___
+
+&#13;
+ + +
+Nuke: node name influence product (subset) name #5392
+
+Nuke now allows users to duplicate publishing instances, making the workflow easier. By duplicating a node and changing its name, users can set the product (subset) name in the publishing context. Users can also change the variant name in the Publisher, which will automatically rename the associated instance node.
+
+
+___
+
+&#13;
+ + +
+Houdini: delete redundant bgeo sop validator #5394
+
+I found out that the `Validate BGEO SOP Path` validator is redundant: it catches two cases that are already covered by "Validate Output Node". "Validate Output Node" works with `bgeo` as well as `abc`, because `"pointcache"` is listed in its families.
+
+
+___
+
+&#13;
+ + +
+Nuke: workfile is not reopening after change of context #5399 + +Nuke no longer reopens the latest workfile when the context is changed to a different task using the Workfile tool. The issue also affected the Script Clean (from Nuke File menu) and Close feature, but it has now been fixed. + + +___ + +
+ + +
+Bugfix: houdini hard coded project settings #5400
+
+This PR solves the issue of hard-coded project settings in Houdini.
+
+
+___
+
+&#13;
+ + +
+AYON: 3dsMax settings #5401 + +Keep `adsk_3dsmax` group in applications settings. + + +___ + +
+ + +
+Bugfix: update defaults to default_variants in maya and houdini OP DCC settings #5407
+
+When moving to the new creator in Maya and Houdini, updating these settings was missed.
+
+
+___
+
+&#13;
+ + +
+Applications: Attributes creation #5408
+
+The Applications addon no longer causes an infinite server restart loop.
+
+
+___
+
+&#13;
+ + +
+Max: fix the bug of handling Object deletion in OP Parameter #5410
+
+If an object was added to the OP parameter and the user deleted it in the scene afterwards, the container with OP attributes errored out. This PR resolves that bug. It also fixes the bug where the attribute was not added to the OP parameter correctly when the user enabled "use selections" to link the object into the OP parameter.
+
+
+___
+
+&#13;
+ + +
+Colorspace: including environments from launcher process #5411
+
+Fixed a bug from a previous GitHub PR where the OCIO config template was not properly formatting environment variables from System Settings `general/environment`.
+
+
+___
+
+&#13;
+ + +
+Nuke: workfile template fixes #5428
+
+A bunch of small bugs that needed to be fixed.
+
+
+___
+
+&#13;
+ + +
+Houdini, Max: Fix missed function interface change #5430
+
+PR https://github.com/ynput/OpenPype/pull/5321/files from @kalisp missed updating `add_render_job_env_var` in Houdini and Max, as they pass an extra argument:
+```
+TypeError: add_render_job_env_var() takes 1 positional argument but 2 were given
+```
+
+
+___
+
+&#13;
+ + +
+Scene Inventory: Fix issue with 'sync_server' #5431
+
+Fix access to the `sync_server` attribute in the scene inventory.
+
+
+___
+
+&#13;
+ + +
+Unpack project: Fix import issue #5433 + +Added `load_json_file`, `replace_project_documents` and `store_project_documents` to mongo init. + + +___ + +
+ + +
+Chore: Versions post fixes #5441
+
+Fixed issues caused by my previous changes. Filled the right version value into anatomy data.
+
+
+___
+
+&#13;
+ +### **📃 Testing** + + +
+Tests: Copy file_handler as it will be removed by purging ayon code #5357 + +Ayon code will get purged in the future from this repo/addon, therefore all `ayon_common` will be gone. `file_handler` gets internalized to tests as it is not used anywhere else. + + +___ + +
+ + + + +## [3.16.2](https://github.com/ynput/OpenPype/tree/3.16.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.1...3.16.2) + +### **🆕 New features** + + +
+Fusion - Set selected tool to active #5327
+
+When you run the action to select a node, this PR makes the node-flow show the selected node, and you'll see the node's controls in the inspector.
+
+
+___
+
+&#13;
+ +### **🚀 Enhancements** + + +
+Maya: All base create plugins #5326 + +Prepared base classes for each creator type in Maya. Extended `MayaCreatorBase` to have default implementations of common logic with instances which is used in each type of plugin. + + +___ + +
+ + +
+Windows: Support long paths on zip updates. #5265
+
+Support long paths for version extraction on Windows. The use case is having long paths in, for example, an addon. You can install to the C drive, but because the zip files are extracted in the local user's folder, additional sub-directories are added to the paths, which quickly become too long for Windows to handle the zip updates.
+
+
+___
+
+&#13;
+ + +
+Blender: Added setting to set resolution and start/end frames at startup #5338
+
+This PR adds the `set_resolution_startup` and `set_frames_startup` settings. They automatically set, respectively, the resolution and the start/end frames and FPS in Blender when opening a file or creating a new one.
+
+
+___
+
+&#13;
+ + +
+Blender: Support for ExtractBurnin #5339 + +This PR adds support for ExtractBurnin for Blender, when publishing a Review. + + +___ + +
+ + +
+Blender: Extract Camera as Alembic #5343 + +Added support to extract Alembic Cameras in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Instance In Context #5335
+
+Added the missing new publisher error so the repair action shows up.
+
+
+___
+
+&#13;
+ + +
+Settings: Fix default settings #5311
+
+Fixed default settings for shotgrid. Renamed `FarmRootEnumEntity` to `DynamicEnumEntity` and removed a doubled ABC metaclass definition (all settings entities have an abstract metaclass).
+
+
+___
+
+&#13;
+ + +
+Deadline: missing context argument #5312 + +Updated function arguments + + +___ + +
+ + +
+Qt UI: Multiselection combobox PySide6 compatibility #5314 + +- The check states are replaced with the values for PySide6 +- `QtCore.Qt.ItemIsUserTristate` is used instead of `QtCore.Qt.ItemIsTristate` to avoid crashes on PySide6 + + +___ + +
+ + +
+Docker: handle openssl 1.1.1 for centos 7 docker build #5319
+
+The move to Python 3.9 added the need for OpenSSL 1.1.x, which is not available by default on the CentOS 7 image. This is fixing it.
+
+
+___
+
+&#13;
+ + +
+houdini: fix typo in redshift proxy #5320
+
+I believe there's a typo (an extra backtick) in the filename in `create_redshift_proxy.py`, and I made this PR to suggest a fix.
+
+
+___
+
+&#13;
+ + +
+Houdini: fix wrong creator identifier in pointCache workflow #5324
+
+Fixing a bug in publishing alembics, where an invalid creator identifier caused a missing family association.
+
+
+___
+
+&#13;
+ + +
+Fix colorspace compatibility check #5334
+
+For some reason a user may have `PyOpenColorIO` installed on their machine; _in my case it came with RenderMan._ It can trick the compatibility check, as `import PyOpenColorIO` won't raise an error even though it may be an old version, _as in my case_. Before the fix, the compatibility check passed and the wrapper was used directly. After the fix, the wrapper is used via subprocess instead.
+
+
+___
+
+&#13;
+ +### **Merged pull requests** + + +
+Remove forgotten dev logging #5315 + + +___ + +
+ + + + +## [3.16.1](https://github.com/ynput/OpenPype/tree/3.16.1) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.0...3.16.1) + +### **🆕 New features** + + +
+Royal Render: Maya and Nuke support #5191
+
+Basic working implementation of Royal Render support in Maya. It expects the new publisher to be implemented in Maya.
+
+
+___
+
+&#13;
+ + +
+Blender: Blend File Family #4321
+
+Implementation of the Blend File family, analogous to the Maya Scene one.
+
+
+___
+
+&#13;
+ + +
+Houdini: simple bgeo publishing #4588
+
+Support for simple publishing of bgeo files.
+
+This adds basic support for bgeo publishing in Houdini. It allows publishing bgeo in all supported formats (selectable in the creator options). If the selected node has an `output` node on the SOP level, it will automatically be used as the path in the file node.
+
+
+___
+
+&#13;
+ +### **🚀 Enhancements** + + +
+General: delivery action add renamed frame number in Loader #5024 + +Frame Offset options for delivery in Openpype loader + + +___ + +
+ + +
+Enhancement/houdini add path action for abc validator #5237
+
+Adds a default path attribute action. It's a helper action more than a repair action, used to add a default single value.
+
+
+___
+
+&#13;
+ + +
+Nuke: auto apply all settings after template build #5277
+
+Adding an automatic run of Apply All Settings after the template builder finishes its process. This will apply the frame range, image size and colorspace found in the context of a task shot.
+
+
+___
+
+&#13;
+ + +
+Harmony: Removed loader settings for Harmony #5289
+
+It shouldn't be configurable; it is internal logic. Adding an additional extension wouldn't magically make it start working.
+
+
+___
+
+&#13;
+ +### **🐛 Bug fixes** + + +
+AYON: Make appdirs case sensitive #5298 + +Appdirs for AYON are case sensitive for linux and mac so we needed to change them to match ayon launcher. Changed 'ayon' to 'AYON' and 'ynput' to 'Ynput'. + + +___ + +
+ + +
+Traypublisher: Fix plugin order #5299 + +Frame range collector for traypublisher was moved to traypublisher plugins and changed order to make sure `assetEntity` is filled in `instance.data`. + + +___ + +
+ + +
+Deadline: removing OPENPYPE_VERSION from some host submitters #5302 + +Removing deprecated method of adding OPENPYPE_VERSION to job environment. It was leftover and other hosts have already been cleared. + + +___ + +
+ + +
+AYON: Fix args for workfile conversion util #5308
+
+The workfile update conversion util function now has the right expected arguments.
+
+
+___
+
+&#13;
+ +### **🔀 Refactored code** + + +
+Maya: Refactor imports to `lib.get_reference_node` since the other function… #5258 + +Refactor imports to `lib.get_reference_node` since the other function is deprecated. + + +___ + +
+ + + + +## [3.16.0](https://github.com/ynput/OpenPype/tree/3.16.0) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/...3.16.0) + +### **🆕 New features** + + +
+General: Reduce usage of legacy io #4723
+
+Replace usages of `legacy_io` with getter methods, or reuse already available information. Create plugins using CreateContext use the context from the CreateContext object. Loaders use getter functions from context tools. Publish plugins use information from instance.data or context.data. In some cases pieces of code were refactored a little, e.g. the fps getter in Maya.
+
+
+___
+
+&#13;
+ + +
+Documentation: API docs reborn - yet again #4419
+
+## Feature
+
+Add a functional base for API documentation using Sphinx and AutoAPI.
+
+After unsuccessful #2512, #834 and #210 this is yet another try, but this time without the ambition to solve the whole issue. This makes the Sphinx script work and nothing else. Any changes and improvements in the API docs should be made in subsequent PRs.
+
+## How to use it
+
+You can run:
+
+```sh
+cd .\docs
+make.bat html
+```
+
+or
+
+```sh
+cd ./docs
+make html
+```
+
+This will go over our code and generate **.rst** files in `/docs/source/autoapi` and from those it will generate the full html documentation in `/docs/build/html`.
+
+During the build you'll see tons of red errors that point to our issues:
+
+1) **Wrong imports**
+   Invalid imports are usually wrong relative imports (too deep) or circular imports.
+
+2) **Invalid doc-strings**
+   Doc-strings to be processed into documentation need to follow some syntax - this can be checked by running
+   `pydocstyle`, which is already included with OpenPype.
+3) **Invalid markdown/rst files**
+   md/rst files can be included inside rst files using the `.. include::` directive. But they have to be properly formatted.
+
+
+## Editing rst templates
+
+Everything starts with `/docs/source/index.rst` - this file should be properly edited. Right now it just includes `readme.rst`, which in turn includes and parses the main `README.md`. This is the entrypoint to the API documentation. All templates generated by AutoAPI are in `/docs/source/autoapi`. They should eventually be committed to the repository and edited too.
+
+## Steps for enhancing API documentation
+
+1) Run `/docs/make.bat html`
+2) Read the red errors/warnings - fix them in the code
+3) Run `/docs/make.bat html` again until there are no red lines
+4) Edit the rst files and add some meaningful content there
+
+> **Note**
+> This can (should) be merged as is, without doc-string fixes in the code or changes in templates. All additional improvements of the API documentation should be made in new PRs.
+
+> **Warning**
+> You need to add new dependencies to use it. Run `create_venv`.
+
+Connected to #2490
+___
+
+&#13;
+ + +
+Global: custom location for OP local versions #4673
+
+This provides a configurable location to unzip OpenPype version zips. By default it was hardcoded to the artist's app data folder, which might be problematic/slow with roaming profiles. The location must be accessible with write permissions by the user running OP Tray (so `Program Files` might be problematic).
+
+
+___
+
+&#13;
+ + +
+AYON: Update settings conversion #4837
+
+Updated the conversion script of AYON settings to v3 settings. The PR is related to changes in the addons repository https://github.com/ynput/ayon-addons/pull/6 . Changed how the conversion happens: the conversion output does not start with openpype defaults but with an empty dictionary.
+
+
+___
+
+&#13;
+ + +
+AYON: Implement integrate links publish plugin #4842 + +Implemented entity links get/create functions. Added new integrator which replaces v3 integrator for links. + + +___ + +
+ + +
+General: Version attributes integration #4991
+
+Implemented a unified integrate plugin to update version attributes after all integrations for AYON. The goal is to be able to update attribute values on a version in a unified way when all addon integrators are done, so that e.g. ftrack can add the ftrack id to the matching version in the AYON server. The attributes can be stored under the `"versionAttributes"` key.
+
+
+___
+
+&#13;
+ + +
+AYON: Staging versions can be used #4992 + +Added ability to use staging versions in AYON mode. + + +___ + +
+ + +
+AYON: Preparation for products #5038 + +Prepare ayon settings conversion script for `product` settings conversion. + + +___ + +
+ + +
+Loader: Hide inactive versions in UI #5101 + +Added support for `active` argument to hide versions with active set to False in Loader UI when in AYON mode. + + +___ + +
+ + +
+General: CLI addon command #5109 + +Added `addon` alias for `module` in OpenPype cli commands. + + +___ + +
+ + +
+AYON: OpenPype as server addon #5199 + +OpenPype repository can be converted to AYON addon for distribution. Addon has defined dependencies that are required to use it and are not in base ayon-launcher (desktop application). + + +___ + +
+ + +
+General: Runtime dependencies #5206
+
+Defined runtime dependencies in pyproject.toml. Moved the python ocio and otio modules there.
+
+
+___
+
+&#13;
+ + +
+AYON: Bundle distribution #5209
+
+Since AYON server 0.3.0, addon versions are defined by bundles, which affects how addons, dependency packages and installers are handled. The only source of truth about which version of anything should be used is the server bundle.
+
+
+___
+
+&#13;
+ + +
+Feature/blender handle q application #5264
+
+This changes the way the QApplication is run for Blender: the singleton (QApplication) is called in during registration. This is made so that other Qt applications and addons are able to run in Blender. In the previous implementation, if a QApplication was already running, all functionality of OpenPype became unavailable.
+
+
+___
+
+&#13;
+ +### **🚀 Enhancements** + + +
+General: Connect to AYON server (base) #3924
+
+Initial implementation of being able to use an AYON server in the current OpenPype client. Added the ability to connect to an AYON server and use base queries.
+
+AYON mode has its own executable (and start script). To start in AYON mode just replace `start.py` with `ayon_start.py` (a tray start script was added to tools). Added the constant `AYON_SERVER_ENABLED` to `openpype/__init__.py` to know if AYON mode is enabled. In that case Mongo is not used at all and any attempts will cause crashes. I had to modify the `~/openpype/client` content to be able to do this switch. The Mongo implementation was moved to a `mongo` subfolder, using "star imports" in the files from where the current imports are used. The logic of tools and queries in the code was not changed at all; since the functions were based on Mongo queries, they don't use the full potential of the AYON server abilities. ATM the implementation has a login UI, distribution of files from the server and the replacement of Mongo queries. For queries the `ayon_api` module is used, which is in live development, so the versions may change from day to day.
+
+
+___
+
+&#13;
+ + +
+Enhancement kitsu note with exceptions #4537 + +Adding a setting to choose some exceptions to IntegrateKitsuNote task status changes. + + +___ + +
+ + +
+General: Environment variable for default OCIO configs #4670
+
+Define an environment variable which leads to the root of the builtin OCIO configs, to be able to change the root without changing settings. The path in settings used `"{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfig"`, which disallowed moving the root somewhere else. That will be needed in AYON, where configs won't be part of the desktop application but downloaded from the server.
+
+
+___
+
+&#13;
+ + +
+AYON: Editorial hierarchy creation #4699
+
+Implemented an extract hierarchy to AYON plugin, which creates entities in AYON using the ayon api.
+
+
+___
+
+&#13;
+ + +
+AYON: Vendorize ayon api #4753 + +Vendorize ayon api into openpype vendor directory. The reason is that `ayon-python-api` is in live development and will fix/add features often in next few weeks/months, and because update of dependency requires new release -> new build, we want to avoid the need of doing that as it would affect OpenPype development. + + +___ + +
+ + +
+General: Update PySide 6 for MacOs #4764
+
+The new version of PySide6 does not have issues with the settings UI. It still breaks UI stylesheets, so it is not changed for other platforms, but it is an enhancement from the previous state.
+
+
+___
+
+&#13;
+ + +
+General: Removed unused cli commands #4902 + +Removed `texturecopy` and `launch` cli commands from cli commands. + + +___ + +
+ + +
+AYON: Linux & MacOS launch script #4970 + +Added shell script to launch tray in AYON mode. + + +___ + +
+ + +
+General: Qt scale enhancement #5059 + +Set ~~'QT_SCALE_FACTOR_ROUNDING_POLICY'~~ scale factor rounding policy of QApplication to `PassThrough` so the scaling can be 'float' number and not just 'int' (150% -> 1.5 scale). + + +___ + +
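+
+A minimal illustration of the Qt call involved; this has to run before the `QApplication` is created, and the names come from the Qt API, not necessarily the exact OpenPype code:
+
+```python
+from qtpy import QtCore, QtWidgets
+
+# Allow fractional scale factors such as 1.5 (150%) instead of rounding
+# to whole integers.
+QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(
+    QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough
+)
+app = QtWidgets.QApplication([])
+```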
+ + +
+CI: WPS linting instead of Hound (rebase) 2 #5115 + +Because Hound currently used to lint the code on GH ships with really old flake8 support, it fails miserably on any newer Python syntax. This PR is adding WPS linter to GitHub workflows that should step in. + + +___ + +
+ + +
+Max: OP parameters only displays what is attached to the container #5229
+
+The OP parameter in 3dsMax now only displays what is currently attached to the container when deleting, whereas previously you could also see items that were not added when adding to the container.
+
+
+___
+
+&#13;
+ + +
+Testing: improving logging during testing #5271
+
+Unit testing logging was crashing on more than one nested layer of inherited loggers.
+
+
+___
+
+&#13;
+ + +
+Nuke: removing deprecated settings in baking #5275 + +Removing deprecated settings for baking with reformat. This option was only for single reformat node and it had been substituted with multiple reposition nodes. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+AYON: General fixes and updates #4975
+
+A few smaller fixes related to the AYON connection. Some of the fixes were taken from this PR.
+
+
+___
+
+&#13;
+ + +
+Start script: Change returncode on validate or list versions #4515 + +Change exit code from `1` to `0` when versions are printed or when version is validated. + +Return code `1` is indicating error but there didn't happen any error. + + +___ + +
+ + +
+AYON: Change login UI works #4754
+
+Fixed the change of login in the UI. The change-login UI did show up and the new login was successful, but after a restart the previous login was used. This change fixes the issue.
+
+
+___
+
+&#13;
+ + +
+AYON: General issues #4763
+
+The vendorized `ayon_api` from the previous PR broke the OpenPype launch, because `ayon_api` was not available. Moved `ayon_api` from the ayon specific subfolder to the `common` python vendor in OpenPype, and removed login in the ayon start script (which was invalid anyway). Also fixed compatibility with PySide6 by using `qtpy` instead of `Qt` and changing code which is not PySide6 compatible.
+
+
+___
+
+&#13;
+ + +
+AYON: Small fixes #4841
+
+Bugfixes and enhancements related to AYON logic. Define the `BUILTIN_OCIO_ROOT` environment variable so OCIO configs are working. Use constants from the ayon api instead of hardcoding them in the codebase. Change the process name from "openpype" to "ayon". Don't execute the login dialog when the application is not yet running, but use the `open` method instead. Fixed missing modules settings which were not taken from openpype defaults. Updated ayon api to `0.1.17`.
+
+
+___
+
+&#13;
+ + +
+Bugfix - Update gazu to 0.9.3 #4845 + +This updates Gazu to 0.9.3 to make sure Gazu works with Kitsu and Zou 0.16.x+ + + +___ + +
+ + +
+Igniter: fix error reports in silent mode #4909 + +Some errors in silent mode commands in Igniter were suppressed and not visible for example in Deadline log. + + +___ + +
+ + +
+General: Remove ayon api from poetry lock #4964 + +Remove AYON python api from pyproject.toml and poetry.lock again. + + +___ + +
+ + +
+Ftrack: Fix AYON settings conversion #4967 + +Fix conversion of ftrack settings in AYON mode. + + +___ + +
+ + +
+AYON: ISO date format conversion issues #4981
+
+The function `datetime.fromisoformat` was replaced with `arrow.get`.
+
+
+___
+
+&#13;
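+
+A small illustration of the kind of value that motivates the swap (the timestamp is a made-up example):
+
+```python
+import arrow
+
+iso_value = "2023-07-31T12:00:00.000Z"
+# datetime.fromisoformat(iso_value) raises ValueError on Python 3.9,
+# because the trailing "Z" is not supported there; arrow parses it fine.
+created_at = arrow.get(iso_value).datetime
+print(created_at)  # 2023-07-31 12:00:00+00:00
+```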
+ + +
+AYON: Missing files on representations #4989 + +Fix integration of files into representation in server database. + + +___ + +
+ + +
+General: Fix Python 2 vendor for arrow #4993 + +Moved remaining dependencies for arrow from ftrack to python 2 vendor. + + +___ + +
+ + +
+General: Fix new load plugins for next minor relase #5000 + +Fix access to `fname` attribute which is not available on load plugin anymore. + + +___ + +
+ + +
+General: Fix mongo secure connection #5031 + +Fix `ssl` and `tls` keys checks in mongo uri query string. + + +___ + +
+ + +
+AYON: Fix site sync settings #5069 + +Fixed settings for AYON variant of sync server. + + +___ + +
+ + +
+General: Replace deprecated keyword argument in PyMongo #5080 + +Use argument `tlsCAFile` instead of `ssl_ca_certs` to avoid deprecation warnings. + + +___ + +
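+
+A minimal sketch of the changed call (the URI and certificate path are placeholders):
+
+```python
+from pymongo import MongoClient
+
+# "ssl_ca_certs" is deprecated in newer PyMongo releases;
+# "tlsCAFile" is the replacement and avoids the deprecation warning.
+client = MongoClient(
+    "mongodb://localhost:27017",
+    tlsCAFile="/path/to/ca.pem",
+)
+```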
+ + +
+Igniter: QApplication is created #5081
+
+The function `_get_qt_app` now actually creates a new `QApplication` if one was not created yet.
+
+
+___
+
+&#13;
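+
+A sketch of the usual pattern behind such a helper (not necessarily the exact Igniter code):
+
+```python
+from qtpy import QtWidgets
+
+
+def _get_qt_app():
+    # Reuse the running QApplication if there is one, otherwise create it.
+    app = QtWidgets.QApplication.instance()
+    if app is None:
+        app = QtWidgets.QApplication([])
+    return app
+```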
+ + +
+General: Lower unidecode version #5090 + +Use older version of Unidecode module to support Python 2. + + +___ + +
+ + +
+General: Lower cryptography to 39.0.0 #5099 + +Lower cryptography to 39.0.0 to avoid breaking of DCCs like Maya and Nuke. + + +___ + +
+ + +
+AYON: Global environments key fix #5118
+
+It seems that when converting AYON settings to OP settings, the `environments` setting is put under the `environments` key in `general`; however, when populating the environment, the `environment` key gets picked up, which does not contain the environment variables from the `core/environments` setting.
+
+
+___
+
+&#13;
+ + +
+Add collector to tray publisher for getting frame range data #5152
+
+Add a collector to the tray publisher to get frame range data. Users can choose to enable this collector if they need this in the publisher. Resolves #5136.
+
+
+___
+
+&#13;
+ + +
+Unreal: get current project settings not using unreal project name #5170 + +There was a bug where Unreal project name was used to query project settings. But Unreal project name can differ from the "real" one because of naming convention rules set by Unreal. This is fixing it by asking for current project settings. + + +___ + +
+ + +
+Substance Painter: Fix Collect Texture Set Images unable to copy.deepcopy due to QMenu #5238 + +Fix `copy.deepcopy` of `instance.data`. + + +___ + +
+ + +
+Ayon: server returns different key #5251 + +Package returned from server has `filename` instead of `name`. + + +___ + +
+ + +
+Substance Painter: Fix default color management settings #5259
+
+The default color management settings for Substance Painter were invalid: they were set to override the global config by default but specified no valid config paths of their own, and thus errored that the paths were not correct. This sets the defaults correctly to match other hosts. _I quickly checked - this seems to be the only host with the wrong default settings._
+
+
+___
+
+&#13;
+ + +
+Nuke: fixing container data if windows path in value #5267
+
+Windows paths in container data are reformatted. Previously it was reported that Nuke was raising a `utf8 0xc0` error if backward slashes were in data values.
+
+
+___
+
+&#13;
+ + +
+Houdini: fix typo error in collect arnold rop #5281
+
+Fixing a typo in `collect_arnold_rop.py`. Reference: #5280
+
+
+___
+
+&#13;
+ + +
+Slack - enhanced logging and protection against failure #5287
+
+Covered issues found in production on a customer site. The SlackAPI exception doesn't need to have 'error'; a previously uncaught exception is now covered.
+
+
+___
+
+&#13;
+ + +
+Maya: Removed unnecessary import of pyblish.cli #5292
+
+This import resulted in adding an additional logging handler, which led to duplication of logs in hosts with plugins containing the `is_in_tests` method. The import is unnecessary for testing functionality.
+
+
+___
+
+&#13;
+ +### **🔀 Refactored code** + + +
+Loader: Remove `context` argument from Loader.__init__() #4602 + +Remove the previously required `context` argument. + + +___ + +
+ + +
+Global: Remove legacy integrator #4786 + +Remove the legacy integrator. + + +___ + +
+ +### **📃 Documentation** + + +
+Next Minor Release #5291 + + +___ + +
+ +### **Merged pull requests** + + +
+Maya: Refactor to new publisher #4388 + +**Refactor Maya to use the new publisher with new creators.** + + +- [x] Legacy instance can be converted in UI using `SubsetConvertorPlugin` +- [x] Fix support for old style "render" and "vrayscene" instance to the new per layer format. +- [x] Context data is stored with scene +- [x] Workfile instance converted to AutoCreator +- [x] Converted Creator classes +- [x] Create animation +- [x] Create ass +- [x] Create assembly +- [x] Create camera +- [x] Create layout +- [x] Create look +- [x] Create mayascene +- [x] Create model +- [x] Create multiverse look +- [x] Create multiverse usd +- [x] Create multiverse usd comp +- [x] Create multiverse usd over +- [x] Create pointcache +- [x] Create proxy abc +- [x] Create redshift proxy +- [x] Create render +- [x] Create rendersetup +- [x] Create review +- [x] Create rig +- [x] Create setdress +- [x] Create unreal skeletalmesh +- [x] Create unreal staticmesh +- [x] Create vrayproxy +- [x] Create vrayscene +- [x] Create xgen +- [x] Create yeti cache +- [x] Create yeti rig +- [ ] Tested new Creator publishes +- [x] Publish animation +- [x] Publish ass +- [x] Publish assembly +- [x] Publish camera +- [x] Publish layout +- [x] Publish look +- [x] Publish mayascene +- [x] Publish model +- [ ] Publish multiverse look +- [ ] Publish multiverse usd +- [ ] Publish multiverse usd comp +- [ ] Publish multiverse usd over +- [x] Publish pointcache +- [x] Publish proxy abc +- [x] Publish redshift proxy +- [x] Publish render +- [x] Publish rendersetup +- [x] Publish review +- [x] Publish rig +- [x] Publish setdress +- [x] Publish unreal skeletalmesh +- [x] Publish unreal staticmesh +- [x] Publish vrayproxy +- [x] Publish vrayscene +- [x] Publish xgen +- [x] Publish yeti cache +- [x] Publish yeti rig +- [x] Publish workfile +- [x] Rig loader correctly generates a new style animation creator instance +- [ ] Validations / Error messages for common validation failures look nice and usable as a report. +- [ ] Make Create Animation hidden to the user (should not create manually?) +- [x] Correctly detect difference between **'creator_attributes'** and **'instance_data'** since both are "flattened" to the top node. + + +___ + +
+ + +
+Start script: Fix possible issues with destination drive path #4478
+
+Drive paths for Windows now fix a possibly missing slash at the end of the destination path.
+
+The Windows `subst` command requires the destination path to end with a slash if it is a drive (it should be `G:\` not `G:`).
+
+
+___
+
+&#13;
+ + +
+Global: Move PyOpenColorIO to vendor/python #4946 + +So that DCCs don't conflict with their own. + +See https://github.com/ynput/OpenPype/pull/4267#issuecomment-1537153263 for the issue with Gaffer. + +I'm not sure if this is the correct approach, but I assume PySide/Shiboken is under `vendor/python` for this reason as well... +___ + +
+ + +
+RuntimeError with Click on deadline publish #5065 + +I changed Click to version 8.0 instead of 7.1.2 to solve this error: +``` +2023-05-30 16:16:51: 0: STDOUT: Traceback (most recent call last): +2023-05-30 16:16:51: 0: STDOUT: File "start.py", line 1126, in boot +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/core.py", line 829, in __call__ +2023-05-30 16:16:51: 0: STDOUT: return self.main(*args, **kwargs) +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/core.py", line 760, in main +2023-05-30 16:16:51: 0: STDOUT: _verify_python3_env() +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/_unicodefun.py", line 126, in _verify_python3_env +2023-05-30 16:16:51: 0: STDOUT: raise RuntimeError( +2023-05-30 16:16:51: 0: STDOUT: RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/python3/ for mitigation steps. +``` + + +___ + +
+ + + + +## [3.15.12](https://github.com/ynput/OpenPype/tree/3.15.12) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.11...3.15.12) + +### **🆕 New features** + + +
+Tray Publisher: User can set colorspace per instance explicitly #4901 + +With this feature a user can set/override the colorspace for the representations of an instance explicitly instead of relying on the File Rules from project settings or alike. This way you can ingest any file and explicitly say "this file is colorspace X". + + +___ + +
+ + +
+Review Family in Max #5001
+
+Review feature by creating a preview animation in 3dsMax. (The code is still being cleaned up, so there will be some updates until it is ready for review.)
+
+
+___
+
+&#13;
+ + +
+AfterEffects: support for workfile template builder #5163
+
+This PR adds the functionality of the templated workfile builder. It allows preparing an AE workfile with placeholders for automatically loading a particular representation of a particular subset of a particular asset from the context where the workfile is opened. Selection from multiple prepared workfiles is provided through templates, so a specific type of task can use a particular workfile template, etc. Artists can then build a workfile from the template when opening a new workfile.
+
+
+___
+
+&#13;
+ + +
+CreatePlugin: Get next version helper #5242 + +Implemented helper functions to get next available versions for create instances. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: Improve Templates #4854 + +Use library method for fetching reference node and support parent in hierarchy. + + +___ + +
+ + +
+Bug: Maya - xgen sidecar files arent moved when saving workfile as an new asset workfile changing context - OP-6222 #5215 + +This PR manages the Xgen files when switching context in the Workfiles app. + + +___ + +
+ + +
+node references to check for duplicates in Max #5192
+
+No duplicates for node references in Max when users try to select nodes before publishing.
+
+
+___
+
+&#13;
+ + +
+Tweak profiles logging to debug level #5194 + +Tweak profiles logging to debug level since they aren't artist facing logs. + + +___ + +
+ + +
+Enhancement: Reduce more visual clutter for artists in new publisher reports #5208 + +Got this from one of our artists' reports - figured some of these logs were definitely not for the artist, reduced those logs to debug level. + + +___ + +
+ + +
+Cosmetics: Tweak pyblish repair actions (icon, logs, docstring) #5213 + +- Add icon to RepairContextAction +- logs to debug level +- also add attempt repair for RepairAction for consistency +- fix RepairContextAction docstring to mention correct argument name + +#### Additional info + +We should not forget to remove this ["deprecated" actions.py file](https://github.com/ynput/OpenPype/blob/3501d0d23a78fbaef106da2fffe946cb49bef855/openpype/action.py) in 3.16 (next-minor) + +## Testing notes: + +1. Run some fabulous repairs! + +___ + +
+ + +
+Maya: fix save file prompt on launch last workfile with color management enabled + restructure `set_colorspace` #5225 + +- Only set `configFilePath` when OCIO env var is not set since it doesn't do anything if OCIO var is set anyway. +- Set the Maya 2022+ default OCIO path using the resources path instead of "" to avoid Maya Save File on new file after launch +- **Bugfix: This is what fixes the Save prompt on open last workfile feature with Global color management enabled** +- Move all code related to applying the maya settings together after querying the settings +- Swap around the `if use_workfile_settings` since the check was reversed +- Use `get_current_project_name()` instead of environment vars + + +___ + +
+ + +
+Enhancement: More descriptive error messages for Loaders #5227 + +Tweak raised errors and error messages for loader errors. + + +___ + +
+ + +
+Houdini: add select invalid action for ValidateSopOutputNode #5231
+
+This PR adds a `SelectROPAction` action to `houdini\api\action.py`, and it's used in `Validate Output Node`. `SelectROPAction` is used to select the ROPs associated with the errored instances.
+
+
+___
+
+&#13;
+ + +
+Remove new lines from the delivery template string #5235 + +If the delivery template has a new line symbol at the end, say it was copied from the text editor, the delivery process will fail with `OSError` due to incorrect destination path. To avoid that I added `rstrip()` to the `delivery_path` processing. + + +___ + +
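+
+A tiny illustration of the failure mode and the fix (the template string is a made-up example):
+
+```python
+import os
+
+# A trailing newline sneaks in when the template is copied from a text editor.
+template = "/deliveries/{project}/{subset}_v{version}.exr\n"
+delivery_path = template.rstrip()  # drops the stray "\n" before the path is used
+print(os.path.basename(delivery_path))
+```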
+ + +
+Houdini: better selection on pointcache creation #5250
+
+Houdini allows an `ObjNode` path as the `sop_path` in the ROP, unlike OP/AYON, which require `sop_path` to be set to a SOP node path explicitly. In this code, better selection is used to filter out invalid selections from the OP/AYON point of view (see the sketch after this list). Valid selections are:
+- a `SopNode` that has a parent of type `geo` or `subnet`
+- an `ObjNode` of type `geo` that has:
+  - a `SopNode` of type `output`
+  - a `SopNode` with the render flag `on` (if there is no `SopNode` of type `output`)
+
+This effectively filters out:
+- empty `ObjNode`(s)
+- `ObjNode`(s) of other types like `cam` and `dopnet`
+- `SopNode`(s) whose parents are of other types like `cam` and `sop solver`
+
+
+___
+
+&#13;
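+
+A rough sketch of that resolution logic using the `hou` API (an assumed structure, not the exact code from the PR):
+
+```python
+import hou
+
+
+def resolve_sop_path(obj_node):
+    """Return a SOP path for a selected geo ObjNode, per the rules above."""
+    outputs = [
+        child for child in obj_node.children()
+        if child.type().name() == "output"
+    ]
+    if outputs:
+        return outputs[0].path()
+    # Fall back to the SOP that has the render flag enabled.
+    render_sop = obj_node.renderNode()
+    return render_sop.path() if render_sop else None
+```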
+ + +
+Update scene inventory even if any errors occurred during update #5252
+
+When selecting many items in the scene inventory to update versions, if one of the items errored out, the updating stopped. However, before this PR the scene inventory would also NOT refresh, making you think it did nothing. Also implemented as a method to allow some code deduplication.
+
+
+___
+
+&#13;
+ +### **🐛 Bug fixes** + + +
+Maya: Convert frame values to integers #5188 + +Convert frame values to integers. + + +___ + +
+ + +
+Maya: fix the register_event_callback correctly collecting workfile save after #5214
+
+Fixes the bug where `register_event_callback` was not able to collect the "workfile_save_after" action for the lock file action.
+
+
+___
+
+&#13;
+ + +
+Maya: aligning default settings to distributed aces 1.2 config #5233
+
+Maya colorspace settings defaults are set so they align with our distributed ACES 1.2 config file set in the global colorspace configs.
+
+
+___
+
+&#13;
+ + +
+RepairAction and SelectInvalidAction filter instances failed on the exact plugin #5240 + +RepairAction and SelectInvalidAction actually filter to instances that failed on the exact plugin - not on "any failure" + + +___ + +
+ + +
+Maya: Bugfix look update nodes by id with non-unique shape names (query with `fullPath`) #5257
+
+Fixes a bug where updating attributes on nodes with an assigned shader failed if a shape name existed more than once in the scene, due to the `cmds.listRelatives` call not being done with the `fullPath=True` flag (see the sketch below). Original error:
+```python
+# Traceback (most recent call last):
+#   File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 264, in <lambda>
+#     lambda: self._show_version_dialog(items))
+#   File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 722, in _show_version_dialog
+#     self._update_containers(items, version)
+#   File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 849, in _update_containers
+#     update_container(item, item_version)
+#   File "E:\openpype\OpenPype\openpype\pipeline\load\utils.py", line 502, in update_container
+#     return loader.update(container, new_representation)
+#   File "E:\openpype\OpenPype\openpype\hosts\maya\plugins\load\load_look.py", line 119, in update
+#     nodes_by_id[lib.get_id(n)].append(n)
+#   File "E:\openpype\OpenPype\openpype\hosts\maya\api\lib.py", line 1420, in get_id
+#     sel.add(node)
+```
+
+
+___
+
+&#13;
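+
+A short sketch of the flag that matters here (standard Maya API; the node names are illustrative):
+
+```python
+from maya import cmds
+
+# Short names are ambiguous when the same shape name exists under multiple
+# parents; fullPath=True returns unambiguous full DAG paths instead.
+shapes = cmds.listRelatives("|group1|pSphere1_GEO", shapes=True, fullPath=True)
+# e.g. ['|group1|pSphere1_GEO|pSphere1_GEOShape']
+```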
+ + +
+Nuke: Create nodes with inpanel=False #5051 + +This PR is meant to remove the annoyance of the UI changing focus to the properties window just for the property window of the newly created node to disappear. Instead of using node.hideControlPanel I'm implementing the concealment during the creation of the node which will not change the focus of the current window. +___ + +
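+
+The gist of the change in Nuke's Python API (the node class is illustrative):
+
+```python
+import nuke
+
+# Creating the node with inpanel=False never opens the properties panel,
+# so the current window keeps focus; no need to hide the panel afterwards.
+node = nuke.createNode("Write", inpanel=False)
+```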
+ + +
+Fix the reset frame range not setting up the right timeline in Max #5187 + +Resolve #5181 + + +___ + +
+ + +
+Resolve: after launch automatization fixes #5193
+
+The workfile is now correctly created and aligned with the actual project. The launching mechanism is also fixed, so even when no workfile has been saved yet, the OpenPype menu opens automatically.
+
+
+___
+
+&#13;
+ + +
+General: Revert backward incompatible change of path to template to multiplatform #5197
+
+Multiplatform support is still handled by the usage of `work[root]` (or any other root that is accessible across platforms).
+
+
+___
+
+&#13;
+ + +
+Nuke: root set format updating in node graph #5198
+
+The Nuke root node needs to be reset on some values so that knobs get updated in the node graph. This works the same way as when a user changes the frame number and expressions update their values in knobs.
+
+
+___
+
+&#13;
+ + +
+Hiero: fixing otio current project and cosmetics #5200
+
+OTIO was not returning the correct current project once an additional Untitled project was open in the project manager stack.
+
+
+___
+
+&#13;
+ + +
+Max: Publisher instances dont hold its enabled disabled states when Publisher reopened again #5202
+
+Resolves #5183, a general MAXScript-to-Python conversion issue (e.g. bool conversion: `true` in MAXScript while `True` in Python). Also resolves the ValueError when you change the subset to publish in the list view menu.
+
+
+___
+
+&#13;
+ + +
+Burnins: Filter script is defined only for video streams #5205 + +Burnins are working for inputs with audio. + + +___ + +
+ + +
+Colorspace lib fix compatible python version comparison #5212 + +Fix python version comparison. + + +___ + +
+ + +
+Houdini: Fix `get_color_management_preferences` #5217 + +Fix the issue described here where the logic for retrieving the current OCIO display and view was incorrectly trying to apply a regex to it. + + +___ + +
+ + +
+Houdini: Redshift ROP image format bug #5218
+
+Problem:
+The "RS_outputFileFormat" parm value was missing,
+and there were more "image_format" options than the Redshift ROP supports.
+
+Fix:
+1) removed unnecessary formats from `image_format_enum`
+2) added the selected format value to `RS_outputFileFormat`
+___
+
+&#13;
+ + +
+Colorspace: check PyOpenColorIO rather then python version #5223
+
+Fixing a previously merged PR (https://github.com/ynput/OpenPype/pull/5212) and applying a better way to check compatibility with the PyOpenColorIO Python API.
+
+
+___
+
+&#13;
+ + +
+Validate delivery action representations status #5228 + +- disable delivery button if no representations checked +- fix macos combobox layout +- add error message if no delivery templates found + + +___ + +
+ + +
+Houdini: Add geometry check for pointcache family #5230
+
+When `sop_path` on the ABC ROP node points to a non-SopNode, the validators `validate_abc_primitive_to_detail.py` and `validate_primitive_hierarchy_paths.py` will error and crash when this line is executed: `geo = output_node.geometryAtFrame(frame)`
+
+
+___
+
+&#13;
+ + +
+Houdini: Add geometry check for VDB family #5232
+
+When `sop_path` on the Geometry ROP node points to a non-SopNode, the validator `validate_vdb_output_node.py` will error and crash when this line is executed: `sop_node.geometryAtFrame(frame)`
+
+
+___
+
+&#13;
+ + +
+Substance Painter: Include the setting only in publish tab #5234
+
+Instead of having two settings in both the create and publish tabs, there is solely one setting in the publish tab for users to set up the parameters. Resolves #5172.
+
+
+___
+
+&#13;
+ + +
+Maya: Fix collecting arnold prefix when none #5243 + +When no prefix is specified in render settings, the renderlayer collector would error. + + +___ + +
+ + +
+Deadline: OPENPYPE_VERSION should only be added when running from build #5244 + +When running from source the environment variable `OPENPYPE_VERSION` should not be added. This is a bugfix for the feature #4489 + + +___ + +
+ + +
+Fix no prompt for "unsaved changes" showing when opening workfile in Houdini #5246 + +Fix no prompt for "unsaved changes" showing when opening workfile in Houdini. + + +___ + +
+ + +
+Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter #5248 + +Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter. + + +___ + +
+ + +
+General: add the os library before os.environ.get #5249
+
+Adding the `os` import to `creator_plugins.py`, since `os.environ.get` is used on line 667.
+
+
+___
+
+&#13;
+ + +
+Maya: Fix set_attribute for enum attributes #5261 + +Fix for #5260 + + +___ + +
+ + +
+Unreal: Move Qt imports away from module init #5268
+
+Importing `Window` creates errors in headless mode.
+```
+*** WRN: >>> { ModulesLoader }: [ FAILED to import host folder unreal ]
+=============================
+No Qt bindings could be found
+=============================
+Traceback (most recent call last):
+  File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 252, in <module>
+    from PySide6 import __version__ as PYSIDE_VERSION  # analysis:ignore
+ModuleNotFoundError: No module named 'PySide6'
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+  File "C:\Users\tokejepsen\OpenPype\openpype\modules\base.py", line 385, in _load_modules
+    default_module = __import__(
+  File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\__init__.py", line 1, in <module>
+    from .addon import UnrealAddon
+  File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\addon.py", line 4, in <module>
+    from openpype.widgets.message_window import Window
+  File "C:\Users\tokejepsen\OpenPype\openpype\widgets\__init__.py", line 1, in <module>
+    from .password_dialog import PasswordDialog
+  File "C:\Users\tokejepsen\OpenPype\openpype\widgets\password_dialog.py", line 1, in <module>
+    from qtpy import QtWidgets, QtCore, QtGui
+  File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 259, in <module>
+    raise QtBindingsNotFoundError()
+qtpy.QtBindingsNotFoundError: No Qt bindings could be found
+```
+
+
+___
+
+&#13;
+ +### **🔀 Refactored code** + + +
+Maya: Minor refactoring and code cleanup #5226
+
+Some small cleanup and refactoring of logic: removing old comments and unused imports, plus some minor optimization. Also removed the prints of the loader names of each container in the scene in `fix_incompatible_containers`, optimizing by using a `set` and defining it only once. Moved some UI related code/tweaks to run `on_init` only if not in headless mode. Removed an empty `obj.py` file. Each commit message kind of describes why the change was made.
+
+
+___
+
+&#13;
+ +### **Merged pull requests** + + +
+Bug: Template builder fails when loading data without outliner representation #5222
+
+Adds handling for the case where the container does not have a representation in the outliner.
+
+
+___
+
+&#13;
+ + +
+AfterEffects - add container check validator to AE settings #5203 + +Adds check if scene contains only latest version of loaded containers. + + +___ + +
+ + + + ## [3.15.11](https://github.com/ynput/OpenPype/tree/3.15.11) @@ -1970,7 +6911,7 @@ ___
Maya Load References - Add Display Handle Setting #4904 -When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. +When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. ___ @@ -2078,7 +7019,7 @@ ___
Patchelf version locked #4853 -For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. +For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. ___ diff --git a/Dockerfile.centos7 b/Dockerfile.centos7 index ce1a624a4f..ab1d3f8253 100644 --- a/Dockerfile.centos7 +++ b/Dockerfile.centos7 @@ -32,12 +32,16 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n wget \ gcc \ zlib-devel \ + pcre-devel \ + perl-core \ bzip2 \ bzip2-devel \ readline-devel \ sqlite sqlite-devel \ openssl-devel \ openssl-libs \ + openssl11-devel \ + openssl11-libs \ tk-devel libffi-devel \ patchelf \ automake \ @@ -71,7 +75,12 @@ RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \ && echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \ && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \ && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc -RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION} +RUN source $HOME/.bashrc \ + && export CPPFLAGS="-I/usr/include/openssl11" \ + && export LDFLAGS="-L/usr/lib64/openssl11 -lssl -lcrypto" \ + && export PATH=/usr/local/openssl/bin:$PATH \ + && export LD_LIBRARY_PATH=/usr/local/openssl/lib:$LD_LIBRARY_PATH \ + && pyenv install ${OPENPYPE_PYTHON_VERSION} COPY . /opt/openpype/ RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet." @@ -93,12 +102,15 @@ RUN source $HOME/.bashrc \ RUN source $HOME/.bashrc \ && ./tools/fetch_thirdparty_libs.sh +RUN echo 'export PYTHONPATH="/opt/openpype/vendor/python:$PYTHONPATH"'>> $HOME/.bashrc RUN source $HOME/.bashrc \ && bash ./tools/build.sh RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.9/lib \ - && cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.9/lib \ - && cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.9/lib \ + && cp /usr/lib64/openssl11/libssl* ./build/exe.linux-x86_64-3.9/lib \ + && cp /usr/lib64/openssl11/libcrypto* ./build/exe.linux-x86_64-3.9/lib \ + && ln -sr ./build/exe.linux-x86_64-3.9/lib/libssl.so ./build/exe.linux-x86_64-3.9/lib/libssl.1.1.so \ + && ln -sr ./build/exe.linux-x86_64-3.9/lib/libcrypto.so ./build/exe.linux-x86_64-3.9/lib/libcrypto.1.1.so \ && cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.9/lib \ && cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.9/vendor/python/PySide2/Qt/lib diff --git a/README.md b/README.md index 8757e3db92..ce98f845e6 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ [![All Contributors](https://img.shields.io/badge/all_contributors-28-orange.svg?style=flat-square)](#contributors-) OpenPype -==== +======== [![documentation](https://github.com/pypeclub/pype/actions/workflows/documentation.yml/badge.svg)](https://github.com/pypeclub/pype/actions/workflows/documentation.yml) ![GitHub VFX Platform](https://img.shields.io/badge/vfx%20platform-2022-lightgrey?labelColor=303846) @@ -47,7 +47,7 @@ It can be built and ran on all common platforms. We develop and test on the foll For more details on requirements visit [requirements documentation](https://openpype.io/docs/dev_requirements) Building OpenPype -------------- +----------------- To build OpenPype you currently need [Python 3.9](https://www.python.org/downloads/) as we are following [vfx platform](https://vfxplatform.com). 
Because some Linux distros come with a newer Python version
@@ -62,14 +62,14 @@ development tools like [CMake](https://cmake.org/) and [Visual Studio](https://v

 #### Clone repository:
 ```sh
-git clone --recurse-submodules git@github.com:Pypeclub/OpenPype.git
+git clone --recurse-submodules git@github.com:ynput/OpenPype.git
 ```

 #### To build OpenPype:

-1) Run `.\tools\create_env.ps1` to create a virtual environment in `.\venv`
+1) Run `.\tools\create_env.ps1` to create a virtual environment in `.\venv`.
 2) Run `.\tools\fetch_thirdparty_libs.ps1` to download third-party dependencies like ffmpeg and oiio. Those will be included in the build.
-3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`
+3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`.

 To create distributable OpenPype versions, run `./tools/create_zip.ps1` - that
 will create a zip file with the name `openpype-vx.x.x.zip` parsed from the current OpenPype repository and
@@ -88,38 +88,38 @@ some OpenPype dependencies like [CMake](https://cmake.org/) and **XCode Command

 An easy way of installing everything necessary is to use [Homebrew](https://brew.sh):

 1) Install **Homebrew**:
-```sh
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-```
+   ```sh
+   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+   ```

 2) Install **cmake**:
-```sh
-brew install cmake
-```
+   ```sh
+   brew install cmake
+   ```

 3) Install [pyenv](https://github.com/pyenv/pyenv):
-```sh
-brew install pyenv
-echo 'eval "$(pyenv init -)"' >> ~/.zshrc
-pyenv init
-exec "$SHELL"
-PATH=$(pyenv root)/shims:$PATH
-```
+   ```sh
+   brew install pyenv
+   echo 'eval "$(pyenv init -)"' >> ~/.zshrc
+   pyenv init
+   exec "$SHELL"
+   PATH=$(pyenv root)/shims:$PATH
+   ```

-4) Pull in the required Python version 3.9.x
-```sh
-# install Python build dependencies
-brew install openssl readline sqlite3 xz zlib
+4) Pull in the required Python version 3.9.x:
+   ```sh
+   # install Python build dependencies
+   brew install openssl readline sqlite3 xz zlib

-# replace with an up-to-date 3.9.x version
-pyenv install 3.9.6
-```
+   # replace with an up-to-date 3.9.x version
+   pyenv install 3.9.6
+   ```

-5) Set the local Python version
-```sh
-# switch to the OpenPype source directory
-pyenv local 3.9.6
-```
+5) Set the local Python version:
+   ```sh
+   # switch to the OpenPype source directory
+   pyenv local 3.9.6
+   ```

 #### To build OpenPype:

@@ -144,6 +144,10 @@ sudo ./tools/docker_build.sh centos7

 If all is successful, you'll find the built OpenPype in the `./build/` folder.

+A Docker build can also be started from a Windows machine; just use `./tools/docker_build.ps1` instead of the shell script.
+
+This can even be used for building a Linux build (with the argument `centos7` or `debian`).
+
 #### Manual build
 You will need [Python >= 3.9](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need [curl](https://curl.se) on systems that don't have it preinstalled.

@@ -241,7 +245,7 @@ pyenv local 3.9.6

 Running OpenPype
-------------
+----------------

 OpenPype can be executed either from live sources (this repository) or from
 *"frozen code"* - executables that can be built using the steps described above.

@@ -289,7 +293,7 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`.

 Developer tools
--------------
+---------------

 In case you wish to add your own tools to the `.\tools` folder without git tracking,
 it is possible by adding them with a `dev_*` prefix (example: `dev_clear_pyc(.ps1|.sh)`).
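The following hunks remove the experimental v4 addon distribution client. As orientation for what is being deleted, here is a compressed, illustrative sketch of the update flow the removed `addon_distribution.py` implemented. This is not a drop-in replacement: `update_addons` and the `download_and_unzip` callable are hypothetical stand-ins for the module's per-source downloader classes, sha256 check and unzip step, and the string states mirror its `UpdateState` enum.

```python
# Condensed sketch of the removed addon-update flow (illustrative only;
# see the deleted addon_distribution.py below for the real thing).
import os

import requests


def get_addons_info(server_endpoint):
    """Ask the v4 server for the list of enabled addons (dicts)."""
    response = requests.get(server_endpoint)
    if not response.ok:
        raise RuntimeError(response.text)
    return response.json()


def update_addons(server_endpoint, destination_folder, download_and_unzip):
    """Download and unzip every addon version not yet present locally."""
    states = {}
    for addon in get_addons_info(server_endpoint):
        full_name = "{}_{}".format(addon["name"], addon["productionVersion"])
        addon_dest = os.path.join(destination_folder, full_name)
        if os.path.isdir(addon_dest):
            # This addon version is already deployed; nothing to do.
            states[full_name] = "exists"
            continue
        try:
            # Stand-in for the removed per-source downloaders (filesystem,
            # HTTP), the sha256 hash check and the unzip step.
            download_and_unzip(addon, addon_dest)
            states[full_name] = "updated"
        except Exception:
            states[full_name] = "failed"
    return states
```

Compare with the full deleted implementation in the hunks below.
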
diff --git a/common/openpype_common/distribution/README.md b/common/openpype_common/distribution/README.md deleted file mode 100644 index 212eb267b8..0000000000 --- a/common/openpype_common/distribution/README.md +++ /dev/null @@ -1,18 +0,0 @@ -Addon distribution tool ------------------------- - -Code in this folder is backend portion of Addon distribution logic for v4 server. - -Each host, module will be separate Addon in the future. Each v4 server could run different set of Addons. - -Client (running on artist machine) will in the first step ask v4 for list of enabled addons. -(It expects list of json documents matching to `addon_distribution.py:AddonInfo` object.) -Next it will compare presence of enabled addon version in local folder. In the case of missing version of -an addon, client will use information in the addon to download (from http/shared local disk/git) zip file -and unzip it. - -Required part of addon distribution will be sharing of dependencies (python libraries, utilities) which is not part of this folder. - -Location of this folder might change in the future as it will be required for a clint to add this folder to sys.path reliably. - -This code needs to be independent on Openpype code as much as possible! \ No newline at end of file diff --git a/common/openpype_common/distribution/addon_distribution.py b/common/openpype_common/distribution/addon_distribution.py deleted file mode 100644 index 5e48639dec..0000000000 --- a/common/openpype_common/distribution/addon_distribution.py +++ /dev/null @@ -1,208 +0,0 @@ -import os -from enum import Enum -from abc import abstractmethod -import attr -import logging -import requests -import platform -import shutil - -from .file_handler import RemoteFileHandler -from .addon_info import AddonInfo - - -class UpdateState(Enum): - EXISTS = "exists" - UPDATED = "updated" - FAILED = "failed" - - -class AddonDownloader: - log = logging.getLogger(__name__) - - def __init__(self): - self._downloaders = {} - - def register_format(self, downloader_type, downloader): - self._downloaders[downloader_type.value] = downloader - - def get_downloader(self, downloader_type): - downloader = self._downloaders.get(downloader_type) - if not downloader: - raise ValueError(f"{downloader_type} not implemented") - return downloader() - - @classmethod - @abstractmethod - def download(cls, source, destination): - """Returns url to downloaded addon zip file. - - Args: - source (dict): {type:"http", "url":"https://} ...} - destination (str): local folder to unzip - Returns: - (str) local path to addon zip file - """ - pass - - @classmethod - def check_hash(cls, addon_path, addon_hash): - """Compares 'hash' of downloaded 'addon_url' file. - - Args: - addon_path (str): local path to addon zip file - addon_hash (str): sha256 hash of zip file - Raises: - ValueError if hashes doesn't match - """ - if not os.path.exists(addon_path): - raise ValueError(f"{addon_path} doesn't exist.") - if not RemoteFileHandler.check_integrity(addon_path, - addon_hash, - hash_type="sha256"): - raise ValueError(f"{addon_path} doesn't match expected hash.") - - @classmethod - def unzip(cls, addon_zip_path, destination): - """Unzips local 'addon_zip_path' to 'destination'. 
- - Args: - addon_zip_path (str): local path to addon zip file - destination (str): local folder to unzip - """ - RemoteFileHandler.unzip(addon_zip_path, destination) - os.remove(addon_zip_path) - - @classmethod - def remove(cls, addon_url): - pass - - -class OSAddonDownloader(AddonDownloader): - - @classmethod - def download(cls, source, destination): - # OS doesnt need to download, unzip directly - addon_url = source["path"].get(platform.system().lower()) - if not os.path.exists(addon_url): - raise ValueError("{} is not accessible".format(addon_url)) - return addon_url - - -class HTTPAddonDownloader(AddonDownloader): - CHUNK_SIZE = 100000 - - @classmethod - def download(cls, source, destination): - source_url = source["url"] - cls.log.debug(f"Downloading {source_url} to {destination}") - file_name = os.path.basename(destination) - _, ext = os.path.splitext(file_name) - if (ext.replace(".", '') not - in set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS)): - file_name += ".zip" - RemoteFileHandler.download_url(source_url, - destination, - filename=file_name) - - return os.path.join(destination, file_name) - - -def get_addons_info(server_endpoint): - """Returns list of addon information from Server""" - # TODO temp - # addon_info = AddonInfo( - # **{"name": "openpype_slack", - # "version": "1.0.0", - # "addon_url": "c:/projects/openpype_slack_1.0.0.zip", - # "type": UrlType.FILESYSTEM, - # "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa - # - # http_addon = AddonInfo( - # **{"name": "openpype_slack", - # "version": "1.0.0", - # "addon_url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing", # noqa - # "type": UrlType.HTTP, - # "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa - - response = requests.get(server_endpoint) - if not response.ok: - raise Exception(response.text) - - addons_info = [] - for addon in response.json(): - addons_info.append(AddonInfo(**addon)) - return addons_info - - -def update_addon_state(addon_infos, destination_folder, factory, - log=None): - """Loops through all 'addon_infos', compares local version, unzips. - - Loops through server provided list of dictionaries with information about - available addons. Looks if each addon is already present and deployed. - If isn't, addon zip gets downloaded and unzipped into 'destination_folder'. - Args: - addon_infos (list of AddonInfo) - destination_folder (str): local path - factory (AddonDownloader): factory to get appropriate downloader per - addon type - log (logging.Logger) - Returns: - (dict): {"addon_full_name": UpdateState.value - (eg. 
"exists"|"updated"|"failed") - """ - if not log: - log = logging.getLogger(__name__) - - download_states = {} - for addon in addon_infos: - full_name = "{}_{}".format(addon.name, addon.version) - addon_dest = os.path.join(destination_folder, full_name) - - if os.path.isdir(addon_dest): - log.debug(f"Addon version folder {addon_dest} already exists.") - download_states[full_name] = UpdateState.EXISTS.value - continue - - for source in addon.sources: - download_states[full_name] = UpdateState.FAILED.value - try: - downloader = factory.get_downloader(source.type) - zip_file_path = downloader.download(attr.asdict(source), - addon_dest) - downloader.check_hash(zip_file_path, addon.hash) - downloader.unzip(zip_file_path, addon_dest) - download_states[full_name] = UpdateState.UPDATED.value - break - except Exception: - log.warning(f"Error happened during updating {addon.name}", - exc_info=True) - if os.path.isdir(addon_dest): - log.debug(f"Cleaning {addon_dest}") - shutil.rmtree(addon_dest) - - return download_states - - -def check_addons(server_endpoint, addon_folder, downloaders): - """Main entry point to compare existing addons with those on server. - - Args: - server_endpoint (str): url to v4 server endpoint - addon_folder (str): local dir path for addons - downloaders (AddonDownloader): factory of downloaders - - Raises: - (RuntimeError) if any addon failed update - """ - addons_info = get_addons_info(server_endpoint) - result = update_addon_state(addons_info, - addon_folder, - downloaders) - if UpdateState.FAILED.value in result.values(): - raise RuntimeError(f"Unable to update some addons {result}") - - -def cli(*args): - raise NotImplementedError diff --git a/common/openpype_common/distribution/addon_info.py b/common/openpype_common/distribution/addon_info.py deleted file mode 100644 index 00ece11f3b..0000000000 --- a/common/openpype_common/distribution/addon_info.py +++ /dev/null @@ -1,80 +0,0 @@ -import attr -from enum import Enum - - -class UrlType(Enum): - HTTP = "http" - GIT = "git" - FILESYSTEM = "filesystem" - - -@attr.s -class MultiPlatformPath(object): - windows = attr.ib(default=None) - linux = attr.ib(default=None) - darwin = attr.ib(default=None) - - -@attr.s -class AddonSource(object): - type = attr.ib() - - -@attr.s -class LocalAddonSource(AddonSource): - path = attr.ib(default=attr.Factory(MultiPlatformPath)) - - -@attr.s -class WebAddonSource(AddonSource): - url = attr.ib(default=None) - - -@attr.s -class VersionData(object): - version_data = attr.ib(default=None) - - -@attr.s -class AddonInfo(object): - """Object matching json payload from Server""" - name = attr.ib() - version = attr.ib() - title = attr.ib(default=None) - sources = attr.ib(default=attr.Factory(dict)) - hash = attr.ib(default=None) - description = attr.ib(default=None) - license = attr.ib(default=None) - authors = attr.ib(default=None) - - @classmethod - def from_dict(cls, data): - sources = [] - - production_version = data.get("productionVersion") - if not production_version: - return - - # server payload contains info about all versions - # active addon must have 'productionVersion' and matching version info - version_data = data.get("versions", {})[production_version] - - for source in version_data.get("clientSourceInfo", []): - if source.get("type") == UrlType.FILESYSTEM.value: - source_addon = LocalAddonSource(type=source["type"], - path=source["path"]) - if source.get("type") == UrlType.HTTP.value: - source_addon = WebAddonSource(type=source["type"], - url=source["url"]) - - 
sources.append(source_addon) - - return cls(name=data.get("name"), - version=production_version, - sources=sources, - hash=data.get("hash"), - description=data.get("description"), - title=data.get("title"), - license=data.get("license"), - authors=data.get("authors")) - diff --git a/common/openpype_common/distribution/tests/test_addon_distributtion.py b/common/openpype_common/distribution/tests/test_addon_distributtion.py deleted file mode 100644 index 765ea0596a..0000000000 --- a/common/openpype_common/distribution/tests/test_addon_distributtion.py +++ /dev/null @@ -1,167 +0,0 @@ -import pytest -import attr -import tempfile - -from common.openpype_common.distribution.addon_distribution import ( - AddonDownloader, - OSAddonDownloader, - HTTPAddonDownloader, - AddonInfo, - update_addon_state, - UpdateState -) -from common.openpype_common.distribution.addon_info import UrlType - - -@pytest.fixture -def addon_downloader(): - addon_downloader = AddonDownloader() - addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader) - addon_downloader.register_format(UrlType.HTTP, HTTPAddonDownloader) - - yield addon_downloader - - -@pytest.fixture -def http_downloader(addon_downloader): - yield addon_downloader.get_downloader(UrlType.HTTP.value) - - -@pytest.fixture -def temp_folder(): - yield tempfile.mkdtemp() - - -@pytest.fixture -def sample_addon_info(): - addon_info = { - "versions": { - "1.0.0": { - "clientPyproject": { - "tool": { - "poetry": { - "dependencies": { - "nxtools": "^1.6", - "orjson": "^3.6.7", - "typer": "^0.4.1", - "email-validator": "^1.1.3", - "python": "^3.10", - "fastapi": "^0.73.0" - } - } - } - }, - "hasSettings": True, - "clientSourceInfo": [ - { - "type": "http", - "url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing" # noqa - }, - { - "type": "filesystem", - "path": { - "windows": ["P:/sources/some_file.zip", - "W:/sources/some_file.zip"], # noqa - "linux": ["/mnt/srv/sources/some_file.zip"], - "darwin": ["/Volumes/srv/sources/some_file.zip"] - } - } - ], - "frontendScopes": { - "project": { - "sidebar": "hierarchy" - } - } - } - }, - "description": "", - "title": "Slack addon", - "name": "openpype_slack", - "productionVersion": "1.0.0", - "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658" # noqa - } - yield addon_info - - -def test_register(printer): - addon_downloader = AddonDownloader() - - assert len(addon_downloader._downloaders) == 0, "Contains registered" - - addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader) - assert len(addon_downloader._downloaders) == 1, "Should contain one" - - -def test_get_downloader(printer, addon_downloader): - assert addon_downloader.get_downloader(UrlType.FILESYSTEM.value), "Should find" # noqa - - with pytest.raises(ValueError): - addon_downloader.get_downloader("unknown"), "Shouldn't find" - - -def test_addon_info(printer, sample_addon_info): - """Tests parsing of expected payload from v4 server into AadonInfo.""" - valid_minimum = { - "name": "openpype_slack", - "productionVersion": "1.0.0", - "versions": { - "1.0.0": { - "clientSourceInfo": [ - { - "type": "filesystem", - "path": { - "windows": [ - "P:/sources/some_file.zip", - "W:/sources/some_file.zip"], - "linux": [ - "/mnt/srv/sources/some_file.zip"], - "darwin": [ - "/Volumes/srv/sources/some_file.zip"] # noqa - } - } - ] - } - } - } - - assert AddonInfo.from_dict(valid_minimum), "Missing required fields" - - valid_minimum["versions"].pop("1.0.0") - with pytest.raises(KeyError): - 
assert not AddonInfo.from_dict(valid_minimum), "Must fail without version data"  # noqa
-
-    valid_minimum.pop("productionVersion")
-    assert not AddonInfo.from_dict(
-        valid_minimum), "none if not productionVersion"  # noqa
-
-    addon = AddonInfo.from_dict(sample_addon_info)
-    assert addon, "Should be created"
-    assert addon.name == "openpype_slack", "Incorrect name"
-    assert addon.version == "1.0.0", "Incorrect version"
-
-    with pytest.raises(TypeError):
-        assert addon["name"], "Dict approach not implemented"
-
-    addon_as_dict = attr.asdict(addon)
-    assert addon_as_dict["name"], "Dict approach should work"
-
-
-def test_update_addon_state(printer, sample_addon_info,
-                            temp_folder, addon_downloader):
-    """Tests possible cases of addon update."""
-    addon_info = AddonInfo.from_dict(sample_addon_info)
-    orig_hash = addon_info.hash
-
-    addon_info.hash = "brokenhash"
-    result = update_addon_state([addon_info], temp_folder, addon_downloader)
-    assert result["openpype_slack_1.0.0"] == UpdateState.FAILED.value, \
-        "Update should failed because of wrong hash"
-
-    addon_info.hash = orig_hash
-    result = update_addon_state([addon_info], temp_folder, addon_downloader)
-    assert result["openpype_slack_1.0.0"] == UpdateState.UPDATED.value, \
-        "Addon should have been updated"
-
-    result = update_addon_state([addon_info], temp_folder, addon_downloader)
-    assert result["openpype_slack_1.0.0"] == UpdateState.EXISTS.value, \
-        "Addon should already exist"
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000000..102da990aa
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,74 @@
+API Documentation
+=================
+
+This documents how to build and modify the API documentation using Sphinx and AutoAPI. The ground for the
+documentation should be directly in the sources - in docstrings and markdown files. Sphinx and AutoAPI will crawl
+over them and generate RST files that are in turn used to generate the HTML documentation. For docstrings we prefer
+"Napoleon" or "Google" style docstrings, but RST is also acceptable, mainly in cases where you need to use Sphinx
+directives.
+
+Using only docstrings is not really viable, as some documentation should be done at a higher level - like an
+overview of some modules/functionality and so on. This should be done directly in RST files and committed to the
+repository.
+
+Configuration
+-------------
+Configuration is done in `/docs/source/conf.py`. The most important settings are:
+
+- `autodoc_mock_imports`: add modules that can't actually be imported by Sphinx in the running environment, like `nuke`, `maya`, etc.
+- `autoapi_ignore`: add directories that shouldn't be processed by **AutoAPI**, like vendor dirs, etc.
+- `html_theme_options`: you can use these options to influence how the HTML theme of the generated files will look.
+- `myst_gfm_only`: a MyST parser option setting which flavour of Markdown should be used.
+
+How to build it
+---------------
+
+You can run:
+
+```sh
+cd .\docs
+make.bat html
+```
+
+on Linux/macOS:
+
+```sh
+cd ./docs
+make html
+```
+
+This will go over our code and generate **.rst** files in `/docs/source/autoapi`, and from those it will generate
+the full HTML documentation in `/docs/build/html`.
+
+During the build you may see tons of red errors that point to our issues:
+
+1) **Wrong imports** -
+Invalid imports are usually wrong relative imports (too deep) or circular imports.
+2) **Invalid docstrings** -
+Docstrings to be processed into documentation need to follow some syntax - this can be checked by running
+`pydocstyle`, which is already included with OpenPype.
+3) **Invalid markdown/rst files** -
+Markdown/RST files can be included inside RST files using the `.. include::` directive, but they have to be properly
+formatted.
+
+Editing RST templates
+---------------------
+Everything starts with `/docs/source/index.rst` - this file should be properly edited. Right now it just
+includes `readme.rst`, which in turn includes and parses the main `README.md`. This is the entry point to the API
+documentation. All templates generated by AutoAPI are in `/docs/source/autoapi`. They should eventually be committed
+to the repository and edited too.
+
+Steps for enhancing API documentation
+-------------------------------------
+
+1) Run `/docs/make.bat html`
+2) Read the red errors/warnings - fix them in the code
+3) Run `/docs/make.bat html` again until there are no red lines
+4) Edit RST files and add some meaningful content there
+
+Resources
+=========
+
+- [ReStructuredText on Wikipedia](https://en.wikipedia.org/wiki/ReStructuredText)
+- [RST Quick Reference](https://docutils.sourceforge.io/docs/user/rst/quickref.html)
+- [Sphinx AutoAPI Documentation](https://sphinx-autoapi.readthedocs.io/en/latest/)
+- [Example of Google Style Python Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
+- [Sphinx Directives](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html)
diff --git a/docs/make.bat b/docs/make.bat
index 4d9eb83d9f..1d261df277 100644
--- a/docs/make.bat
+++ b/docs/make.bat
@@ -5,7 +5,7 @@ pushd %~dp0
 REM Command file for Sphinx documentation

 if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
+	set SPHINXBUILD=..\.poetry\bin\poetry run sphinx-build
 )
 set SOURCEDIR=source
 set BUILDDIR=build
diff --git a/docs/source/_static/AYON_tight_G.svg b/docs/source/_static/AYON_tight_G.svg
new file mode 100644
index 0000000000..2c5b73deea
--- /dev/null
+++ b/docs/source/_static/AYON_tight_G.svg
@@ -0,0 +1,38 @@
+
+
+
+
+
+
+
+
+
+
diff --git a/common/openpype_common/distribution/__init__.py b/docs/source/_static/README.md
similarity index 100%
rename from common/openpype_common/distribution/__init__.py
rename to docs/source/_static/README.md
diff --git a/docs/source/_templates/autoapi/index.rst b/docs/source/_templates/autoapi/index.rst
new file mode 100644
index 0000000000..95d0ad8911
--- /dev/null
+++ b/docs/source/_templates/autoapi/index.rst
@@ -0,0 +1,15 @@
+API Reference
+=============
+
+This page contains auto-generated API reference documentation [#f1]_.
+
+.. toctree::
+   :titlesonly:
+
+   {% for page in pages %}
+   {% if page.top_level_object and page.display %}
+   {{ page.include_path }}
+   {% endif %}
+   {% endfor %}
+
+.. [#f1] Created with `sphinx-autoapi <https://github.com/readthedocs/sphinx-autoapi>`_
diff --git a/docs/source/_templates/autoapi/python/attribute.rst b/docs/source/_templates/autoapi/python/attribute.rst
new file mode 100644
index 0000000000..ebaba555ad
--- /dev/null
+++ b/docs/source/_templates/autoapi/python/attribute.rst
@@ -0,0 +1 @@
+{% extends "python/data.rst" %}
diff --git a/docs/source/_templates/autoapi/python/class.rst b/docs/source/_templates/autoapi/python/class.rst
new file mode 100644
index 0000000000..df5edffb62
--- /dev/null
+++ b/docs/source/_templates/autoapi/python/class.rst
@@ -0,0 +1,58 @@
+{% if obj.display %}
+..
py:{{ obj.type }}:: {{ obj.short_name }}{% if obj.args %}({{ obj.args }}){% endif %} +{% for (args, return_annotation) in obj.overloads %} + {{ " " * (obj.type | length) }} {{ obj.short_name }}{% if args %}({{ args }}){% endif %} +{% endfor %} + + + {% if obj.bases %} + {% if "show-inheritance" in autoapi_options %} + Bases: {% for base in obj.bases %}{{ base|link_objs }}{% if not loop.last %}, {% endif %}{% endfor %} + {% endif %} + + + {% if "show-inheritance-diagram" in autoapi_options and obj.bases != ["object"] %} + .. autoapi-inheritance-diagram:: {{ obj.obj["full_name"] }} + :parts: 1 + {% if "private-members" in autoapi_options %} + :private-bases: + {% endif %} + + {% endif %} + {% endif %} + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} + {% if "inherited-members" in autoapi_options %} + {% set visible_classes = obj.classes|selectattr("display")|list %} + {% else %} + {% set visible_classes = obj.classes|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for klass in visible_classes %} + {{ klass.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_properties = obj.properties|selectattr("display")|list %} + {% else %} + {% set visible_properties = obj.properties|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for property in visible_properties %} + {{ property.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_attributes = obj.attributes|selectattr("display")|list %} + {% else %} + {% set visible_attributes = obj.attributes|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for attribute in visible_attributes %} + {{ attribute.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_methods = obj.methods|selectattr("display")|list %} + {% else %} + {% set visible_methods = obj.methods|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for method in visible_methods %} + {{ method.render()|indent(3) }} + {% endfor %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/data.rst b/docs/source/_templates/autoapi/python/data.rst new file mode 100644 index 0000000000..3d12b2d0c7 --- /dev/null +++ b/docs/source/_templates/autoapi/python/data.rst @@ -0,0 +1,37 @@ +{% if obj.display %} +.. py:{{ obj.type }}:: {{ obj.name }} + {%- if obj.annotation is not none %} + + :type: {%- if obj.annotation %} {{ obj.annotation }}{%- endif %} + + {%- endif %} + + {%- if obj.value is not none %} + + :value: {% if obj.value is string and obj.value.splitlines()|count > 1 -%} + Multiline-String + + .. raw:: html + +
<details><summary>Show Value</summary>
+
+    .. code-block:: python
+
+        """{{ obj.value|indent(width=8,blank=true) }}"""
+
+    .. raw:: html
+
+        </details>
+ + {%- else -%} + {%- if obj.value is string -%} + {{ "%r" % obj.value|string|truncate(100) }} + {%- else -%} + {{ obj.value|string|truncate(100) }} + {%- endif -%} + {%- endif %} + {%- endif %} + + + {{ obj.docstring|indent(3) }} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/exception.rst b/docs/source/_templates/autoapi/python/exception.rst new file mode 100644 index 0000000000..92f3d38fd5 --- /dev/null +++ b/docs/source/_templates/autoapi/python/exception.rst @@ -0,0 +1 @@ +{% extends "python/class.rst" %} diff --git a/docs/source/_templates/autoapi/python/function.rst b/docs/source/_templates/autoapi/python/function.rst new file mode 100644 index 0000000000..b00d5c2445 --- /dev/null +++ b/docs/source/_templates/autoapi/python/function.rst @@ -0,0 +1,15 @@ +{% if obj.display %} +.. py:function:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %} + +{% for (args, return_annotation) in obj.overloads %} + {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %} + +{% endfor %} + {% for property in obj.properties %} + :{{ property }}: + {% endfor %} + + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/method.rst b/docs/source/_templates/autoapi/python/method.rst new file mode 100644 index 0000000000..723cb7bbe5 --- /dev/null +++ b/docs/source/_templates/autoapi/python/method.rst @@ -0,0 +1,19 @@ +{%- if obj.display %} +.. py:method:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %} + +{% for (args, return_annotation) in obj.overloads %} + {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %} + +{% endfor %} + {% if obj.properties %} + {% for property in obj.properties %} + :{{ property }}: + {% endfor %} + + {% else %} + + {% endif %} + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/module.rst b/docs/source/_templates/autoapi/python/module.rst new file mode 100644 index 0000000000..d2714f6c9d --- /dev/null +++ b/docs/source/_templates/autoapi/python/module.rst @@ -0,0 +1,114 @@ +{% if not obj.display %} +:orphan: + +{% endif %} +:py:mod:`{{ obj.name }}` +=========={{ "=" * obj.name|length }} + +.. py:module:: {{ obj.name }} + +{% if obj.docstring %} +.. autoapi-nested-parse:: + + {{ obj.docstring|indent(3) }} + +{% endif %} + +{% block subpackages %} +{% set visible_subpackages = obj.subpackages|selectattr("display")|list %} +{% if visible_subpackages %} +Subpackages +----------- +.. toctree:: + :titlesonly: + :maxdepth: 3 + +{% for subpackage in visible_subpackages %} + {{ subpackage.short_name }}/index.rst +{% endfor %} + + +{% endif %} +{% endblock %} +{% block submodules %} +{% set visible_submodules = obj.submodules|selectattr("display")|list %} +{% if visible_submodules %} +Submodules +---------- +.. 
toctree:: + :titlesonly: + :maxdepth: 1 + +{% for submodule in visible_submodules %} + {{ submodule.short_name }}/index.rst +{% endfor %} + + +{% endif %} +{% endblock %} +{% block content %} +{% if obj.all is not none %} +{% set visible_children = obj.children|selectattr("short_name", "in", obj.all)|list %} +{% elif obj.type is equalto("package") %} +{% set visible_children = obj.children|selectattr("display")|list %} +{% else %} +{% set visible_children = obj.children|selectattr("display")|rejectattr("imported")|list %} +{% endif %} +{% if visible_children %} +{{ obj.type|title }} Contents +{{ "-" * obj.type|length }}--------- + +{% set visible_classes = visible_children|selectattr("type", "equalto", "class")|list %} +{% set visible_functions = visible_children|selectattr("type", "equalto", "function")|list %} +{% set visible_attributes = visible_children|selectattr("type", "equalto", "data")|list %} +{% if "show-module-summary" in autoapi_options and (visible_classes or visible_functions) %} +{% block classes scoped %} +{% if visible_classes %} +Classes +~~~~~~~ + +.. autoapisummary:: + +{% for klass in visible_classes %} + {{ klass.id }} +{% endfor %} + + +{% endif %} +{% endblock %} + +{% block functions scoped %} +{% if visible_functions %} +Functions +~~~~~~~~~ + +.. autoapisummary:: + +{% for function in visible_functions %} + {{ function.id }} +{% endfor %} + + +{% endif %} +{% endblock %} + +{% block attributes scoped %} +{% if visible_attributes %} +Attributes +~~~~~~~~~~ + +.. autoapisummary:: + +{% for attribute in visible_attributes %} + {{ attribute.id }} +{% endfor %} + + +{% endif %} +{% endblock %} +{% endif %} +{% for obj_item in visible_children %} +{{ obj_item.render()|indent(0) }} +{% endfor %} +{% endif %} +{% endblock %} diff --git a/docs/source/_templates/autoapi/python/package.rst b/docs/source/_templates/autoapi/python/package.rst new file mode 100644 index 0000000000..fb9a64965e --- /dev/null +++ b/docs/source/_templates/autoapi/python/package.rst @@ -0,0 +1 @@ +{% extends "python/module.rst" %} diff --git a/docs/source/_templates/autoapi/python/property.rst b/docs/source/_templates/autoapi/python/property.rst new file mode 100644 index 0000000000..70af24236f --- /dev/null +++ b/docs/source/_templates/autoapi/python/property.rst @@ -0,0 +1,15 @@ +{%- if obj.display %} +.. 
py:property:: {{ obj.short_name }}
+   {% if obj.annotation %}
+   :type: {{ obj.annotation }}
+   {% endif %}
+   {% if obj.properties %}
+   {% for property in obj.properties %}
+   :{{ property }}:
+   {% endfor %}
+   {% endif %}
+
+   {% if obj.docstring %}
+   {{ obj.docstring|indent(3) }}
+   {% endif %}
+{% endif %}
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 5b34ff8dc0..916a397e8e 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -17,18 +17,29 @@
 import os
 import sys

-pype_root = os.path.abspath('../..')
-sys.path.insert(0, pype_root)
+import revitron_sphinx_theme
+
+openpype_root = os.path.abspath('../..')
+sys.path.insert(0, openpype_root)
+# app = QApplication([])
+
+"""
 repos = os.listdir(os.path.abspath("../../repos"))
-repos = [os.path.join(pype_root, "repos", repo) for repo in repos]
+repos = [os.path.join(openpype_root, "repos", repo) for repo in repos]
 for repo in repos:
     sys.path.append(repo)
+"""
+
+todo_include_todos = True
+autodoc_mock_imports = ["maya", "pymel", "nuke", "nukestudio", "nukescripts",
+                        "hiero", "bpy", "fusion", "houdini", "hou", "unreal",
+                        "__builtin__", "resolve", "pysync", "DaVinciResolveScript"]

 # -- Project information -----------------------------------------------------

-project = 'pype'
-copyright = '2019, Orbi Tools'
-author = 'Orbi Tools'
+project = 'OpenPype'
+copyright = '2023 Ynput'
+author = 'Ynput'

 # The short X.Y version
 version = ''
@@ -52,11 +63,41 @@ extensions = [
     'sphinx.ext.todo',
     'sphinx.ext.coverage',
     'sphinx.ext.mathjax',
-    'sphinx.ext.viewcode',
     'sphinx.ext.autosummary',
-    'recommonmark'
+    'revitron_sphinx_theme',
+    'autoapi.extension',
+    'myst_parser'
 ]

+##############################
+# Autoapi settings
+##############################
+
+autoapi_dirs = ['../../openpype', '../../igniter']
+
+# bypass modules with a lot of python2 content for now
+autoapi_ignore = [
+    "*vendor*",
+    "*schemas*",
+    "*startup/*",
+    "*/website*",
+    "*openpype/hooks*",
+    "*openpype/style*",
+    "openpype/tests*",
+    # too many levels of relative import:
+    "*/modules/sync_server/*"
+]
+autoapi_keep_files = True
+autoapi_options = [
+    'members',
+    'undoc-members',
+    'show-inheritance',
+    'show-module-summary'
+]
+autoapi_add_toctree_entry = True
+autoapi_template_dir = '_templates/autoapi'
+
+
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']

@@ -64,7 +105,7 @@ templates_path = ['_templates']
 # You can specify multiple suffix as a list of string:
 #
 # source_suffix = ['.rst', '.md']
-source_suffix = '.rst'
+source_suffix = ['.rst', '.md']

 # The master toctree document.
 master_doc = 'index'
@@ -74,12 +115,15 @@ master_doc = 'index'
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = None
+language = "English"

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = []
+exclude_patterns = [
+    "openpype.hosts.resolve.*",
+    "openpype.tools.*"
+    ]

 # The name of the Pygments (syntax highlighting) style to use.
 pygments_style = 'friendly'
@@ -97,15 +141,22 @@ autosummary_generate = True
 # The theme to use for HTML and HTML Help pages.  See the documentation for
 # a list of builtin themes.
 #
-html_theme = 'sphinx_rtd_theme'
+html_theme = 'revitron_sphinx_theme'

 # Theme options are theme-specific and customize the look and feel of a theme
 # further.
For a list of options available for each theme, see the # documentation. # html_theme_options = { - 'collapse_navigation': False + 'collapse_navigation': True, + 'sticky_navigation': True, + 'navigation_depth': 4, + 'includehidden': True, + 'titles_only': False, + 'github_url': '', } +html_logo = '_static/AYON_tight_G.svg' + # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, @@ -153,8 +204,8 @@ latex_elements = { # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ - (master_doc, 'pype.tex', 'pype Documentation', - 'OrbiTools', 'manual'), + (master_doc, 'openpype.tex', 'OpenPype Documentation', + 'Ynput', 'manual'), ] @@ -163,7 +214,7 @@ latex_documents = [ # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ - (master_doc, 'pype', 'pype Documentation', + (master_doc, 'openpype', 'OpenPype Documentation', [author], 1) ] @@ -174,8 +225,8 @@ man_pages = [ # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ - (master_doc, 'pype', 'pype Documentation', - author, 'pype', 'One line description of project.', + (master_doc, 'OpenPype', 'OpenPype Documentation', + author, 'OpenPype', 'Pipeline for studios', 'Miscellaneous'), ] @@ -207,7 +258,4 @@ intersphinx_mapping = { 'https://docs.python.org/3/': None } -# -- Options for todo extension ---------------------------------------------- - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = True +myst_gfm_only = True diff --git a/docs/source/igniter.bootstrap_repos.rst b/docs/source/igniter.bootstrap_repos.rst deleted file mode 100644 index 7c6e0a0757..0000000000 --- a/docs/source/igniter.bootstrap_repos.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.bootstrap\_repos module -=============================== - -.. automodule:: igniter.bootstrap_repos - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.install_dialog.rst b/docs/source/igniter.install_dialog.rst deleted file mode 100644 index bf30ec270e..0000000000 --- a/docs/source/igniter.install_dialog.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.install\_dialog module -============================== - -.. automodule:: igniter.install_dialog - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.install_thread.rst b/docs/source/igniter.install_thread.rst deleted file mode 100644 index 6c19516219..0000000000 --- a/docs/source/igniter.install_thread.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.install\_thread module -============================== - -.. automodule:: igniter.install_thread - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.rst b/docs/source/igniter.rst deleted file mode 100644 index b4aebe88b0..0000000000 --- a/docs/source/igniter.rst +++ /dev/null @@ -1,42 +0,0 @@ -igniter package -=============== - -.. automodule:: igniter - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -igniter.bootstrap\_repos module -------------------------------- - -.. automodule:: igniter.bootstrap_repos - :members: - :undoc-members: - :show-inheritance: - -igniter.install\_dialog module ------------------------------- - -.. 
automodule:: igniter.install_dialog - :members: - :undoc-members: - :show-inheritance: - -igniter.install\_thread module ------------------------------- - -.. automodule:: igniter.install_thread - :members: - :undoc-members: - :show-inheritance: - -igniter.tools module --------------------- - -.. automodule:: igniter.tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.tools.rst b/docs/source/igniter.tools.rst deleted file mode 100644 index 4fdbdf9d29..0000000000 --- a/docs/source/igniter.tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.tools module -==================== - -.. automodule:: igniter.tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/index.rst b/docs/source/index.rst index b54d153894..f703468fca 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -1,14 +1,15 @@ -.. pype documentation master file, created by +.. openpype documentation master file, created by sphinx-quickstart on Mon May 13 17:18:23 2019. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. -Welcome to pype's documentation! -================================ +Welcome to OpenPype's API documentation! +======================================== .. toctree:: - readme - modules + + Readme + Indices and tables ================== diff --git a/docs/source/modules.rst b/docs/source/modules.rst deleted file mode 100644 index 1956d9ed04..0000000000 --- a/docs/source/modules.rst +++ /dev/null @@ -1,8 +0,0 @@ -igniter -======= - -.. toctree:: - :maxdepth: 6 - - igniter - pype \ No newline at end of file diff --git a/docs/source/pype.action.rst b/docs/source/pype.action.rst deleted file mode 100644 index 62a32e08b5..0000000000 --- a/docs/source/pype.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.action module -================== - -.. automodule:: pype.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.api.rst b/docs/source/pype.api.rst deleted file mode 100644 index af3602a895..0000000000 --- a/docs/source/pype.api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.api module -=============== - -.. automodule:: pype.api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.cli.rst b/docs/source/pype.cli.rst deleted file mode 100644 index 7e4a336fa9..0000000000 --- a/docs/source/pype.cli.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.cli module -=============== - -.. automodule:: pype.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.aftereffects.rst b/docs/source/pype.hosts.aftereffects.rst deleted file mode 100644 index 3c2b2dda41..0000000000 --- a/docs/source/pype.hosts.aftereffects.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.aftereffects package -=============================== - -.. automodule:: pype.hosts.aftereffects - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.action.rst b/docs/source/pype.hosts.blender.action.rst deleted file mode 100644 index a6444b1efc..0000000000 --- a/docs/source/pype.hosts.blender.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.blender.action module -================================ - -.. 
automodule:: pype.hosts.blender.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.plugin.rst b/docs/source/pype.hosts.blender.plugin.rst deleted file mode 100644 index cf6a8feec8..0000000000 --- a/docs/source/pype.hosts.blender.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.blender.plugin module -================================ - -.. automodule:: pype.hosts.blender.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.rst b/docs/source/pype.hosts.blender.rst deleted file mode 100644 index 19cb85e5f3..0000000000 --- a/docs/source/pype.hosts.blender.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.blender package -========================== - -.. automodule:: pype.hosts.blender - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.blender.action module --------------------------------- - -.. automodule:: pype.hosts.blender.action - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.blender.plugin module --------------------------------- - -.. automodule:: pype.hosts.blender.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.celaction.cli.rst b/docs/source/pype.hosts.celaction.cli.rst deleted file mode 100644 index c8843b90bd..0000000000 --- a/docs/source/pype.hosts.celaction.cli.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.celaction.cli module -=============================== - -.. automodule:: pype.hosts.celaction.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.celaction.rst b/docs/source/pype.hosts.celaction.rst deleted file mode 100644 index 1aa236397e..0000000000 --- a/docs/source/pype.hosts.celaction.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.hosts.celaction package -============================ - -.. automodule:: pype.hosts.celaction - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.celaction.cli module -------------------------------- - -.. automodule:: pype.hosts.celaction.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.lib.rst b/docs/source/pype.hosts.fusion.lib.rst deleted file mode 100644 index 32b8f501f5..0000000000 --- a/docs/source/pype.hosts.fusion.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.lib module -============================ - -.. automodule:: pype.hosts.fusion.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.menu.rst b/docs/source/pype.hosts.fusion.menu.rst deleted file mode 100644 index ec5bf76612..0000000000 --- a/docs/source/pype.hosts.fusion.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.menu module -============================= - -.. automodule:: pype.hosts.fusion.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.pipeline.rst b/docs/source/pype.hosts.fusion.pipeline.rst deleted file mode 100644 index ff2a6440a8..0000000000 --- a/docs/source/pype.hosts.fusion.pipeline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.pipeline module -================================= - -.. 
automodule:: pype.hosts.fusion.pipeline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.rst b/docs/source/pype.hosts.fusion.rst deleted file mode 100644 index 7c2fee827c..0000000000 --- a/docs/source/pype.hosts.fusion.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.fusion package -========================= - -.. automodule:: pype.hosts.fusion - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.hosts.fusion.scripts - -Submodules ----------- - -pype.hosts.fusion.lib module ----------------------------- - -.. automodule:: pype.hosts.fusion.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst b/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst deleted file mode 100644 index 2503c20f3b..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.duplicate\_with\_inputs module -======================================================== - -.. automodule:: pype.hosts.fusion.scripts.duplicate_with_inputs - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst b/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst deleted file mode 100644 index 770300116f..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.fusion\_switch\_shot module -===================================================== - -.. automodule:: pype.hosts.fusion.scripts.fusion_switch_shot - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.rst b/docs/source/pype.hosts.fusion.scripts.rst deleted file mode 100644 index 5de5f66652..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.fusion.scripts package -================================= - -.. automodule:: pype.hosts.fusion.scripts - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.fusion.scripts.fusion\_switch\_shot module ------------------------------------------------------ - -.. automodule:: pype.hosts.fusion.scripts.fusion_switch_shot - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.fusion.scripts.publish\_filesequence module ------------------------------------------------------- - -.. automodule:: pype.hosts.fusion.scripts.publish_filesequence - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst b/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst deleted file mode 100644 index 27bff63466..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.set\_rendermode module -================================================ - -.. automodule:: pype.hosts.fusion.scripts.set_rendermode - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.utils.rst b/docs/source/pype.hosts.fusion.utils.rst deleted file mode 100644 index b6de3d0510..0000000000 --- a/docs/source/pype.hosts.fusion.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.utils module -============================== - -.. 
automodule:: pype.hosts.fusion.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.harmony.rst b/docs/source/pype.hosts.harmony.rst deleted file mode 100644 index 60e1fcdce6..0000000000 --- a/docs/source/pype.hosts.harmony.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.harmony package -========================== - -.. automodule:: pype.hosts.harmony - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.events.rst b/docs/source/pype.hosts.hiero.events.rst deleted file mode 100644 index 874abbffba..0000000000 --- a/docs/source/pype.hosts.hiero.events.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.events module -============================== - -.. automodule:: pype.hosts.hiero.events - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.lib.rst b/docs/source/pype.hosts.hiero.lib.rst deleted file mode 100644 index 8c0d33b03b..0000000000 --- a/docs/source/pype.hosts.hiero.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.lib module -=========================== - -.. automodule:: pype.hosts.hiero.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.menu.rst b/docs/source/pype.hosts.hiero.menu.rst deleted file mode 100644 index baa1317e61..0000000000 --- a/docs/source/pype.hosts.hiero.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.menu module -============================ - -.. automodule:: pype.hosts.hiero.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.rst b/docs/source/pype.hosts.hiero.rst deleted file mode 100644 index 9a7891b45e..0000000000 --- a/docs/source/pype.hosts.hiero.rst +++ /dev/null @@ -1,19 +0,0 @@ -pype.hosts.hiero package -======================== - -.. automodule:: pype.hosts.hiero - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.hosts.hiero.events - pype.hosts.hiero.lib - pype.hosts.hiero.menu - pype.hosts.hiero.tags - pype.hosts.hiero.workio diff --git a/docs/source/pype.hosts.hiero.tags.rst b/docs/source/pype.hosts.hiero.tags.rst deleted file mode 100644 index 0df33279d5..0000000000 --- a/docs/source/pype.hosts.hiero.tags.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.tags module -============================ - -.. automodule:: pype.hosts.hiero.tags - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.workio.rst b/docs/source/pype.hosts.hiero.workio.rst deleted file mode 100644 index 11aae43212..0000000000 --- a/docs/source/pype.hosts.hiero.workio.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.workio module -============================== - -.. automodule:: pype.hosts.hiero.workio - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.houdini.lib.rst b/docs/source/pype.hosts.houdini.lib.rst deleted file mode 100644 index ba6e60d5f3..0000000000 --- a/docs/source/pype.hosts.houdini.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.houdini.lib module -============================= - -.. automodule:: pype.hosts.houdini.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.houdini.rst b/docs/source/pype.hosts.houdini.rst deleted file mode 100644 index 5db18ab3d4..0000000000 --- a/docs/source/pype.hosts.houdini.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.hosts.houdini package -========================== - -.. 
automodule:: pype.hosts.houdini
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.houdini.lib module
------------------------------
-
-.. automodule:: pype.hosts.houdini.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.action.rst b/docs/source/pype.hosts.maya.action.rst
deleted file mode 100644
index e1ad7e5d43..0000000000
--- a/docs/source/pype.hosts.maya.action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.action module
-=============================
-
-.. automodule:: pype.hosts.maya.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.customize.rst b/docs/source/pype.hosts.maya.customize.rst
deleted file mode 100644
index 335e75b0d4..0000000000
--- a/docs/source/pype.hosts.maya.customize.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.customize module
-================================
-
-.. automodule:: pype.hosts.maya.customize
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.expected_files.rst b/docs/source/pype.hosts.maya.expected_files.rst
deleted file mode 100644
index 0ecf22e502..0000000000
--- a/docs/source/pype.hosts.maya.expected_files.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.expected\_files module
-======================================
-
-.. automodule:: pype.hosts.maya.expected_files
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.lib.rst b/docs/source/pype.hosts.maya.lib.rst
deleted file mode 100644
index 7d7dbe4502..0000000000
--- a/docs/source/pype.hosts.maya.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.lib module
-==========================
-
-.. automodule:: pype.hosts.maya.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.menu.rst b/docs/source/pype.hosts.maya.menu.rst
deleted file mode 100644
index 614e113769..0000000000
--- a/docs/source/pype.hosts.maya.menu.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.menu module
-===========================
-
-.. automodule:: pype.hosts.maya.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.plugin.rst b/docs/source/pype.hosts.maya.plugin.rst
deleted file mode 100644
index 5796b40c70..0000000000
--- a/docs/source/pype.hosts.maya.plugin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.maya.plugin module
-=============================
-
-.. automodule:: pype.hosts.maya.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.maya.rst b/docs/source/pype.hosts.maya.rst
deleted file mode 100644
index 0beab888fc..0000000000
--- a/docs/source/pype.hosts.maya.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.hosts.maya package
-=======================
-
-.. automodule:: pype.hosts.maya
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.maya.action module
------------------------------
-
-.. automodule:: pype.hosts.maya.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.maya.customize module
---------------------------------
-
-.. automodule:: pype.hosts.maya.customize
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.maya.expected\_files module
---------------------------------------
-
-.. automodule:: pype.hosts.maya.expected_files
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.maya.lib module
---------------------------
-
-.. automodule:: pype.hosts.maya.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.maya.menu module
----------------------------
-
-.. automodule:: pype.hosts.maya.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.maya.plugin module
------------------------------
-
-.. automodule:: pype.hosts.maya.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.actions.rst b/docs/source/pype.hosts.nuke.actions.rst
deleted file mode 100644
index d5e8849a38..0000000000
--- a/docs/source/pype.hosts.nuke.actions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.actions module
-==============================
-
-.. automodule:: pype.hosts.nuke.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.lib.rst b/docs/source/pype.hosts.nuke.lib.rst
deleted file mode 100644
index c177a27f2d..0000000000
--- a/docs/source/pype.hosts.nuke.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.lib module
-==========================
-
-.. automodule:: pype.hosts.nuke.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.menu.rst b/docs/source/pype.hosts.nuke.menu.rst
deleted file mode 100644
index 190e488b95..0000000000
--- a/docs/source/pype.hosts.nuke.menu.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.menu module
-===========================
-
-.. automodule:: pype.hosts.nuke.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.plugin.rst b/docs/source/pype.hosts.nuke.plugin.rst
deleted file mode 100644
index ddd5f1db89..0000000000
--- a/docs/source/pype.hosts.nuke.plugin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.plugin module
-=============================
-
-.. automodule:: pype.hosts.nuke.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.presets.rst b/docs/source/pype.hosts.nuke.presets.rst
deleted file mode 100644
index a69aa8a367..0000000000
--- a/docs/source/pype.hosts.nuke.presets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.presets module
-==============================
-
-.. automodule:: pype.hosts.nuke.presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.rst b/docs/source/pype.hosts.nuke.rst
deleted file mode 100644
index 559de65927..0000000000
--- a/docs/source/pype.hosts.nuke.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.hosts.nuke package
-=======================
-
-.. automodule:: pype.hosts.nuke
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.nuke.actions module
-------------------------------
-
-.. automodule:: pype.hosts.nuke.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nuke.lib module
---------------------------
-
-.. automodule:: pype.hosts.nuke.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nuke.menu module
----------------------------
-
-.. automodule:: pype.hosts.nuke.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nuke.plugin module
------------------------------
-
-.. automodule:: pype.hosts.nuke.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nuke.presets module
-------------------------------
-
-.. automodule:: pype.hosts.nuke.presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nuke.utils module
-----------------------------
-
-.. automodule:: pype.hosts.nuke.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nuke.utils.rst b/docs/source/pype.hosts.nuke.utils.rst
deleted file mode 100644
index 66974dc707..0000000000
--- a/docs/source/pype.hosts.nuke.utils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.nuke.utils module
-============================
-
-.. automodule:: pype.hosts.nuke.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.nukestudio.rst b/docs/source/pype.hosts.nukestudio.rst
deleted file mode 100644
index c718d699fa..0000000000
--- a/docs/source/pype.hosts.nukestudio.rst
+++ /dev/null
@@ -1,50 +0,0 @@
-pype.hosts.nukestudio package
-=============================
-
-.. automodule:: pype.hosts.nukestudio
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.nukestudio.events module
------------------------------------
-
-.. automodule:: pype.hosts.nukestudio.events
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nukestudio.lib module
---------------------------------
-
-.. automodule:: pype.hosts.nukestudio.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nukestudio.menu module
----------------------------------
-
-.. automodule:: pype.hosts.nukestudio.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nukestudio.tags module
----------------------------------
-
-.. automodule:: pype.hosts.nukestudio.tags
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.nukestudio.workio module
------------------------------------
-
-.. automodule:: pype.hosts.nukestudio.workio
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.photoshop.rst b/docs/source/pype.hosts.photoshop.rst
deleted file mode 100644
index f77ea79874..0000000000
--- a/docs/source/pype.hosts.photoshop.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.photoshop package
-============================
-
-.. automodule:: pype.hosts.photoshop
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.premiere.lib.rst b/docs/source/pype.hosts.premiere.lib.rst
deleted file mode 100644
index e2c2723841..0000000000
--- a/docs/source/pype.hosts.premiere.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.premiere.lib module
-==============================
-
-.. automodule:: pype.hosts.premiere.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.premiere.rst b/docs/source/pype.hosts.premiere.rst
deleted file mode 100644
index 7c38d52c22..0000000000
--- a/docs/source/pype.hosts.premiere.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.hosts.premiere package
-===========================
-
-.. automodule:: pype.hosts.premiere
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.premiere.lib module
-------------------------------
-
-.. automodule:: pype.hosts.premiere.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.action.rst b/docs/source/pype.hosts.resolve.action.rst
deleted file mode 100644
index 781694781f..0000000000
--- a/docs/source/pype.hosts.resolve.action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.action module
-================================
-
-.. automodule:: pype.hosts.resolve.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.lib.rst b/docs/source/pype.hosts.resolve.lib.rst
deleted file mode 100644
index 5860f783cc..0000000000
--- a/docs/source/pype.hosts.resolve.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.lib module
-=============================
-
-.. automodule:: pype.hosts.resolve.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.menu.rst b/docs/source/pype.hosts.resolve.menu.rst
deleted file mode 100644
index df87dcde98..0000000000
--- a/docs/source/pype.hosts.resolve.menu.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.menu module
-==============================
-
-.. automodule:: pype.hosts.resolve.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.otio.davinci_export.rst b/docs/source/pype.hosts.resolve.otio.davinci_export.rst
deleted file mode 100644
index 498f96a7ed..0000000000
--- a/docs/source/pype.hosts.resolve.otio.davinci_export.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.otio.davinci\_export module
-==============================================
-
-.. automodule:: pype.hosts.resolve.otio.davinci_export
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.otio.davinci_import.rst b/docs/source/pype.hosts.resolve.otio.davinci_import.rst
deleted file mode 100644
index 30f43cc9fe..0000000000
--- a/docs/source/pype.hosts.resolve.otio.davinci_import.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.otio.davinci\_import module
-==============================================
-
-.. automodule:: pype.hosts.resolve.otio.davinci_import
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.otio.rst b/docs/source/pype.hosts.resolve.otio.rst
deleted file mode 100644
index 523d8937ca..0000000000
--- a/docs/source/pype.hosts.resolve.otio.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.hosts.resolve.otio package
-===============================
-
-.. automodule:: pype.hosts.resolve.otio
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.hosts.resolve.otio.davinci_export
-   pype.hosts.resolve.otio.davinci_import
-   pype.hosts.resolve.otio.utils
diff --git a/docs/source/pype.hosts.resolve.otio.utils.rst b/docs/source/pype.hosts.resolve.otio.utils.rst
deleted file mode 100644
index 765f492732..0000000000
--- a/docs/source/pype.hosts.resolve.otio.utils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.otio.utils module
-====================================
-
-.. automodule:: pype.hosts.resolve.otio.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.pipeline.rst b/docs/source/pype.hosts.resolve.pipeline.rst
deleted file mode 100644
index 3efc24137b..0000000000
--- a/docs/source/pype.hosts.resolve.pipeline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.pipeline module
-==================================
-
-.. automodule:: pype.hosts.resolve.pipeline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.plugin.rst b/docs/source/pype.hosts.resolve.plugin.rst
deleted file mode 100644
index 26f6c56aef..0000000000
--- a/docs/source/pype.hosts.resolve.plugin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.plugin module
-================================
-
-.. automodule:: pype.hosts.resolve.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.preload_console.rst b/docs/source/pype.hosts.resolve.preload_console.rst
deleted file mode 100644
index 0d38ae14ea..0000000000
--- a/docs/source/pype.hosts.resolve.preload_console.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.preload\_console module
-==========================================
-
-.. automodule:: pype.hosts.resolve.preload_console
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.rst b/docs/source/pype.hosts.resolve.rst
deleted file mode 100644
index 368129e43e..0000000000
--- a/docs/source/pype.hosts.resolve.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-pype.hosts.resolve package
-==========================
-
-.. automodule:: pype.hosts.resolve
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.resolve.action module
---------------------------------
-
-.. automodule:: pype.hosts.resolve.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.lib module
------------------------------
-
-.. automodule:: pype.hosts.resolve.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.menu module
-------------------------------
-
-.. automodule:: pype.hosts.resolve.menu
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.pipeline module
-----------------------------------
-
-.. automodule:: pype.hosts.resolve.pipeline
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.plugin module
---------------------------------
-
-.. automodule:: pype.hosts.resolve.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.preload\_console module
-------------------------------------------
-
-.. automodule:: pype.hosts.resolve.preload_console
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.utils module
--------------------------------
-
-.. automodule:: pype.hosts.resolve.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.resolve.workio module
---------------------------------
-
-.. automodule:: pype.hosts.resolve.workio
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.todo-rendering.rst b/docs/source/pype.hosts.resolve.todo-rendering.rst
deleted file mode 100644
index 8ea80183ce..0000000000
--- a/docs/source/pype.hosts.resolve.todo-rendering.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.todo\-rendering module
-=========================================
-
-.. automodule:: pype.hosts.resolve.todo-rendering
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.utils.rst b/docs/source/pype.hosts.resolve.utils.rst
deleted file mode 100644
index e390a5d026..0000000000
--- a/docs/source/pype.hosts.resolve.utils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.utils module
-===============================
-
-.. automodule:: pype.hosts.resolve.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.resolve.workio.rst b/docs/source/pype.hosts.resolve.workio.rst
deleted file mode 100644
index 5dceb99d64..0000000000
--- a/docs/source/pype.hosts.resolve.workio.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.resolve.workio module
-================================
-
-.. automodule:: pype.hosts.resolve.workio
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.rst b/docs/source/pype.hosts.rst
deleted file mode 100644
index e2d9121501..0000000000
--- a/docs/source/pype.hosts.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.hosts package
-==================
-
-.. automodule:: pype.hosts
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.hosts.blender
-   pype.hosts.celaction
-   pype.hosts.fusion
-   pype.hosts.harmony
-   pype.hosts.houdini
-   pype.hosts.maya
-   pype.hosts.nuke
-   pype.hosts.nukestudio
-   pype.hosts.photoshop
-   pype.hosts.premiere
-   pype.hosts.resolve
-   pype.hosts.unreal
diff --git a/docs/source/pype.hosts.tvpaint.api.rst b/docs/source/pype.hosts.tvpaint.api.rst
deleted file mode 100644
index 43273e8ec5..0000000000
--- a/docs/source/pype.hosts.tvpaint.api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.tvpaint.api package
-==============================
-
-.. automodule:: pype.hosts.tvpaint.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.tvpaint.rst b/docs/source/pype.hosts.tvpaint.rst
deleted file mode 100644
index 561be3a9dc..0000000000
--- a/docs/source/pype.hosts.tvpaint.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.hosts.tvpaint package
-==========================
-
-.. automodule:: pype.hosts.tvpaint
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.hosts.tvpaint.api
diff --git a/docs/source/pype.hosts.unreal.lib.rst b/docs/source/pype.hosts.unreal.lib.rst
deleted file mode 100644
index b891e71c47..0000000000
--- a/docs/source/pype.hosts.unreal.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.unreal.lib module
-============================
-
-.. automodule:: pype.hosts.unreal.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.unreal.plugin.rst b/docs/source/pype.hosts.unreal.plugin.rst
deleted file mode 100644
index e3ef81c7c7..0000000000
--- a/docs/source/pype.hosts.unreal.plugin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.hosts.unreal.plugin module
-===============================
-
-.. automodule:: pype.hosts.unreal.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.hosts.unreal.rst b/docs/source/pype.hosts.unreal.rst
deleted file mode 100644
index f46140298b..0000000000
--- a/docs/source/pype.hosts.unreal.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.hosts.unreal package
-=========================
-
-.. automodule:: pype.hosts.unreal
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.hosts.unreal.lib module
-----------------------------
-
-.. automodule:: pype.hosts.unreal.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.hosts.unreal.plugin module
--------------------------------
-
-.. automodule:: pype.hosts.unreal.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.launcher_actions.rst b/docs/source/pype.launcher_actions.rst
deleted file mode 100644
index c7525acbd1..0000000000
--- a/docs/source/pype.launcher_actions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.launcher\_actions module
-=============================
-
-.. automodule:: pype.launcher_actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.abstract_collect_render.rst b/docs/source/pype.lib.abstract_collect_render.rst
deleted file mode 100644
index d6adadc271..0000000000
--- a/docs/source/pype.lib.abstract_collect_render.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.abstract\_collect\_render module
-=========================================
-
-.. automodule:: pype.lib.abstract_collect_render
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.abstract_expected_files.rst b/docs/source/pype.lib.abstract_expected_files.rst
deleted file mode 100644
index 904aeb3375..0000000000
--- a/docs/source/pype.lib.abstract_expected_files.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.abstract\_expected\_files module
-=========================================
-
-.. automodule:: pype.lib.abstract_expected_files
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.abstract_metaplugins.rst b/docs/source/pype.lib.abstract_metaplugins.rst
deleted file mode 100644
index 9f2751b630..0000000000
--- a/docs/source/pype.lib.abstract_metaplugins.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.abstract\_metaplugins module
-=====================================
-
-.. automodule:: pype.lib.abstract_metaplugins
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.abstract_submit_deadline.rst b/docs/source/pype.lib.abstract_submit_deadline.rst
deleted file mode 100644
index a57222add3..0000000000
--- a/docs/source/pype.lib.abstract_submit_deadline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.abstract\_submit\_deadline module
-==========================================
-
-.. automodule:: pype.lib.abstract_submit_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.anatomy.rst b/docs/source/pype.lib.anatomy.rst
deleted file mode 100644
index 7bddb37c8a..0000000000
--- a/docs/source/pype.lib.anatomy.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.anatomy module
-=======================
-
-.. automodule:: pype.lib.anatomy
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.applications.rst b/docs/source/pype.lib.applications.rst
deleted file mode 100644
index 8d1ff9b2c6..0000000000
--- a/docs/source/pype.lib.applications.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.applications module
-============================
-
-.. automodule:: pype.lib.applications
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.avalon_context.rst b/docs/source/pype.lib.avalon_context.rst
deleted file mode 100644
index 067ea3380f..0000000000
--- a/docs/source/pype.lib.avalon_context.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.avalon\_context module
-===============================
-
-.. automodule:: pype.lib.avalon_context
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.config.rst b/docs/source/pype.lib.config.rst
deleted file mode 100644
index ce4c13f4e7..0000000000
--- a/docs/source/pype.lib.config.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.config module
-======================
-
-.. automodule:: pype.lib.config
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.deprecated.rst b/docs/source/pype.lib.deprecated.rst
deleted file mode 100644
index ec5ee58d67..0000000000
--- a/docs/source/pype.lib.deprecated.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.deprecated module
-==========================
-
-.. automodule:: pype.lib.deprecated
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.editorial.rst b/docs/source/pype.lib.editorial.rst
deleted file mode 100644
index d32e495e51..0000000000
--- a/docs/source/pype.lib.editorial.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.editorial module
-=========================
-
-.. automodule:: pype.lib.editorial
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.env_tools.rst b/docs/source/pype.lib.env_tools.rst
deleted file mode 100644
index cb470207c8..0000000000
--- a/docs/source/pype.lib.env_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.env\_tools module
-==========================
-
-.. automodule:: pype.lib.env_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.execute.rst b/docs/source/pype.lib.execute.rst
deleted file mode 100644
index 82c4ef0ad8..0000000000
--- a/docs/source/pype.lib.execute.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.execute module
-=======================
-
-.. automodule:: pype.lib.execute
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.ffmpeg_utils.rst b/docs/source/pype.lib.ffmpeg_utils.rst
deleted file mode 100644
index 968a3f39c8..0000000000
--- a/docs/source/pype.lib.ffmpeg_utils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.ffmpeg\_utils module
-=============================
-
-.. automodule:: pype.lib.ffmpeg_utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.git_progress.rst b/docs/source/pype.lib.git_progress.rst
deleted file mode 100644
index 017cf4c3c7..0000000000
--- a/docs/source/pype.lib.git_progress.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.git\_progress module
-=============================
-
-.. automodule:: pype.lib.git_progress
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.log.rst b/docs/source/pype.lib.log.rst
deleted file mode 100644
index 6282178850..0000000000
--- a/docs/source/pype.lib.log.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.log module
-===================
-
-.. automodule:: pype.lib.log
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.mongo.rst b/docs/source/pype.lib.mongo.rst
deleted file mode 100644
index 34fbc6af7f..0000000000
--- a/docs/source/pype.lib.mongo.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.mongo module
-=====================
-
-.. automodule:: pype.lib.mongo
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.path_tools.rst b/docs/source/pype.lib.path_tools.rst
deleted file mode 100644
index c19c41eea3..0000000000
--- a/docs/source/pype.lib.path_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.path\_tools module
-===========================
-
-.. automodule:: pype.lib.path_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.plugin_tools.rst b/docs/source/pype.lib.plugin_tools.rst
deleted file mode 100644
index 6eadc5d3be..0000000000
--- a/docs/source/pype.lib.plugin_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.plugin\_tools module
-=============================
-
-.. automodule:: pype.lib.plugin_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.profiling.rst b/docs/source/pype.lib.profiling.rst
deleted file mode 100644
index 1fded0c8fd..0000000000
--- a/docs/source/pype.lib.profiling.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.profiling module
-=========================
-
-.. automodule:: pype.lib.profiling
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.python_module_tools.rst b/docs/source/pype.lib.python_module_tools.rst
deleted file mode 100644
index c916080bce..0000000000
--- a/docs/source/pype.lib.python_module_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.python\_module\_tools module
-=====================================
-
-.. automodule:: pype.lib.python_module_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.rst b/docs/source/pype.lib.rst
deleted file mode 100644
index ea880eea3e..0000000000
--- a/docs/source/pype.lib.rst
+++ /dev/null
@@ -1,90 +0,0 @@
-pype.lib package
-================
-
-.. automodule:: pype.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.lib.anatomy module
------------------------
-
-.. automodule:: pype.lib.anatomy
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.config module
-----------------------
-
-.. automodule:: pype.lib.config
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.execute module
------------------------
-
-.. automodule:: pype.lib.execute
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.git\_progress module
------------------------------
-
-.. automodule:: pype.lib.git_progress
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.lib module
--------------------
-
-.. automodule:: pype.lib.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.log module
--------------------
-
-.. automodule:: pype.lib.log
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.mongo module
----------------------
-
-.. automodule:: pype.lib.mongo
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.profiling module
--------------------------
-
-.. automodule:: pype.lib.profiling
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.terminal module
-------------------------
-
-.. automodule:: pype.lib.terminal
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.lib.user\_settings module
-------------------------------
-
-.. automodule:: pype.lib.user_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.terminal.rst b/docs/source/pype.lib.terminal.rst
deleted file mode 100644
index dafe1d8f69..0000000000
--- a/docs/source/pype.lib.terminal.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.terminal module
-========================
-
-.. automodule:: pype.lib.terminal
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.terminal_splash.rst b/docs/source/pype.lib.terminal_splash.rst
deleted file mode 100644
index 06038f0f09..0000000000
--- a/docs/source/pype.lib.terminal_splash.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.terminal\_splash module
-================================
-
-.. automodule:: pype.lib.terminal_splash
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.lib.user_settings.rst b/docs/source/pype.lib.user_settings.rst
deleted file mode 100644
index 7b4e8ced78..0000000000
--- a/docs/source/pype.lib.user_settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.lib.user\_settings module
-==============================
-
-.. automodule:: pype.lib.user_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst b/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst
deleted file mode 100644
index aadbaa0dc5..0000000000
--- a/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.adobe\_communicator.adobe\_comunicator module
-==========================================================
-
-.. automodule:: pype.modules.adobe_communicator.adobe_comunicator
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.adobe_communicator.lib.publish.rst b/docs/source/pype.modules.adobe_communicator.lib.publish.rst
deleted file mode 100644
index a16bf1dd0a..0000000000
--- a/docs/source/pype.modules.adobe_communicator.lib.publish.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.adobe\_communicator.lib.publish module
-===================================================
-
-.. automodule:: pype.modules.adobe_communicator.lib.publish
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst b/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst
deleted file mode 100644
index 457bebef99..0000000000
--- a/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.adobe\_communicator.lib.rest\_api module
-=====================================================
-
-.. automodule:: pype.modules.adobe_communicator.lib.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.adobe_communicator.lib.rst b/docs/source/pype.modules.adobe_communicator.lib.rst
deleted file mode 100644
index cdec4ce80e..0000000000
--- a/docs/source/pype.modules.adobe_communicator.lib.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.adobe\_communicator.lib package
-============================================
-
-.. automodule:: pype.modules.adobe_communicator.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.adobe\_communicator.lib.publish module
----------------------------------------------------
-
-.. automodule:: pype.modules.adobe_communicator.lib.publish
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.adobe\_communicator.lib.rest\_api module
------------------------------------------------------
-
-.. automodule:: pype.modules.adobe_communicator.lib.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.adobe_communicator.rst b/docs/source/pype.modules.adobe_communicator.rst
deleted file mode 100644
index f2fa40ced4..0000000000
--- a/docs/source/pype.modules.adobe_communicator.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.adobe\_communicator package
-========================================
-
-.. automodule:: pype.modules.adobe_communicator
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.adobe_communicator.lib
-
-Submodules
-----------
-
-pype.modules.adobe\_communicator.adobe\_comunicator module
-----------------------------------------------------------
-
-.. automodule:: pype.modules.adobe_communicator.adobe_comunicator
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.avalon_apps.avalon_app.rst b/docs/source/pype.modules.avalon_apps.avalon_app.rst
deleted file mode 100644
index 43f467e748..0000000000
--- a/docs/source/pype.modules.avalon_apps.avalon_app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.avalon\_apps.avalon\_app module
-============================================
-
-.. automodule:: pype.modules.avalon_apps.avalon_app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.avalon_apps.rest_api.rst b/docs/source/pype.modules.avalon_apps.rest_api.rst
deleted file mode 100644
index d89c979311..0000000000
--- a/docs/source/pype.modules.avalon_apps.rest_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.avalon\_apps.rest\_api module
-==========================================
-
-.. automodule:: pype.modules.avalon_apps.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.avalon_apps.rst b/docs/source/pype.modules.avalon_apps.rst
deleted file mode 100644
index 4755eddae6..0000000000
--- a/docs/source/pype.modules.avalon_apps.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.avalon\_apps package
-=================================
-
-.. automodule:: pype.modules.avalon_apps
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.avalon\_apps.avalon\_app module
---------------------------------------------
-
-.. automodule:: pype.modules.avalon_apps.avalon_app
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.avalon\_apps.rest\_api module
-------------------------------------------
-
-.. automodule:: pype.modules.avalon_apps.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.base.rst b/docs/source/pype.modules.base.rst
deleted file mode 100644
index 7cd3cfbd44..0000000000
--- a/docs/source/pype.modules.base.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.base module
-========================
-
-.. automodule:: pype.modules.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.clockify.rst b/docs/source/pype.modules.clockify.clockify.rst
deleted file mode 100644
index a3deaab81d..0000000000
--- a/docs/source/pype.modules.clockify.clockify.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.clockify.clockify module
-=====================================
-
-.. automodule:: pype.modules.clockify.clockify
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.clockify_api.rst b/docs/source/pype.modules.clockify.clockify_api.rst
deleted file mode 100644
index 2facc550c5..0000000000
--- a/docs/source/pype.modules.clockify.clockify_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.clockify.clockify\_api module
-==========================================
-
-.. automodule:: pype.modules.clockify.clockify_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.clockify_module.rst b/docs/source/pype.modules.clockify.clockify_module.rst
deleted file mode 100644
index 85f8e75ad1..0000000000
--- a/docs/source/pype.modules.clockify.clockify_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.clockify.clockify\_module module
-=============================================
-
-.. automodule:: pype.modules.clockify.clockify_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.constants.rst b/docs/source/pype.modules.clockify.constants.rst
deleted file mode 100644
index e30a073bfc..0000000000
--- a/docs/source/pype.modules.clockify.constants.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.clockify.constants module
-======================================
-
-.. automodule:: pype.modules.clockify.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.rst b/docs/source/pype.modules.clockify.rst
deleted file mode 100644
index 550ba049c2..0000000000
--- a/docs/source/pype.modules.clockify.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-pype.modules.clockify package
-=============================
-
-.. automodule:: pype.modules.clockify
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.clockify.clockify module
--------------------------------------
-
-.. automodule:: pype.modules.clockify.clockify
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.clockify.clockify\_api module
-------------------------------------------
-
-.. automodule:: pype.modules.clockify.clockify_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.clockify.constants module
---------------------------------------
-
-.. automodule:: pype.modules.clockify.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.clockify.widgets module
-------------------------------------
-
-.. automodule:: pype.modules.clockify.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.clockify.widgets.rst b/docs/source/pype.modules.clockify.widgets.rst
deleted file mode 100644
index e9809fb048..0000000000
--- a/docs/source/pype.modules.clockify.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.clockify.widgets module
-====================================
-
-.. automodule:: pype.modules.clockify.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.deadline.deadline_module.rst b/docs/source/pype.modules.deadline.deadline_module.rst
deleted file mode 100644
index 43e7198a8b..0000000000
--- a/docs/source/pype.modules.deadline.deadline_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.deadline.deadline\_module module
-=============================================
-
-.. automodule:: pype.modules.deadline.deadline_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.deadline.rst b/docs/source/pype.modules.deadline.rst
deleted file mode 100644
index 7633b2b950..0000000000
--- a/docs/source/pype.modules.deadline.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.modules.deadline package
-=============================
-
-.. automodule:: pype.modules.deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.deadline.deadline_module
diff --git a/docs/source/pype.modules.ftrack.ftrack_module.rst b/docs/source/pype.modules.ftrack.ftrack_module.rst
deleted file mode 100644
index 4188ffbed8..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_module module
-=========================================
-
-.. automodule:: pype.modules.ftrack.ftrack_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst b/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst
deleted file mode 100644
index b42c3e054d..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.custom\_db\_connector module
-===============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.custom_db_connector
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst b/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst
deleted file mode 100644
index d6404f965c..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.event\_server\_cli module
-============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.event_server_cli
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst
deleted file mode 100644
index af2783c263..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.ftrack\_server module
-========================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.ftrack_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.lib.rst b/docs/source/pype.modules.ftrack.ftrack_server.lib.rst
deleted file mode 100644
index 2ac4cef517..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.lib module
-=============================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.rst
deleted file mode 100644
index 417acc1a45..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.rst
+++ /dev/null
@@ -1,90 +0,0 @@
-pype.modules.ftrack.ftrack\_server package
-==========================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.ftrack.ftrack\_server.custom\_db\_connector module
----------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.custom_db_connector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.event\_server\_cli module
-------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.event_server_cli
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.ftrack\_server module
---------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.ftrack_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.lib module
----------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.socket\_thread module
---------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.socket_thread
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.sub\_event\_processor module
----------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_processor
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.sub\_event\_status module
-------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_status
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.sub\_event\_storer module
-------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_storer
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.sub\_legacy\_server module
--------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_legacy_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.ftrack\_server.sub\_user\_server module
------------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_user_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst b/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst
deleted file mode 100644
index d8d24a8288..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.socket\_thread module
-========================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.socket_thread
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst
deleted file mode 100644
index 04f863e347..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_event\_processor module
-===============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_processor
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst
deleted file mode 100644
index 876b7313cf..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_event\_status module
-============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_status
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst
deleted file mode 100644
index 3d2d400d55..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_event\_storer module
-============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_storer
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst
deleted file mode 100644
index d25cdfe8de..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_legacy\_server module
-=============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_legacy_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst
deleted file mode 100644
index c13095d5f1..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_user\_server module
-===========================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_user_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.avalon_sync.rst b/docs/source/pype.modules.ftrack.lib.avalon_sync.rst
deleted file mode 100644
index 954ec4d911..0000000000
--- a/docs/source/pype.modules.ftrack.lib.avalon_sync.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.avalon\_sync module
-===========================================
-
-.. automodule:: pype.modules.ftrack.lib.avalon_sync
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.credentials.rst b/docs/source/pype.modules.ftrack.lib.credentials.rst
deleted file mode 100644
index 3965dc406d..0000000000
--- a/docs/source/pype.modules.ftrack.lib.credentials.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.credentials module
-==========================================
-
-.. automodule:: pype.modules.ftrack.lib.credentials
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst
deleted file mode 100644
index cec38f9b8a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_action\_handler module
-======================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_action_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst
deleted file mode 100644
index 1f7395927d..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_app\_handler module
-===================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_app_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst
deleted file mode 100644
index 94fab7c940..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_base\_handler module
-====================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_base_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst
deleted file mode 100644
index 0b57219b50..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_event\_handler module
-=====================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_event_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.rst b/docs/source/pype.modules.ftrack.lib.rst
deleted file mode 100644
index 32a219ab3a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.modules.ftrack.lib package
-===============================
-
-.. automodule:: pype.modules.ftrack.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.ftrack.lib.avalon\_sync module
--------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.avalon_sync
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.credentials module
-------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.credentials
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_action\_handler module
-------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_action_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_app\_handler module
----------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_app_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_base\_handler module
-----------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_base_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_event\_handler module
------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_event_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.settings.rst b/docs/source/pype.modules.ftrack.lib.settings.rst
deleted file mode 100644
index 255d52178a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.settings module
-=======================================
-
-.. automodule:: pype.modules.ftrack.lib.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.rst b/docs/source/pype.modules.ftrack.rst
deleted file mode 100644
index 13a92db808..0000000000
--- a/docs/source/pype.modules.ftrack.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.modules.ftrack package
-===========================
-
-.. automodule:: pype.modules.ftrack
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.ftrack.ftrack_server
-   pype.modules.ftrack.lib
-   pype.modules.ftrack.tray
diff --git a/docs/source/pype.modules.ftrack.tray.ftrack_module.rst b/docs/source/pype.modules.ftrack.tray.ftrack_module.rst
deleted file mode 100644
index c4a370472c..0000000000
--- a/docs/source/pype.modules.ftrack.tray.ftrack_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.ftrack\_module module
-==============================================
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst b/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst
deleted file mode 100644
index 147647e9b4..0000000000
--- a/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.ftrack\_tray module
-============================================
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.login_dialog.rst b/docs/source/pype.modules.ftrack.tray.login_dialog.rst
deleted file mode 100644
index dabc2e73a7..0000000000
--- a/docs/source/pype.modules.ftrack.tray.login_dialog.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.login\_dialog module
-=============================================
-
-.. automodule:: pype.modules.ftrack.tray.login_dialog
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.login_tools.rst b/docs/source/pype.modules.ftrack.tray.login_tools.rst
deleted file mode 100644
index 00ec690866..0000000000
--- a/docs/source/pype.modules.ftrack.tray.login_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.login\_tools module
-============================================
-
-.. automodule:: pype.modules.ftrack.tray.login_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.rst b/docs/source/pype.modules.ftrack.tray.rst
deleted file mode 100644
index 79772a9c3b..0000000000
--- a/docs/source/pype.modules.ftrack.tray.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.modules.ftrack.tray package
-================================
-
-.. automodule:: pype.modules.ftrack.tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.ftrack.tray.ftrack\_module module
-----------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.tray.login\_dialog module
----------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.login_dialog
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.tray.login\_tools module
---------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.login_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.idle_manager.idle_manager.rst b/docs/source/pype.modules.idle_manager.idle_manager.rst
deleted file mode 100644
index 8e93f97e6b..0000000000
--- a/docs/source/pype.modules.idle_manager.idle_manager.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.idle\_manager.idle\_manager module
-===============================================
-
-.. automodule:: pype.modules.idle_manager.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.idle_manager.rst b/docs/source/pype.modules.idle_manager.rst
deleted file mode 100644
index a3f7922999..0000000000
--- a/docs/source/pype.modules.idle_manager.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.modules.idle\_manager package
-==================================
-
-.. automodule:: pype.modules.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.idle\_manager.idle\_manager module
------------------------------------------------
-
-.. automodule:: pype.modules.idle_manager.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.launcher_action.rst b/docs/source/pype.modules.launcher_action.rst
deleted file mode 100644
index a63408e747..0000000000
--- a/docs/source/pype.modules.launcher_action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.launcher\_action module
-====================================
-
-.. automodule:: pype.modules.launcher_action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.log_view_module.rst b/docs/source/pype.modules.log_viewer.log_view_module.rst
deleted file mode 100644
index 8d80170a9c..0000000000
--- a/docs/source/pype.modules.log_viewer.log_view_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.log\_view\_module module
-=================================================
-
-.. automodule:: pype.modules.log_viewer.log_view_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.rst b/docs/source/pype.modules.log_viewer.rst
deleted file mode 100644
index e275d56086..0000000000
--- a/docs/source/pype.modules.log_viewer.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-pype.modules.log\_viewer package
-================================
-
-.. automodule:: pype.modules.log_viewer
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.tray
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.log_view_module
diff --git a/docs/source/pype.modules.log_viewer.tray.app.rst b/docs/source/pype.modules.log_viewer.tray.app.rst
deleted file mode 100644
index 0948a05594..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.app module
-========================================
-
-.. automodule:: pype.modules.log_viewer.tray.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.tray.models.rst b/docs/source/pype.modules.log_viewer.tray.models.rst
deleted file mode 100644
index 4da3887600..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.models.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.models module
-===========================================
-
-.. automodule:: pype.modules.log_viewer.tray.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.tray.rst b/docs/source/pype.modules.log_viewer.tray.rst
deleted file mode 100644
index 5f4b92f627..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.modules.log\_viewer.tray package
-=====================================
-
-.. automodule:: pype.modules.log_viewer.tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.tray.app
-   pype.modules.log_viewer.tray.models
-   pype.modules.log_viewer.tray.widgets
diff --git a/docs/source/pype.modules.log_viewer.tray.widgets.rst b/docs/source/pype.modules.log_viewer.tray.widgets.rst
deleted file mode 100644
index cb57c96559..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.widgets module
-============================================
-
-.. automodule:: pype.modules.log_viewer.tray.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.muster.rst b/docs/source/pype.modules.muster.muster.rst
deleted file mode 100644
index d3ba1e7052..0000000000
--- a/docs/source/pype.modules.muster.muster.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.muster.muster module
-=================================
-
-.. automodule:: pype.modules.muster.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.rst b/docs/source/pype.modules.muster.rst
deleted file mode 100644
index d8d0f762f4..0000000000
--- a/docs/source/pype.modules.muster.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.muster package
-===========================
-
-.. automodule:: pype.modules.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.muster.muster module
----------------------------------
-
-.. automodule:: pype.modules.muster.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.muster.widget\_login module
-----------------------------------------
-
-.. automodule:: pype.modules.muster.widget_login
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.widget_login.rst b/docs/source/pype.modules.muster.widget_login.rst
deleted file mode 100644
index 1c59cec820..0000000000
--- a/docs/source/pype.modules.muster.widget_login.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.muster.widget\_login module
-========================================
-
-.. automodule:: pype.modules.muster.widget_login
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.base_class.rst b/docs/source/pype.modules.rest_api.base_class.rst
deleted file mode 100644
index c2a1030a78..0000000000
--- a/docs/source/pype.modules.rest_api.base_class.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.base\_class module
-=========================================
-
-.. automodule:: pype.modules.rest_api.base_class
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.exceptions.rst b/docs/source/pype.modules.rest_api.lib.exceptions.rst
deleted file mode 100644
index d755420ad0..0000000000
--- a/docs/source/pype.modules.rest_api.lib.exceptions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.exceptions module
-============================================
-
-.. automodule:: pype.modules.rest_api.lib.exceptions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.factory.rst b/docs/source/pype.modules.rest_api.lib.factory.rst
deleted file mode 100644
index 2131d1b8da..0000000000
--- a/docs/source/pype.modules.rest_api.lib.factory.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.factory module
-=========================================
-
-.. automodule:: pype.modules.rest_api.lib.factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.handler.rst b/docs/source/pype.modules.rest_api.lib.handler.rst
deleted file mode 100644
index 6e340daf9b..0000000000
--- a/docs/source/pype.modules.rest_api.lib.handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.handler module
-=========================================
-
-.. automodule:: pype.modules.rest_api.lib.handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.lib.rst b/docs/source/pype.modules.rest_api.lib.lib.rst
deleted file mode 100644
index 19663788e0..0000000000
--- a/docs/source/pype.modules.rest_api.lib.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.lib module
-=====================================
-
-.. automodule:: pype.modules.rest_api.lib.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.rst b/docs/source/pype.modules.rest_api.lib.rst
deleted file mode 100644
index ed8288ee73..0000000000
--- a/docs/source/pype.modules.rest_api.lib.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-pype.modules.rest\_api.lib package
-==================================
-
-.. automodule:: pype.modules.rest_api.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.rest\_api.lib.exceptions module
---------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.exceptions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.factory module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.handler module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.lib module
--------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.rest_api.rst b/docs/source/pype.modules.rest_api.rest_api.rst
deleted file mode 100644
index e3d951ac9f..0000000000
--- a/docs/source/pype.modules.rest_api.rest_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.rest\_api module
-=======================================
-
-.. automodule:: pype.modules.rest_api.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.rst b/docs/source/pype.modules.rest_api.rst
deleted file mode 100644
index 09c58c84f8..0000000000
--- a/docs/source/pype.modules.rest_api.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.modules.rest\_api package
-==============================
-
-.. automodule:: pype.modules.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.rest_api.lib
-
-Submodules
-----------
-
-pype.modules.rest\_api.base\_class module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.base_class
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.rest\_api module
----------------------------------------
-
-.. automodule:: pype.modules.rest_api.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rst b/docs/source/pype.modules.rst
deleted file mode 100644
index 148c2084b4..0000000000
--- a/docs/source/pype.modules.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-pype.modules package
-====================
-
-.. automodule:: pype.modules
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.adobe_communicator
-   pype.modules.avalon_apps
-   pype.modules.clockify
-   pype.modules.ftrack
-   pype.modules.idle_manager
-   pype.modules.muster
-   pype.modules.rest_api
-   pype.modules.standalonepublish
-   pype.modules.timers_manager
-   pype.modules.user
-   pype.modules.websocket_server
-
-Submodules
-----------
-
-pype.modules.base module
-------------------------
-
-.. automodule:: pype.modules.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.settings_action.rst b/docs/source/pype.modules.settings_action.rst
deleted file mode 100644
index 10f0881ced..0000000000
--- a/docs/source/pype.modules.settings_action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.settings\_action module
-====================================
-
-.. automodule:: pype.modules.settings_action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.standalonepublish.rst b/docs/source/pype.modules.standalonepublish.rst
deleted file mode 100644
index 2ed366af5c..0000000000
--- a/docs/source/pype.modules.standalonepublish.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.modules.standalonepublish package
-======================================
-
-.. automodule:: pype.modules.standalonepublish
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.standalonepublish.standalonepublish\_module module
----------------------------------------------------------------
-
-.. automodule:: pype.modules.standalonepublish.standalonepublish_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst b/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst
deleted file mode 100644
index a78826a4b4..0000000000
--- a/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.standalonepublish.standalonepublish\_module module
-===============================================================
-
-.. 
automodule:: pype.modules.standalonepublish.standalonepublish_module - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.standalonepublish_action.rst b/docs/source/pype.modules.standalonepublish_action.rst deleted file mode 100644 index d51dbcefa0..0000000000 --- a/docs/source/pype.modules.standalonepublish_action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.standalonepublish\_action module -============================================= - -.. automodule:: pype.modules.standalonepublish_action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.sync_server.rst b/docs/source/pype.modules.sync_server.rst deleted file mode 100644 index a26dc7e212..0000000000 --- a/docs/source/pype.modules.sync_server.rst +++ /dev/null @@ -1,16 +0,0 @@ -pype.modules.sync\_server package -================================= - -.. automodule:: pype.modules.sync_server - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.modules.sync_server.sync_server - pype.modules.sync_server.utils diff --git a/docs/source/pype.modules.sync_server.sync_server.rst b/docs/source/pype.modules.sync_server.sync_server.rst deleted file mode 100644 index 36d6aa68ed..0000000000 --- a/docs/source/pype.modules.sync_server.sync_server.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.sync\_server.sync\_server module -============================================= - -.. automodule:: pype.modules.sync_server.sync_server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.sync_server.utils.rst b/docs/source/pype.modules.sync_server.utils.rst deleted file mode 100644 index 325d5e435d..0000000000 --- a/docs/source/pype.modules.sync_server.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.sync\_server.utils module -====================================== - -.. automodule:: pype.modules.sync_server.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.timers_manager.rst b/docs/source/pype.modules.timers_manager.rst deleted file mode 100644 index 6c971e9dc1..0000000000 --- a/docs/source/pype.modules.timers_manager.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.timers\_manager package -==================================== - -.. automodule:: pype.modules.timers_manager - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.timers\_manager.timers\_manager module ---------------------------------------------------- - -.. automodule:: pype.modules.timers_manager.timers_manager - :members: - :undoc-members: - :show-inheritance: - -pype.modules.timers\_manager.widget\_user\_idle module ------------------------------------------------------- - -.. automodule:: pype.modules.timers_manager.widget_user_idle - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.timers_manager.timers_manager.rst b/docs/source/pype.modules.timers_manager.timers_manager.rst deleted file mode 100644 index fe18e4d15c..0000000000 --- a/docs/source/pype.modules.timers_manager.timers_manager.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.timers\_manager.timers\_manager module -=================================================== - -.. 
automodule:: pype.modules.timers_manager.timers_manager - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.timers_manager.widget_user_idle.rst b/docs/source/pype.modules.timers_manager.widget_user_idle.rst deleted file mode 100644 index b072879c7a..0000000000 --- a/docs/source/pype.modules.timers_manager.widget_user_idle.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.timers\_manager.widget\_user\_idle module -====================================================== - -.. automodule:: pype.modules.timers_manager.widget_user_idle - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.user.rst b/docs/source/pype.modules.user.rst deleted file mode 100644 index d181b263e5..0000000000 --- a/docs/source/pype.modules.user.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.user package -========================= - -.. automodule:: pype.modules.user - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.user.user\_module module -------------------------------------- - -.. automodule:: pype.modules.user.user_module - :members: - :undoc-members: - :show-inheritance: - -pype.modules.user.widget\_user module -------------------------------------- - -.. automodule:: pype.modules.user.widget_user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.user.user_module.rst b/docs/source/pype.modules.user.user_module.rst deleted file mode 100644 index a8e0cd6bad..0000000000 --- a/docs/source/pype.modules.user.user_module.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.user.user\_module module -===================================== - -.. automodule:: pype.modules.user.user_module - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.user.widget_user.rst b/docs/source/pype.modules.user.widget_user.rst deleted file mode 100644 index 2979e5ead4..0000000000 --- a/docs/source/pype.modules.user.widget_user.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.user.widget\_user module -===================================== - -.. automodule:: pype.modules.user.widget_user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst b/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst deleted file mode 100644 index 9f4720ae14..0000000000 --- a/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.websocket\_server.hosts.aftereffects module -======================================================== - -.. automodule:: pype.modules.websocket_server.hosts.aftereffects - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst b/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst deleted file mode 100644 index 4ac69d9015..0000000000 --- a/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.websocket\_server.hosts.external\_app\_1 module -============================================================ - -.. 
automodule:: pype.modules.websocket_server.hosts.external_app_1 - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.hosts.photoshop.rst b/docs/source/pype.modules.websocket_server.hosts.photoshop.rst deleted file mode 100644 index cbda61275a..0000000000 --- a/docs/source/pype.modules.websocket_server.hosts.photoshop.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.websocket\_server.hosts.photoshop module -===================================================== - -.. automodule:: pype.modules.websocket_server.hosts.photoshop - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.hosts.rst b/docs/source/pype.modules.websocket_server.hosts.rst deleted file mode 100644 index d5ce7c3f8e..0000000000 --- a/docs/source/pype.modules.websocket_server.hosts.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.websocket\_server.hosts package -============================================ - -.. automodule:: pype.modules.websocket_server.hosts - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.websocket\_server.hosts.external\_app\_1 module ------------------------------------------------------------- - -.. automodule:: pype.modules.websocket_server.hosts.external_app_1 - :members: - :undoc-members: - :show-inheritance: - -pype.modules.websocket\_server.hosts.photoshop module ------------------------------------------------------ - -.. automodule:: pype.modules.websocket_server.hosts.photoshop - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.rst b/docs/source/pype.modules.websocket_server.rst deleted file mode 100644 index a83d371df1..0000000000 --- a/docs/source/pype.modules.websocket_server.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.websocket\_server package -====================================== - -.. automodule:: pype.modules.websocket_server - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.modules.websocket_server.hosts - -Submodules ----------- - -pype.modules.websocket\_server.websocket\_server module -------------------------------------------------------- - -.. automodule:: pype.modules.websocket_server.websocket_server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.websocket_server.websocket_server.rst b/docs/source/pype.modules.websocket_server.websocket_server.rst deleted file mode 100644 index 354c9e6cf9..0000000000 --- a/docs/source/pype.modules.websocket_server.websocket_server.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.websocket\_server.websocket\_server module -======================================================= - -.. automodule:: pype.modules.websocket_server.websocket_server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules_manager.rst b/docs/source/pype.modules_manager.rst deleted file mode 100644 index a5f2327d65..0000000000 --- a/docs/source/pype.modules_manager.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules\_manager module -============================ - -.. automodule:: pype.modules_manager - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugin.rst b/docs/source/pype.plugin.rst deleted file mode 100644 index c20bb77b2b..0000000000 --- a/docs/source/pype.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugin module -================== - -.. 
automodule:: pype.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_animation.rst b/docs/source/pype.plugins.maya.publish.collect_animation.rst deleted file mode 100644 index 497c497057..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_animation.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_animation module -=================================================== - -.. automodule:: pype.plugins.maya.publish.collect_animation - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_ass.rst b/docs/source/pype.plugins.maya.publish.collect_ass.rst deleted file mode 100644 index a44e61ce98..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_ass.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_ass module -============================================= - -.. automodule:: pype.plugins.maya.publish.collect_ass - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_assembly.rst b/docs/source/pype.plugins.maya.publish.collect_assembly.rst deleted file mode 100644 index 5baa91818b..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_assembly.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_assembly module -================================================== - -.. automodule:: pype.plugins.maya.publish.collect_assembly - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst b/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst deleted file mode 100644 index efe857140e..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_file\_dependencies module -============================================================ - -.. automodule:: pype.plugins.maya.publish.collect_file_dependencies - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst b/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst deleted file mode 100644 index 872bbc69a4..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_ftrack\_family module -======================================================== - -.. automodule:: pype.plugins.maya.publish.collect_ftrack_family - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_history.rst b/docs/source/pype.plugins.maya.publish.collect_history.rst deleted file mode 100644 index 5a98778c24..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_history.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_history module -================================================= - -.. automodule:: pype.plugins.maya.publish.collect_history - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_instances.rst b/docs/source/pype.plugins.maya.publish.collect_instances.rst deleted file mode 100644 index 33c8b97597..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_instances.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_instances module -=================================================== - -.. 
automodule:: pype.plugins.maya.publish.collect_instances - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_look.rst b/docs/source/pype.plugins.maya.publish.collect_look.rst deleted file mode 100644 index 234fcf20d1..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_look.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_look module -============================================== - -.. automodule:: pype.plugins.maya.publish.collect_look - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_maya_units.rst b/docs/source/pype.plugins.maya.publish.collect_maya_units.rst deleted file mode 100644 index 0cb01b0fa7..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_maya_units.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_maya\_units module -===================================================== - -.. automodule:: pype.plugins.maya.publish.collect_maya_units - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst b/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst deleted file mode 100644 index 7447052004..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_maya\_workspace module -========================================================= - -.. automodule:: pype.plugins.maya.publish.collect_maya_workspace - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst b/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst deleted file mode 100644 index 14fe826229..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_mayaascii module -=================================================== - -.. automodule:: pype.plugins.maya.publish.collect_mayaascii - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_model.rst b/docs/source/pype.plugins.maya.publish.collect_model.rst deleted file mode 100644 index b30bf3fb22..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_model.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_model module -=============================================== - -.. automodule:: pype.plugins.maya.publish.collect_model - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst b/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst deleted file mode 100644 index a0bf9498d7..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_remove\_marked module -======================================================== - -.. automodule:: pype.plugins.maya.publish.collect_remove_marked - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_render.rst b/docs/source/pype.plugins.maya.publish.collect_render.rst deleted file mode 100644 index 6de8827119..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_render.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_render module -================================================ - -.. 
automodule:: pype.plugins.maya.publish.collect_render - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst b/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst deleted file mode 100644 index ab511fc5dd..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_render\_layer\_aovs module -============================================================= - -.. automodule:: pype.plugins.maya.publish.collect_render_layer_aovs - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst b/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst deleted file mode 100644 index c98e8000a1..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_renderable\_camera module -============================================================ - -.. automodule:: pype.plugins.maya.publish.collect_renderable_camera - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_review.rst b/docs/source/pype.plugins.maya.publish.collect_review.rst deleted file mode 100644 index d73127aa85..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_review.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_review module -================================================ - -.. automodule:: pype.plugins.maya.publish.collect_review - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_rig.rst b/docs/source/pype.plugins.maya.publish.collect_rig.rst deleted file mode 100644 index e7c0528482..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_rig.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_rig module -============================================= - -.. automodule:: pype.plugins.maya.publish.collect_rig - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_scene.rst b/docs/source/pype.plugins.maya.publish.collect_scene.rst deleted file mode 100644 index c5c2fef222..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_scene.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_scene module -=============================================== - -.. automodule:: pype.plugins.maya.publish.collect_scene - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst b/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst deleted file mode 100644 index 673f0865fd..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_unreal\_staticmesh module -============================================================ - -.. 
automodule:: pype.plugins.maya.publish.collect_unreal_staticmesh - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst b/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst deleted file mode 100644 index ed4386a7ba..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_workscene\_fps module -======================================================== - -.. automodule:: pype.plugins.maya.publish.collect_workscene_fps - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst b/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst deleted file mode 100644 index 32ab50baca..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_yeti\_cache module -===================================================== - -.. automodule:: pype.plugins.maya.publish.collect_yeti_cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst b/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst deleted file mode 100644 index 8cf968b7c5..0000000000 --- a/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.collect\_yeti\_rig module -=================================================== - -.. automodule:: pype.plugins.maya.publish.collect_yeti_rig - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.determine_future_version.rst b/docs/source/pype.plugins.maya.publish.determine_future_version.rst deleted file mode 100644 index 55c6155680..0000000000 --- a/docs/source/pype.plugins.maya.publish.determine_future_version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.determine\_future\_version module -=========================================================== - -.. automodule:: pype.plugins.maya.publish.determine_future_version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_animation.rst b/docs/source/pype.plugins.maya.publish.extract_animation.rst deleted file mode 100644 index 3649723042..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_animation.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_animation module -=================================================== - -.. automodule:: pype.plugins.maya.publish.extract_animation - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_ass.rst b/docs/source/pype.plugins.maya.publish.extract_ass.rst deleted file mode 100644 index be8123e5d7..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_ass.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_ass module -============================================= - -.. 
automodule:: pype.plugins.maya.publish.extract_ass - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_assembly.rst b/docs/source/pype.plugins.maya.publish.extract_assembly.rst deleted file mode 100644 index b36e8f6d30..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_assembly.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_assembly module -================================================== - -.. automodule:: pype.plugins.maya.publish.extract_assembly - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_assproxy.rst b/docs/source/pype.plugins.maya.publish.extract_assproxy.rst deleted file mode 100644 index fc97a2ee46..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_assproxy.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_assproxy module -================================================== - -.. automodule:: pype.plugins.maya.publish.extract_assproxy - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst b/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst deleted file mode 100644 index a9df3da011..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_camera\_alembic module -========================================================= - -.. automodule:: pype.plugins.maya.publish.extract_camera_alembic - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst b/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst deleted file mode 100644 index db1799f52f..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_camera\_mayaScene module -=========================================================== - -.. automodule:: pype.plugins.maya.publish.extract_camera_mayaScene - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_fbx.rst b/docs/source/pype.plugins.maya.publish.extract_fbx.rst deleted file mode 100644 index fffd5a6394..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_fbx.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_fbx module -============================================= - -.. automodule:: pype.plugins.maya.publish.extract_fbx - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_look.rst b/docs/source/pype.plugins.maya.publish.extract_look.rst deleted file mode 100644 index f2708678ce..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_look.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_look module -============================================== - -.. 
automodule:: pype.plugins.maya.publish.extract_look - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst b/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst deleted file mode 100644 index 1e080dd0eb..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_maya\_scene\_raw module -========================================================== - -.. automodule:: pype.plugins.maya.publish.extract_maya_scene_raw - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_model.rst b/docs/source/pype.plugins.maya.publish.extract_model.rst deleted file mode 100644 index c78b49c777..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_model.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_model module -=============================================== - -.. automodule:: pype.plugins.maya.publish.extract_model - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_playblast.rst b/docs/source/pype.plugins.maya.publish.extract_playblast.rst deleted file mode 100644 index 1aa284b370..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_playblast.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_playblast module -=================================================== - -.. automodule:: pype.plugins.maya.publish.extract_playblast - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_pointcache.rst b/docs/source/pype.plugins.maya.publish.extract_pointcache.rst deleted file mode 100644 index 97ebde4933..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_pointcache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_pointcache module -==================================================== - -.. automodule:: pype.plugins.maya.publish.extract_pointcache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst b/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst deleted file mode 100644 index 86cb178f42..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_rendersetup module -===================================================== - -.. automodule:: pype.plugins.maya.publish.extract_rendersetup - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_rig.rst b/docs/source/pype.plugins.maya.publish.extract_rig.rst deleted file mode 100644 index f6419c9473..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_rig.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_rig module -============================================= - -.. automodule:: pype.plugins.maya.publish.extract_rig - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst b/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst deleted file mode 100644 index 2d03e11d55..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_thumbnail module -=================================================== - -.. 
automodule:: pype.plugins.maya.publish.extract_thumbnail - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst b/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst deleted file mode 100644 index 5439ff59ca..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_vrayproxy module -=================================================== - -.. automodule:: pype.plugins.maya.publish.extract_vrayproxy - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst b/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst deleted file mode 100644 index 7ad84dfc70..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_yeti\_cache module -===================================================== - -.. automodule:: pype.plugins.maya.publish.extract_yeti_cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst b/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst deleted file mode 100644 index 76d483d91b..0000000000 --- a/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.extract\_yeti\_rig module -=================================================== - -.. automodule:: pype.plugins.maya.publish.extract_yeti_rig - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst b/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst deleted file mode 100644 index 97126a6c77..0000000000 --- a/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.increment\_current\_file\_deadline module -=================================================================== - -.. automodule:: pype.plugins.maya.publish.increment_current_file_deadline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.rst b/docs/source/pype.plugins.maya.publish.rst deleted file mode 100644 index dba0a9118c..0000000000 --- a/docs/source/pype.plugins.maya.publish.rst +++ /dev/null @@ -1,146 +0,0 @@ -pype.plugins.maya.publish package -================================= - -.. automodule:: pype.plugins.maya.publish - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. 
toctree:: - :maxdepth: 10 - - pype.plugins.maya.publish.collect_animation - pype.plugins.maya.publish.collect_ass - pype.plugins.maya.publish.collect_assembly - pype.plugins.maya.publish.collect_file_dependencies - pype.plugins.maya.publish.collect_ftrack_family - pype.plugins.maya.publish.collect_history - pype.plugins.maya.publish.collect_instances - pype.plugins.maya.publish.collect_look - pype.plugins.maya.publish.collect_maya_units - pype.plugins.maya.publish.collect_maya_workspace - pype.plugins.maya.publish.collect_mayaascii - pype.plugins.maya.publish.collect_model - pype.plugins.maya.publish.collect_remove_marked - pype.plugins.maya.publish.collect_render - pype.plugins.maya.publish.collect_render_layer_aovs - pype.plugins.maya.publish.collect_renderable_camera - pype.plugins.maya.publish.collect_review - pype.plugins.maya.publish.collect_rig - pype.plugins.maya.publish.collect_scene - pype.plugins.maya.publish.collect_unreal_staticmesh - pype.plugins.maya.publish.collect_workscene_fps - pype.plugins.maya.publish.collect_yeti_cache - pype.plugins.maya.publish.collect_yeti_rig - pype.plugins.maya.publish.determine_future_version - pype.plugins.maya.publish.extract_animation - pype.plugins.maya.publish.extract_ass - pype.plugins.maya.publish.extract_assembly - pype.plugins.maya.publish.extract_assproxy - pype.plugins.maya.publish.extract_camera_alembic - pype.plugins.maya.publish.extract_camera_mayaScene - pype.plugins.maya.publish.extract_fbx - pype.plugins.maya.publish.extract_look - pype.plugins.maya.publish.extract_maya_scene_raw - pype.plugins.maya.publish.extract_model - pype.plugins.maya.publish.extract_playblast - pype.plugins.maya.publish.extract_pointcache - pype.plugins.maya.publish.extract_rendersetup - pype.plugins.maya.publish.extract_rig - pype.plugins.maya.publish.extract_thumbnail - pype.plugins.maya.publish.extract_vrayproxy - pype.plugins.maya.publish.extract_yeti_cache - pype.plugins.maya.publish.extract_yeti_rig - pype.plugins.maya.publish.increment_current_file_deadline - pype.plugins.maya.publish.save_scene - pype.plugins.maya.publish.submit_maya_deadline - pype.plugins.maya.publish.submit_maya_muster - pype.plugins.maya.publish.validate_animation_content - pype.plugins.maya.publish.validate_animation_out_set_related_node_ids - pype.plugins.maya.publish.validate_ass_relative_paths - pype.plugins.maya.publish.validate_assembly_name - pype.plugins.maya.publish.validate_assembly_namespaces - pype.plugins.maya.publish.validate_assembly_transforms - pype.plugins.maya.publish.validate_attributes - pype.plugins.maya.publish.validate_camera_attributes - pype.plugins.maya.publish.validate_camera_contents - pype.plugins.maya.publish.validate_color_sets - pype.plugins.maya.publish.validate_current_renderlayer_renderable - pype.plugins.maya.publish.validate_deadline_connection - pype.plugins.maya.publish.validate_frame_range - pype.plugins.maya.publish.validate_instance_has_members - pype.plugins.maya.publish.validate_instance_subset - pype.plugins.maya.publish.validate_instancer_content - pype.plugins.maya.publish.validate_instancer_frame_ranges - pype.plugins.maya.publish.validate_joints_hidden - pype.plugins.maya.publish.validate_look_contents - pype.plugins.maya.publish.validate_look_default_shaders_connections - pype.plugins.maya.publish.validate_look_id_reference_edits - pype.plugins.maya.publish.validate_look_members_unique - pype.plugins.maya.publish.validate_look_no_default_shaders - pype.plugins.maya.publish.validate_look_sets - 
pype.plugins.maya.publish.validate_look_shading_group - pype.plugins.maya.publish.validate_look_single_shader - pype.plugins.maya.publish.validate_maya_units - pype.plugins.maya.publish.validate_mesh_arnold_attributes - pype.plugins.maya.publish.validate_mesh_has_uv - pype.plugins.maya.publish.validate_mesh_lamina_faces - pype.plugins.maya.publish.validate_mesh_no_negative_scale - pype.plugins.maya.publish.validate_mesh_non_manifold - pype.plugins.maya.publish.validate_mesh_non_zero_edge - pype.plugins.maya.publish.validate_mesh_normals_unlocked - pype.plugins.maya.publish.validate_mesh_overlapping_uvs - pype.plugins.maya.publish.validate_mesh_shader_connections - pype.plugins.maya.publish.validate_mesh_single_uv_set - pype.plugins.maya.publish.validate_mesh_uv_set_map1 - pype.plugins.maya.publish.validate_mesh_vertices_have_edges - pype.plugins.maya.publish.validate_model_content - pype.plugins.maya.publish.validate_model_name - pype.plugins.maya.publish.validate_muster_connection - pype.plugins.maya.publish.validate_no_animation - pype.plugins.maya.publish.validate_no_default_camera - pype.plugins.maya.publish.validate_no_namespace - pype.plugins.maya.publish.validate_no_null_transforms - pype.plugins.maya.publish.validate_no_unknown_nodes - pype.plugins.maya.publish.validate_no_vraymesh - pype.plugins.maya.publish.validate_node_ids - pype.plugins.maya.publish.validate_node_ids_deformed_shapes - pype.plugins.maya.publish.validate_node_ids_in_database - pype.plugins.maya.publish.validate_node_ids_related - pype.plugins.maya.publish.validate_node_ids_unique - pype.plugins.maya.publish.validate_node_no_ghosting - pype.plugins.maya.publish.validate_render_image_rule - pype.plugins.maya.publish.validate_render_no_default_cameras - pype.plugins.maya.publish.validate_render_single_camera - pype.plugins.maya.publish.validate_renderlayer_aovs - pype.plugins.maya.publish.validate_rendersettings - pype.plugins.maya.publish.validate_resources - pype.plugins.maya.publish.validate_rig_contents - pype.plugins.maya.publish.validate_rig_controllers - pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes - pype.plugins.maya.publish.validate_rig_out_set_node_ids - pype.plugins.maya.publish.validate_rig_output_ids - pype.plugins.maya.publish.validate_scene_set_workspace - pype.plugins.maya.publish.validate_shader_name - pype.plugins.maya.publish.validate_shape_default_names - pype.plugins.maya.publish.validate_shape_render_stats - pype.plugins.maya.publish.validate_single_assembly - pype.plugins.maya.publish.validate_skinCluster_deformer_set - pype.plugins.maya.publish.validate_step_size - pype.plugins.maya.publish.validate_transform_naming_suffix - pype.plugins.maya.publish.validate_transform_zero - pype.plugins.maya.publish.validate_unicode_strings - pype.plugins.maya.publish.validate_unreal_mesh_triangulated - pype.plugins.maya.publish.validate_unreal_staticmesh_naming - pype.plugins.maya.publish.validate_unreal_up_axis - pype.plugins.maya.publish.validate_vray_distributed_rendering - pype.plugins.maya.publish.validate_vray_translator_settings - pype.plugins.maya.publish.validate_vrayproxy - pype.plugins.maya.publish.validate_vrayproxy_members - pype.plugins.maya.publish.validate_yeti_renderscript_callbacks - pype.plugins.maya.publish.validate_yeti_rig_cache_state - pype.plugins.maya.publish.validate_yeti_rig_input_in_instance - pype.plugins.maya.publish.validate_yeti_rig_settings diff --git a/docs/source/pype.plugins.maya.publish.save_scene.rst 
b/docs/source/pype.plugins.maya.publish.save_scene.rst deleted file mode 100644 index 2537bca03d..0000000000 --- a/docs/source/pype.plugins.maya.publish.save_scene.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.save\_scene module -============================================ - -.. automodule:: pype.plugins.maya.publish.save_scene - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst b/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst deleted file mode 100644 index 0e521cec4e..0000000000 --- a/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.submit\_maya\_deadline module -======================================================= - -.. automodule:: pype.plugins.maya.publish.submit_maya_deadline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst b/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst deleted file mode 100644 index 4ae263e157..0000000000 --- a/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.submit\_maya\_muster module -===================================================== - -.. automodule:: pype.plugins.maya.publish.submit_maya_muster - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_animation_content.rst b/docs/source/pype.plugins.maya.publish.validate_animation_content.rst deleted file mode 100644 index 65191bb957..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_animation_content.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_animation\_content module -============================================================= - -.. automodule:: pype.plugins.maya.publish.validate_animation_content - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst deleted file mode 100644 index ea289e84ed..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_animation\_out\_set\_related\_node\_ids module -================================================================================== - -.. automodule:: pype.plugins.maya.publish.validate_animation_out_set_related_node_ids - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst b/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst deleted file mode 100644 index f35ef916cc..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_ass\_relative\_paths module -=============================================================== - -.. 
automodule:: pype.plugins.maya.publish.validate_ass_relative_paths - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst deleted file mode 100644 index c8178226b2..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_assembly\_name module -========================================================= - -.. automodule:: pype.plugins.maya.publish.validate_assembly_name - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst deleted file mode 100644 index 847b90281e..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_assembly\_namespaces module -=============================================================== - -.. automodule:: pype.plugins.maya.publish.validate_assembly_namespaces - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst deleted file mode 100644 index b4348a2908..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_assembly\_transforms module -=============================================================== - -.. automodule:: pype.plugins.maya.publish.validate_assembly_transforms - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_attributes.rst deleted file mode 100644 index 862820a7c0..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_attributes.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_attributes module -===================================================== - -.. automodule:: pype.plugins.maya.publish.validate_attributes - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst deleted file mode 100644 index 054198f812..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_camera\_attributes module -============================================================= - -.. automodule:: pype.plugins.maya.publish.validate_camera_attributes - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst b/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst deleted file mode 100644 index 9cf6604f7a..0000000000 --- a/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.plugins.maya.publish.validate\_camera\_contents module -=========================================================== - -.. 
automodule:: pype.plugins.maya.publish.validate_camera_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_color_sets.rst b/docs/source/pype.plugins.maya.publish.validate_color_sets.rst
deleted file mode 100644
index 59bb5607bf..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_color_sets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_color\_sets module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_color_sets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst b/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst
deleted file mode 100644
index 31c52477aa..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_current\_renderlayer\_renderable module
-===========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_current_renderlayer_renderable
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst b/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst
deleted file mode 100644
index 3f8c4b6313..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_deadline\_connection module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_deadline_connection
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_frame_range.rst b/docs/source/pype.plugins.maya.publish.validate_frame_range.rst
deleted file mode 100644
index 0ccc8ed1cd..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_frame_range.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_frame\_range module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_frame_range
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst b/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst
deleted file mode 100644
index 862d32f114..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instance\_has\_members module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instance_has_members
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst b/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst
deleted file mode 100644
index f71febb73c..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instance\_subset module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instance_subset
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst b/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst
deleted file mode 100644
index 761889dd4d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instancer\_content module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instancer_content
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst b/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst
deleted file mode 100644
index 85338c3e2d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instancer\_frame\_ranges module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instancer_frame_ranges
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst b/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst
deleted file mode 100644
index ede5af0c67..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_joints\_hidden module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_joints_hidden
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_contents.rst b/docs/source/pype.plugins.maya.publish.validate_look_contents.rst
deleted file mode 100644
index 946f924fb3..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_contents.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_contents module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst b/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst
deleted file mode 100644
index e293cfc0f1..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_default\_shaders\_connections module
-==============================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_default_shaders_connections
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst b/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst
deleted file mode 100644
index 007f4e2d03..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_id\_reference\_edits module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_id_reference_edits
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst b/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst
deleted file mode 100644
index 3378e8a0f6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_members\_unique module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_members_unique
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst b/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst
deleted file mode 100644
index 662e2c7621..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_no\_default\_shaders module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_no_default_shaders
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_sets.rst b/docs/source/pype.plugins.maya.publish.validate_look_sets.rst
deleted file mode 100644
index 5427331568..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_sets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_sets module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_sets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst b/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst
deleted file mode 100644
index 259f4952b7..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_shading\_group module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_shading_group
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst b/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst
deleted file mode 100644
index fa43283416..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_single\_shader module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_single_shader
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_maya_units.rst b/docs/source/pype.plugins.maya.publish.validate_maya_units.rst
deleted file mode 100644
index 16af19f6d9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_maya_units.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_maya\_units module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_maya_units
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst
deleted file mode 100644
index ef18ad1457..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_arnold\_attributes module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_arnold_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst
deleted file mode 100644
index c6af7063c3..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_has\_uv module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_has_uv
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst
deleted file mode 100644
index 006488e77f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_lamina\_faces module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_lamina_faces
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst
deleted file mode 100644
index 8720f3d018..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_no\_negative\_scale module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_no_negative_scale
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst
deleted file mode 100644
index a69a4c6fc4..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_non\_manifold module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_non_manifold
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst
deleted file mode 100644
index 89ea60d1bc..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_non\_zero\_edge module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_non_zero_edge
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst
deleted file mode 100644
index 7dfbd0717d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_normals\_unlocked module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_normals_unlocked
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst
deleted file mode 100644
index f5df633124..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_overlapping\_uvs module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_overlapping_uvs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst
deleted file mode 100644
index b3cd77ab2a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_shader\_connections module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_shader_connections
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst
deleted file mode 100644
index 29a1217437..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_single\_uv\_set module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_single_uv_set
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst
deleted file mode 100644
index 49d1b22497..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_uv\_set\_map1 module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_uv_set_map1
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst
deleted file mode 100644
index 99e3047e3d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_vertices\_have\_edges module
-======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_vertices_have_edges
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_model_content.rst b/docs/source/pype.plugins.maya.publish.validate_model_content.rst
deleted file mode 100644
index dc0a415718..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_model_content.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_model\_content module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_model_content
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_model_name.rst b/docs/source/pype.plugins.maya.publish.validate_model_name.rst
deleted file mode 100644
index ea78ceea70..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_model_name.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_model\_name module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_model_name
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst b/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst
deleted file mode 100644
index 4a4a1e926b..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_muster\_connection module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_muster_connection
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_animation.rst b/docs/source/pype.plugins.maya.publish.validate_no_animation.rst
deleted file mode 100644
index b42021369d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_animation.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_animation module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst b/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst
deleted file mode 100644
index 59544369f6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_default\_camera module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_default_camera
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst b/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst
deleted file mode 100644
index bdf4ceb324..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_namespace module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_namespace
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst b/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst
deleted file mode 100644
index 12beed8c33..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_null\_transforms module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_null_transforms
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst b/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst
deleted file mode 100644
index 12c977dbb9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_unknown\_nodes module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_unknown_nodes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst b/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst
deleted file mode 100644
index a1a0b9ee64..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_vraymesh module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_vraymesh
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids.rst
deleted file mode 100644
index 7b1d79100f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst
deleted file mode 100644
index 90ef81c5b5..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_deformed\_shapes module
-======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_deformed_shapes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst
deleted file mode 100644
index 5eb0047d16..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_in\_database module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_in_database
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst
deleted file mode 100644
index 1f030462ae..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_related module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_related
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst
deleted file mode 100644
index 20ba3a3a6d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_unique module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_unique
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst b/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst
deleted file mode 100644
index 8315888630..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_no\_ghosting module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_no_ghosting
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst b/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst
deleted file mode 100644
index 88870a9ea8..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_image\_rule module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_image_rule
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst b/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst
deleted file mode 100644
index b464dbeab6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_no\_default\_cameras module
-=======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_no_default_cameras
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst b/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst
deleted file mode 100644
index 60a0cbd6fb..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_single\_camera module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_single_camera
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst b/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst
deleted file mode 100644
index 65d5181065..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_renderlayer\_aovs module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_renderlayer_aovs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst b/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst
deleted file mode 100644
index fce7dba5b8..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rendersettings module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rendersettings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_resources.rst b/docs/source/pype.plugins.maya.publish.validate_resources.rst
deleted file mode 100644
index 0a866acdbb..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_resources.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_resources module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_resources
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst b/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst
deleted file mode 100644
index dbd7d84bed..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_contents module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst b/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst
deleted file mode 100644
index 3bf075e8ad..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_controllers module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_controllers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst
deleted file mode 100644
index 67e9256f3a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_controllers\_arnold\_attributes module
-===============================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst
deleted file mode 100644
index e4f1cfc428..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_out\_set\_node\_ids module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_out_set_node_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst b/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst
deleted file mode 100644
index e1d3b1a659..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_output\_ids module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_output_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst b/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst
deleted file mode 100644
index daf2f152d9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_scene\_set\_workspace module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_scene_set_workspace
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shader_name.rst b/docs/source/pype.plugins.maya.publish.validate_shader_name.rst
deleted file mode 100644
index ae5b196a1d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shader_name.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shader\_name module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shader_name
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst b/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst
deleted file mode 100644
index 49effc932d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shape\_default\_names module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shape_default_names
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst b/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst
deleted file mode 100644
index 359af50a0f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shape\_render\_stats module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shape_render_stats
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst b/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst
deleted file mode 100644
index 090f57b3ff..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_single\_assembly module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_single_assembly
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst b/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst
deleted file mode 100644
index 607a610097..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_skinCluster\_deformer\_set module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_skinCluster_deformer_set
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_step_size.rst b/docs/source/pype.plugins.maya.publish.validate_step_size.rst
deleted file mode 100644
index bb883ea7b5..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_step_size.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_step\_size module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_step_size
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst b/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst
deleted file mode 100644
index 4d7edda78d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_transform\_naming\_suffix module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_transform_naming_suffix
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst b/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst
deleted file mode 100644
index 6d5cacfe00..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_transform\_zero module
-==========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_transform_zero
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst b/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst
deleted file mode 100644
index 9cc17d6810..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unicode\_strings module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unicode_strings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst
deleted file mode 100644
index 4dcb518194..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_mesh\_triangulated module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_mesh_triangulated
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst
deleted file mode 100644
index f7225ab395..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_staticmesh\_naming module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_staticmesh_naming
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst
deleted file mode 100644
index ff688c493f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_up\_axis module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_up_axis
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst b/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst
deleted file mode 100644
index f5d05e6d76..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_distributed\_rendering module
-=======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_distributed_rendering
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst b/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst
deleted file mode 100644
index 16ad9666aa..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_referenced\_aovs module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_referenced_aovs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst b/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst
deleted file mode 100644
index a06a9531dd..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_translator\_settings module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_translator_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst b/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst
deleted file mode 100644
index 081f58924a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vrayproxy module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vrayproxy
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst b/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst
deleted file mode 100644
index 7c587f39b0..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vrayproxy\_members module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vrayproxy_members
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst
deleted file mode 100644
index 889d469b2f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_renderscript\_callbacks module
-========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_renderscript_callbacks
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst
deleted file mode 100644
index 4138b1e8a4..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_cache\_state module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_cache_state
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst
deleted file mode 100644
index 37b862926c..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_input\_in\_instance module
-=========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_input_in_instance
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst
deleted file mode 100644
index 9fd54193dc..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_settings module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.rst b/docs/source/pype.plugins.maya.rst
deleted file mode 100644
index 129cf5fce9..0000000000
--- a/docs/source/pype.plugins.maya.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.plugins.maya package
-=========================
-
-.. automodule:: pype.plugins.maya
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.plugins.maya.publish
diff --git a/docs/source/pype.plugins.rst b/docs/source/pype.plugins.rst
deleted file mode 100644
index 8e5e45ba5d..0000000000
--- a/docs/source/pype.plugins.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.plugins package
-====================
-
-.. automodule:: pype.plugins
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.plugins.maya
diff --git a/docs/source/pype.pype_commands.rst b/docs/source/pype.pype_commands.rst
deleted file mode 100644
index b8a416df7b..0000000000
--- a/docs/source/pype.pype_commands.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.pype\_commands module
-==========================
-
-.. automodule:: pype.pype_commands
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.resources.rst b/docs/source/pype.resources.rst
deleted file mode 100644
index 2fb5b92dce..0000000000
--- a/docs/source/pype.resources.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.resources package
-======================
-
-.. automodule:: pype.resources
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.rst b/docs/source/pype.rst
deleted file mode 100644
index 3589d2f3fe..0000000000
--- a/docs/source/pype.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-pype package
-============
-
-.. automodule:: pype
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.hosts
-   pype.lib
-   pype.modules
-   pype.resources
-   pype.scripts
-   pype.settings
-   pype.tests
-   pype.tools
-   pype.vendor
-   pype.widgets
-
-Submodules
-----------
-
-pype.action module
-------------------
-
-.. automodule:: pype.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.api module
----------------
-
-.. automodule:: pype.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.cli module
----------------
-
-.. automodule:: pype.cli
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.launcher\_actions module
------------------------------
-
-.. automodule:: pype.launcher_actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules\_manager module
-----------------------------
-
-.. automodule:: pype.modules_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.plugin module
-------------------
-
-.. automodule:: pype.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.pype\_commands module
---------------------------
-
-.. automodule:: pype.pype_commands
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.setdress\_api module
--------------------------
-
-.. automodule:: pype.setdress_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.version module
--------------------
-
-.. automodule:: pype.version
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.export_maya_ass_job.rst b/docs/source/pype.scripts.export_maya_ass_job.rst
deleted file mode 100644
index c35cc49ddd..0000000000
--- a/docs/source/pype.scripts.export_maya_ass_job.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.export\_maya\_ass\_job module
-==========================================
-
-.. automodule:: pype.scripts.export_maya_ass_job
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.fusion_switch_shot.rst b/docs/source/pype.scripts.fusion_switch_shot.rst
deleted file mode 100644
index 39d3473d16..0000000000
--- a/docs/source/pype.scripts.fusion_switch_shot.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.fusion\_switch\_shot module
-========================================
-
-.. automodule:: pype.scripts.fusion_switch_shot
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.otio_burnin.rst b/docs/source/pype.scripts.otio_burnin.rst
deleted file mode 100644
index e6a93017f5..0000000000
--- a/docs/source/pype.scripts.otio_burnin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.otio\_burnin module
-================================
-
-.. automodule:: pype.scripts.otio_burnin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.publish_deadline.rst b/docs/source/pype.scripts.publish_deadline.rst
deleted file mode 100644
index d134e17244..0000000000
--- a/docs/source/pype.scripts.publish_deadline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.publish\_deadline module
-=====================================
-
-.. automodule:: pype.scripts.publish_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.publish_filesequence.rst b/docs/source/pype.scripts.publish_filesequence.rst
deleted file mode 100644
index 440d52caad..0000000000
--- a/docs/source/pype.scripts.publish_filesequence.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.publish\_filesequence module
-=========================================
-
-.. automodule:: pype.scripts.publish_filesequence
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.rst b/docs/source/pype.scripts.rst
deleted file mode 100644
index 5985771b97..0000000000
--- a/docs/source/pype.scripts.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.scripts package
-====================
-
-.. automodule:: pype.scripts
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.scripts.slates
-
-Submodules
-----------
-
-pype.scripts.export\_maya\_ass\_job module
-------------------------------------------
-
-.. automodule:: pype.scripts.export_maya_ass_job
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.fusion\_switch\_shot module
-----------------------------------------
-
-.. automodule:: pype.scripts.fusion_switch_shot
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.otio\_burnin module
---------------------------------
-
-.. automodule:: pype.scripts.otio_burnin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.publish\_deadline module
--------------------------------------
-
-.. automodule:: pype.scripts.publish_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.publish\_filesequence module
------------------------------------------
-
-.. automodule:: pype.scripts.publish_filesequence
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.rst b/docs/source/pype.scripts.slates.rst
deleted file mode 100644
index 74b4cb4343..0000000000
--- a/docs/source/pype.scripts.slates.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.scripts.slates package
-===========================
-
-.. automodule:: pype.scripts.slates
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.scripts.slates.slate_base
diff --git a/docs/source/pype.scripts.slates.slate_base.api.rst b/docs/source/pype.scripts.slates.slate_base.api.rst
deleted file mode 100644
index 0016a5c42a..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.api module
-==========================================
-
-.. automodule:: pype.scripts.slates.slate_base.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.base.rst b/docs/source/pype.scripts.slates.slate_base.base.rst
deleted file mode 100644
index 5e34d654b0..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.base.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.base module
-===========================================
-
-.. automodule:: pype.scripts.slates.slate_base.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.example.rst b/docs/source/pype.scripts.slates.slate_base.example.rst
deleted file mode 100644
index 95ebcc835a..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.example.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.example module
-==============================================
-
-.. automodule:: pype.scripts.slates.slate_base.example
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.font_factory.rst b/docs/source/pype.scripts.slates.slate_base.font_factory.rst
deleted file mode 100644
index c53efef554..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.font_factory.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.font\_factory module
-====================================================
-
-.. automodule:: pype.scripts.slates.slate_base.font_factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.items.rst b/docs/source/pype.scripts.slates.slate_base.items.rst
deleted file mode 100644
index 25abb11bb9..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.items.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.items module
-============================================
-
-.. automodule:: pype.scripts.slates.slate_base.items
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.layer.rst b/docs/source/pype.scripts.slates.slate_base.layer.rst
deleted file mode 100644
index 8681e3accf..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.layer.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.layer module
-============================================
-
-.. automodule:: pype.scripts.slates.slate_base.layer
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.lib.rst b/docs/source/pype.scripts.slates.slate_base.lib.rst
deleted file mode 100644
index c4ef2c912e..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.lib module
-==========================================
-
-.. automodule:: pype.scripts.slates.slate_base.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.main_frame.rst b/docs/source/pype.scripts.slates.slate_base.main_frame.rst
deleted file mode 100644
index 5093c28a74..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.main_frame.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.main\_frame module
-==================================================
-
-.. automodule:: pype.scripts.slates.slate_base.main_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.rst b/docs/source/pype.scripts.slates.slate_base.rst
deleted file mode 100644
index 00726c04bf..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-pype.scripts.slates.slate\_base package
-=======================================
-
-.. automodule:: pype.scripts.slates.slate_base
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.scripts.slates.slate\_base.api module
-------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.base module
--------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.example module
-----------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.example
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.font\_factory module
-----------------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.font_factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.items module
---------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.items
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.layer module
---------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.layer
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.lib module
-------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.main\_frame module
---------------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.main_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.setdress_api.rst b/docs/source/pype.setdress_api.rst
deleted file mode 100644
index 95638ea64d..0000000000
--- a/docs/source/pype.setdress_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.setdress\_api module
-=========================
-
-.. automodule:: pype.setdress_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.constants.rst b/docs/source/pype.settings.constants.rst
deleted file mode 100644
index ac652089c8..0000000000
--- a/docs/source/pype.settings.constants.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.constants module
-==============================
-
-.. automodule:: pype.settings.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.handlers.rst b/docs/source/pype.settings.handlers.rst
deleted file mode 100644
index 60ea0ae952..0000000000
--- a/docs/source/pype.settings.handlers.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.handlers module
-=============================
-
-.. automodule:: pype.settings.handlers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.lib.rst b/docs/source/pype.settings.lib.rst
deleted file mode 100644
index d6e3e8bd06..0000000000
--- a/docs/source/pype.settings.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.lib module
-========================
-
-.. automodule:: pype.settings.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.rst b/docs/source/pype.settings.rst
deleted file mode 100644
index 5bf131d555..0000000000
--- a/docs/source/pype.settings.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.settings package
-=====================
-
-.. automodule:: pype.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.settings.lib module
-------------------------
-
-.. automodule:: pype.settings.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.lib.rst b/docs/source/pype.tests.lib.rst
deleted file mode 100644
index 375ebd0258..0000000000
--- a/docs/source/pype.tests.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.lib module
-=====================
-
-.. automodule:: pype.tests.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.rst b/docs/source/pype.tests.rst
deleted file mode 100644
index 3f34cdcd77..0000000000
--- a/docs/source/pype.tests.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-pype.tests package
-==================
-
-.. automodule:: pype.tests
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tests.lib module
----------------------
-
-.. automodule:: pype.tests.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_avalon\_plugin\_presets module
------------------------------------------------
-
-.. automodule:: pype.tests.test_avalon_plugin_presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_mongo\_performance module
-------------------------------------------
-
-.. automodule:: pype.tests.test_mongo_performance
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_pyblish\_filter module
----------------------------------------
-
-.. automodule:: pype.tests.test_pyblish_filter
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_avalon_plugin_presets.rst b/docs/source/pype.tests.test_avalon_plugin_presets.rst
deleted file mode 100644
index b4ff802256..0000000000
--- a/docs/source/pype.tests.test_avalon_plugin_presets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_avalon\_plugin\_presets module
-===============================================
-
-.. automodule:: pype.tests.test_avalon_plugin_presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_lib_restructuralization.rst b/docs/source/pype.tests.test_lib_restructuralization.rst
deleted file mode 100644
index 8d426fcb6b..0000000000
--- a/docs/source/pype.tests.test_lib_restructuralization.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_lib\_restructuralization module
-================================================
-
-.. automodule:: pype.tests.test_lib_restructuralization
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_mongo_performance.rst b/docs/source/pype.tests.test_mongo_performance.rst
deleted file mode 100644
index 4686247e59..0000000000
--- a/docs/source/pype.tests.test_mongo_performance.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_mongo\_performance module
-==========================================
-
-.. automodule:: pype.tests.test_mongo_performance
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_pyblish_filter.rst b/docs/source/pype.tests.test_pyblish_filter.rst
deleted file mode 100644
index 196ec02433..0000000000
--- a/docs/source/pype.tests.test_pyblish_filter.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_pyblish\_filter module
-=======================================
-
-.. automodule:: pype.tests.test_pyblish_filter
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.app.rst b/docs/source/pype.tools.assetcreator.app.rst
deleted file mode 100644
index b46281b07a..0000000000
--- a/docs/source/pype.tools.assetcreator.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.app module
-==================================
-
-.. automodule:: pype.tools.assetcreator.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.model.rst b/docs/source/pype.tools.assetcreator.model.rst
deleted file mode 100644
index 752791d07c..0000000000
--- a/docs/source/pype.tools.assetcreator.model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.model module
-====================================
-
-.. automodule:: pype.tools.assetcreator.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.rst b/docs/source/pype.tools.assetcreator.rst
deleted file mode 100644
index b95c3b3c60..0000000000
--- a/docs/source/pype.tools.assetcreator.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.tools.assetcreator package
-===============================
-
-.. automodule:: pype.tools.assetcreator
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.assetcreator.app module
-----------------------------------
-
-.. automodule:: pype.tools.assetcreator.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.assetcreator.model module
-------------------------------------
-
-.. automodule:: pype.tools.assetcreator.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.assetcreator.widget module
--------------------------------------
-
-.. automodule:: pype.tools.assetcreator.widget
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.widget.rst b/docs/source/pype.tools.assetcreator.widget.rst
deleted file mode 100644
index 23ed335306..0000000000
--- a/docs/source/pype.tools.assetcreator.widget.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.widget module
-=====================================
-
-.. automodule:: pype.tools.assetcreator.widget
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.actions.rst b/docs/source/pype.tools.launcher.actions.rst
deleted file mode 100644
index e2ec217d4b..0000000000
--- a/docs/source/pype.tools.launcher.actions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.actions module
-==================================
-
-.. automodule:: pype.tools.launcher.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.delegates.rst b/docs/source/pype.tools.launcher.delegates.rst
deleted file mode 100644
index e8a7519cd5..0000000000
--- a/docs/source/pype.tools.launcher.delegates.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.delegates module
-====================================
-
-.. automodule:: pype.tools.launcher.delegates
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.flickcharm.rst b/docs/source/pype.tools.launcher.flickcharm.rst
deleted file mode 100644
index 5105d3235e..0000000000
--- a/docs/source/pype.tools.launcher.flickcharm.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.flickcharm module
-=====================================
-
-.. automodule:: pype.tools.launcher.flickcharm
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.lib.rst b/docs/source/pype.tools.launcher.lib.rst
deleted file mode 100644
index 28db8a6540..0000000000
--- a/docs/source/pype.tools.launcher.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.lib module
-==============================
-
-.. automodule:: pype.tools.launcher.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.models.rst b/docs/source/pype.tools.launcher.models.rst
deleted file mode 100644
index 701826284e..0000000000
--- a/docs/source/pype.tools.launcher.models.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.models module
-=================================
-
-.. automodule:: pype.tools.launcher.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.rst b/docs/source/pype.tools.launcher.rst
deleted file mode 100644
index c4782bf9bb..0000000000
--- a/docs/source/pype.tools.launcher.rst
+++ /dev/null
@@ -1,66 +0,0 @@
-pype.tools.launcher package
-===========================
-
-.. automodule:: pype.tools.launcher
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.launcher.actions module
-----------------------------------
-
-.. automodule:: pype.tools.launcher.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.delegates module
-------------------------------------
-
-.. automodule:: pype.tools.launcher.delegates
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.flickcharm module
--------------------------------------
-
-.. automodule:: pype.tools.launcher.flickcharm
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.lib module
-------------------------------
-
-.. automodule:: pype.tools.launcher.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.models module
----------------------------------
-
-.. automodule:: pype.tools.launcher.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.widgets module
-----------------------------------
-
-.. automodule:: pype.tools.launcher.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.window module
----------------------------------
-
-.. automodule:: pype.tools.launcher.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.widgets.rst b/docs/source/pype.tools.launcher.widgets.rst
deleted file mode 100644
index 400a5b7a2c..0000000000
--- a/docs/source/pype.tools.launcher.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.widgets module
-==================================
-
-.. automodule:: pype.tools.launcher.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.window.rst b/docs/source/pype.tools.launcher.window.rst
deleted file mode 100644
index ae92207795..0000000000
--- a/docs/source/pype.tools.launcher.window.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.window module
-=================================
-
-.. automodule:: pype.tools.launcher.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.app.rst b/docs/source/pype.tools.pyblish_pype.app.rst
deleted file mode 100644
index a70aada725..0000000000
--- a/docs/source/pype.tools.pyblish_pype.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.app module
-===================================
-
-.. automodule:: pype.tools.pyblish_pype.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.awesome.rst b/docs/source/pype.tools.pyblish_pype.awesome.rst
deleted file mode 100644
index 50a81ac5e8..0000000000
--- a/docs/source/pype.tools.pyblish_pype.awesome.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.awesome module
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.awesome
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.compat.rst b/docs/source/pype.tools.pyblish_pype.compat.rst
deleted file mode 100644
index 4beee41e00..0000000000
--- a/docs/source/pype.tools.pyblish_pype.compat.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.compat module
-======================================
-
-.. automodule:: pype.tools.pyblish_pype.compat
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.constants.rst b/docs/source/pype.tools.pyblish_pype.constants.rst
deleted file mode 100644
index bab67a2270..0000000000
--- a/docs/source/pype.tools.pyblish_pype.constants.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.constants module
-=========================================
-
-.. automodule:: pype.tools.pyblish_pype.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.control.rst b/docs/source/pype.tools.pyblish_pype.control.rst
deleted file mode 100644
index c2f8c0031e..0000000000
--- a/docs/source/pype.tools.pyblish_pype.control.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.control module
-=======================================
-
-.. 
automodule:: pype.tools.pyblish_pype.control - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.delegate.rst b/docs/source/pype.tools.pyblish_pype.delegate.rst deleted file mode 100644 index 8796c9830f..0000000000 --- a/docs/source/pype.tools.pyblish_pype.delegate.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.delegate module -======================================== - -.. automodule:: pype.tools.pyblish_pype.delegate - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.mock.rst b/docs/source/pype.tools.pyblish_pype.mock.rst deleted file mode 100644 index 8c22e80856..0000000000 --- a/docs/source/pype.tools.pyblish_pype.mock.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.mock module -==================================== - -.. automodule:: pype.tools.pyblish_pype.mock - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.model.rst b/docs/source/pype.tools.pyblish_pype.model.rst deleted file mode 100644 index 983b06cc8a..0000000000 --- a/docs/source/pype.tools.pyblish_pype.model.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.model module -===================================== - -.. automodule:: pype.tools.pyblish_pype.model - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.rst b/docs/source/pype.tools.pyblish_pype.rst deleted file mode 100644 index 9479b5399f..0000000000 --- a/docs/source/pype.tools.pyblish_pype.rst +++ /dev/null @@ -1,130 +0,0 @@ -pype.tools.pyblish\_pype package -================================ - -.. automodule:: pype.tools.pyblish_pype - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.pyblish_pype.vendor - -Submodules ----------- - -pype.tools.pyblish\_pype.app module ------------------------------------ - -.. automodule:: pype.tools.pyblish_pype.app - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.awesome module ---------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.awesome - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.compat module --------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.compat - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.constants module ------------------------------------------ - -.. automodule:: pype.tools.pyblish_pype.constants - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.control module ---------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.control - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.delegate module ----------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.delegate - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.mock module ------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.mock - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.model module -------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.model - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.settings module ----------------------------------------- - -.. 
automodule:: pype.tools.pyblish_pype.settings - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.util module ------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.util - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.version module ---------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.version - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.view module ------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.view - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.widgets module ---------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.widgets - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.window module --------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.settings.rst b/docs/source/pype.tools.pyblish_pype.settings.rst deleted file mode 100644 index 2e4e95cca0..0000000000 --- a/docs/source/pype.tools.pyblish_pype.settings.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.settings module -======================================== - -.. automodule:: pype.tools.pyblish_pype.settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.util.rst b/docs/source/pype.tools.pyblish_pype.util.rst deleted file mode 100644 index fa34295f12..0000000000 --- a/docs/source/pype.tools.pyblish_pype.util.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.util module -==================================== - -.. automodule:: pype.tools.pyblish_pype.util - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst deleted file mode 100644 index a892128308..0000000000 --- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.vendor.qtawesome.animation module -========================================================== - -.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.animation - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst deleted file mode 100644 index 4f4337348f..0000000000 --- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.vendor.qtawesome.iconic\_font module -============================================================= - -.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.iconic_font - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst deleted file mode 100644 index 68b2ec4659..0000000000 --- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.tools.pyblish\_pype.vendor.qtawesome package -================================================= - -.. 
automodule:: pype.tools.pyblish_pype.vendor.qtawesome - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.tools.pyblish\_pype.vendor.qtawesome.animation module ----------------------------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.animation - :members: - :undoc-members: - :show-inheritance: - -pype.tools.pyblish\_pype.vendor.qtawesome.iconic\_font module -------------------------------------------------------------- - -.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.iconic_font - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.vendor.rst b/docs/source/pype.tools.pyblish_pype.vendor.rst deleted file mode 100644 index 69e6096053..0000000000 --- a/docs/source/pype.tools.pyblish_pype.vendor.rst +++ /dev/null @@ -1,15 +0,0 @@ -pype.tools.pyblish\_pype.vendor package -======================================= - -.. automodule:: pype.tools.pyblish_pype.vendor - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.pyblish_pype.vendor.qtawesome diff --git a/docs/source/pype.tools.pyblish_pype.version.rst b/docs/source/pype.tools.pyblish_pype.version.rst deleted file mode 100644 index a6ddcd5ce8..0000000000 --- a/docs/source/pype.tools.pyblish_pype.version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.version module -======================================= - -.. automodule:: pype.tools.pyblish_pype.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.view.rst b/docs/source/pype.tools.pyblish_pype.view.rst deleted file mode 100644 index 21d34d9daa..0000000000 --- a/docs/source/pype.tools.pyblish_pype.view.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.view module -==================================== - -.. automodule:: pype.tools.pyblish_pype.view - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.widgets.rst b/docs/source/pype.tools.pyblish_pype.widgets.rst deleted file mode 100644 index 8a0d3c380a..0000000000 --- a/docs/source/pype.tools.pyblish_pype.widgets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.widgets module -======================================= - -.. automodule:: pype.tools.pyblish_pype.widgets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.pyblish_pype.window.rst b/docs/source/pype.tools.pyblish_pype.window.rst deleted file mode 100644 index 10f7b1a36e..0000000000 --- a/docs/source/pype.tools.pyblish_pype.window.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.pyblish\_pype.window module -====================================== - -.. automodule:: pype.tools.pyblish_pype.window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.rst b/docs/source/pype.tools.rst deleted file mode 100644 index d82ed3384a..0000000000 --- a/docs/source/pype.tools.rst +++ /dev/null @@ -1,19 +0,0 @@ -pype.tools package -================== - -.. automodule:: pype.tools - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. 
toctree:: - :maxdepth: 6 - - pype.tools.assetcreator - pype.tools.launcher - pype.tools.pyblish_pype - pype.tools.settings - pype.tools.standalonepublish diff --git a/docs/source/pype.tools.settings.rst b/docs/source/pype.tools.settings.rst deleted file mode 100644 index ef54851ab1..0000000000 --- a/docs/source/pype.tools.settings.rst +++ /dev/null @@ -1,15 +0,0 @@ -pype.tools.settings package -=========================== - -.. automodule:: pype.tools.settings - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.settings.settings diff --git a/docs/source/pype.tools.settings.settings.rst b/docs/source/pype.tools.settings.settings.rst deleted file mode 100644 index 793914e1a8..0000000000 --- a/docs/source/pype.tools.settings.settings.rst +++ /dev/null @@ -1,16 +0,0 @@ -pype.tools.settings.settings package -==================================== - -.. automodule:: pype.tools.settings.settings - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.settings.settings.style - pype.tools.settings.settings.widgets diff --git a/docs/source/pype.tools.settings.settings.style.rst b/docs/source/pype.tools.settings.settings.style.rst deleted file mode 100644 index 228322245c..0000000000 --- a/docs/source/pype.tools.settings.settings.style.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.style package -========================================== - -.. automodule:: pype.tools.settings.settings.style - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst b/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst deleted file mode 100644 index ca951c82f0..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.anatomy\_types module -========================================================== - -.. automodule:: pype.tools.settings.settings.widgets.anatomy_types - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.base.rst b/docs/source/pype.tools.settings.settings.widgets.base.rst deleted file mode 100644 index 8964d6f628..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.base module -================================================ - -.. automodule:: pype.tools.settings.settings.widgets.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.item_types.rst b/docs/source/pype.tools.settings.settings.widgets.item_types.rst deleted file mode 100644 index 5e505538a7..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.item_types.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.item\_types module -======================================================= - -.. 
automodule:: pype.tools.settings.settings.widgets.item_types - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.lib.rst b/docs/source/pype.tools.settings.settings.widgets.lib.rst deleted file mode 100644 index ae100c74b2..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.lib module -=============================================== - -.. automodule:: pype.tools.settings.settings.widgets.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst b/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst deleted file mode 100644 index 004f2ae21f..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.multiselection\_combobox module -==================================================================== - -.. automodule:: pype.tools.settings.settings.widgets.multiselection_combobox - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.rst b/docs/source/pype.tools.settings.settings.widgets.rst deleted file mode 100644 index 8734280a08..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.rst +++ /dev/null @@ -1,74 +0,0 @@ -pype.tools.settings.settings.widgets package -============================================ - -.. automodule:: pype.tools.settings.settings.widgets - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.tools.settings.settings.widgets.anatomy\_types module ----------------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.anatomy_types - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.base module ------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.base - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.item\_types module -------------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.item_types - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.lib module ------------------------------------------------ - -.. automodule:: pype.tools.settings.settings.widgets.lib - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.multiselection\_combobox module --------------------------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.multiselection_combobox - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.tests module -------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.tests - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.widgets module ---------------------------------------------------- - -.. automodule:: pype.tools.settings.settings.widgets.widgets - :members: - :undoc-members: - :show-inheritance: - -pype.tools.settings.settings.widgets.window module --------------------------------------------------- - -.. 
automodule:: pype.tools.settings.settings.widgets.window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.tests.rst b/docs/source/pype.tools.settings.settings.widgets.tests.rst deleted file mode 100644 index fe8d6dabef..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.tests.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.tests module -================================================= - -.. automodule:: pype.tools.settings.settings.widgets.tests - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.widgets.rst b/docs/source/pype.tools.settings.settings.widgets.widgets.rst deleted file mode 100644 index 238e584ac3..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.widgets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.widgets module -=================================================== - -.. automodule:: pype.tools.settings.settings.widgets.widgets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.settings.settings.widgets.window.rst b/docs/source/pype.tools.settings.settings.widgets.window.rst deleted file mode 100644 index d67678012f..0000000000 --- a/docs/source/pype.tools.settings.settings.widgets.window.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.settings.settings.widgets.window module -================================================== - -.. automodule:: pype.tools.settings.settings.widgets.window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.app.rst b/docs/source/pype.tools.standalonepublish.app.rst deleted file mode 100644 index 74776b80fe..0000000000 --- a/docs/source/pype.tools.standalonepublish.app.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.app module -======================================= - -.. automodule:: pype.tools.standalonepublish.app - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.publish.rst b/docs/source/pype.tools.standalonepublish.publish.rst deleted file mode 100644 index 47ad57e7fb..0000000000 --- a/docs/source/pype.tools.standalonepublish.publish.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.publish module -=========================================== - -.. automodule:: pype.tools.standalonepublish.publish - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.rst b/docs/source/pype.tools.standalonepublish.rst deleted file mode 100644 index 5ca8194b61..0000000000 --- a/docs/source/pype.tools.standalonepublish.rst +++ /dev/null @@ -1,34 +0,0 @@ -pype.tools.standalonepublish package -==================================== - -.. automodule:: pype.tools.standalonepublish - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.standalonepublish.widgets - -Submodules ----------- - -pype.tools.standalonepublish.app module ---------------------------------------- - -.. automodule:: pype.tools.standalonepublish.app - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.publish module -------------------------------------------- - -.. 
automodule:: pype.tools.standalonepublish.publish - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst b/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst deleted file mode 100644 index 84d0ca2d93..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_asset module -======================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.model_asset - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst b/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst deleted file mode 100644 index 0c3ae79b99..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_filter\_proxy\_exact\_match module -============================================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst b/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst deleted file mode 100644 index b828b75030..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_filter\_proxy\_recursive\_sort module -================================================================================= - -.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_node.rst b/docs/source/pype.tools.standalonepublish.widgets.model_node.rst deleted file mode 100644 index 4789b14501..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_node.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_node module -======================================================= - -.. automodule:: pype.tools.standalonepublish.widgets.model_node - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst deleted file mode 100644 index dbee838530..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_tasks\_template module -================================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.model_tasks_template - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst deleted file mode 100644 index 38eecb095a..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_tree module -======================================================= - -.. 
automodule:: pype.tools.standalonepublish.widgets.model_tree - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst deleted file mode 100644 index 9afb505113..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.model\_tree\_view\_deselectable module -=========================================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.model_tree_view_deselectable - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.resources.rst b/docs/source/pype.tools.standalonepublish.widgets.resources.rst deleted file mode 100644 index a0eddae63e..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.resources.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.resources package -====================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.resources - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.rst b/docs/source/pype.tools.standalonepublish.widgets.rst deleted file mode 100644 index 65bbcb62fc..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.rst +++ /dev/null @@ -1,146 +0,0 @@ -pype.tools.standalonepublish.widgets package -============================================ - -.. automodule:: pype.tools.standalonepublish.widgets - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.tools.standalonepublish.widgets.resources - -Submodules ----------- - -pype.tools.standalonepublish.widgets.model\_asset module --------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_asset - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_filter\_proxy\_exact\_match module ------------------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_filter\_proxy\_recursive\_sort module ---------------------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_node module -------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_node - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_tasks\_template module ------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_tasks_template - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_tree module -------------------------------------------------------- - -.. 
automodule:: pype.tools.standalonepublish.widgets.model_tree - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.model\_tree\_view\_deselectable module ---------------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.model_tree_view_deselectable - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_asset module ---------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_asset - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_component\_item module -------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_component_item - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_components module --------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_components - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_components\_list module --------------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_components_list - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_drop\_empty module ---------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_empty - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_drop\_frame module ---------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_frame - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_family module ----------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_family - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_family\_desc module ----------------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_family_desc - :members: - :undoc-members: - :show-inheritance: - -pype.tools.standalonepublish.widgets.widget\_shadow module ----------------------------------------------------------- - -.. automodule:: pype.tools.standalonepublish.widgets.widget_shadow - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst deleted file mode 100644 index 51a3763628..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_asset module -========================================================= - -.. 
automodule:: pype.tools.standalonepublish.widgets.widget_asset - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst deleted file mode 100644 index 3495fdf192..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_component\_item module -=================================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_component_item - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst deleted file mode 100644 index be7c19af9f..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_components module -============================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_components - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst deleted file mode 100644 index 051efe07fe..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_components\_list module -==================================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_components_list - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst deleted file mode 100644 index b5b0a6acac..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_drop\_empty module -=============================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_empty - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst deleted file mode 100644 index 6b3e3690e0..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_drop\_frame module -=============================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_frame - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst deleted file mode 100644 index 24c9d5496f..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_family module -========================================================== - -.. 
automodule:: pype.tools.standalonepublish.widgets.widget_family - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst deleted file mode 100644 index 5a7f92177f..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_family\_desc module -================================================================ - -.. automodule:: pype.tools.standalonepublish.widgets.widget_family_desc - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst deleted file mode 100644 index 19f5c22198..0000000000 --- a/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.standalonepublish.widgets.widget\_shadow module -========================================================== - -.. automodule:: pype.tools.standalonepublish.widgets.widget_shadow - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.tray.pype_tray.rst b/docs/source/pype.tools.tray.pype_tray.rst deleted file mode 100644 index 9fc49c5763..0000000000 --- a/docs/source/pype.tools.tray.pype_tray.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.tray.pype\_tray module -================================= - -.. automodule:: pype.tools.tray.pype_tray - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.tray.rst b/docs/source/pype.tools.tray.rst deleted file mode 100644 index b28059d170..0000000000 --- a/docs/source/pype.tools.tray.rst +++ /dev/null @@ -1,15 +0,0 @@ -pype.tools.tray package -======================= - -.. automodule:: pype.tools.tray - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.tools.tray.pype_tray diff --git a/docs/source/pype.tools.workfiles.app.rst b/docs/source/pype.tools.workfiles.app.rst deleted file mode 100644 index a3a46b8a07..0000000000 --- a/docs/source/pype.tools.workfiles.app.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.workfiles.app module -=============================== - -.. automodule:: pype.tools.workfiles.app - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.workfiles.model.rst b/docs/source/pype.tools.workfiles.model.rst deleted file mode 100644 index 44cea32b97..0000000000 --- a/docs/source/pype.tools.workfiles.model.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.workfiles.model module -================================= - -.. automodule:: pype.tools.workfiles.model - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.tools.workfiles.rst b/docs/source/pype.tools.workfiles.rst deleted file mode 100644 index 147c4cebbe..0000000000 --- a/docs/source/pype.tools.workfiles.rst +++ /dev/null @@ -1,17 +0,0 @@ -pype.tools.workfiles package -============================ - -.. automodule:: pype.tools.workfiles - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. 
toctree:: - :maxdepth: 10 - - pype.tools.workfiles.app - pype.tools.workfiles.model - pype.tools.workfiles.view diff --git a/docs/source/pype.tools.workfiles.view.rst b/docs/source/pype.tools.workfiles.view.rst deleted file mode 100644 index acd32ed250..0000000000 --- a/docs/source/pype.tools.workfiles.view.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.tools.workfiles.view module -================================ - -.. automodule:: pype.tools.workfiles.view - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.backports.configparser.helpers.rst b/docs/source/pype.vendor.backports.configparser.helpers.rst deleted file mode 100644 index 8d44d0a8c4..0000000000 --- a/docs/source/pype.vendor.backports.configparser.helpers.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.backports.configparser.helpers module -================================================= - -.. automodule:: pype.vendor.backports.configparser.helpers - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.backports.configparser.rst b/docs/source/pype.vendor.backports.configparser.rst deleted file mode 100644 index 4f778a4a87..0000000000 --- a/docs/source/pype.vendor.backports.configparser.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.vendor.backports.configparser package -========================================== - -.. automodule:: pype.vendor.backports.configparser - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.backports.configparser.helpers module -------------------------------------------------- - -.. automodule:: pype.vendor.backports.configparser.helpers - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.backports.functools_lru_cache.rst b/docs/source/pype.vendor.backports.functools_lru_cache.rst deleted file mode 100644 index 26f2746cec..0000000000 --- a/docs/source/pype.vendor.backports.functools_lru_cache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.backports.functools\_lru\_cache module -================================================== - -.. automodule:: pype.vendor.backports.functools_lru_cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.backports.rst b/docs/source/pype.vendor.backports.rst deleted file mode 100644 index ff9efc29c5..0000000000 --- a/docs/source/pype.vendor.backports.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.vendor.backports package -============================= - -.. automodule:: pype.vendor.backports - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.backports.configparser - -Submodules ----------- - -pype.vendor.backports.functools\_lru\_cache module --------------------------------------------------- - -.. automodule:: pype.vendor.backports.functools_lru_cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.builtins.rst b/docs/source/pype.vendor.builtins.rst deleted file mode 100644 index e21fb768ed..0000000000 --- a/docs/source/pype.vendor.builtins.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.builtins package -============================ - -.. automodule:: pype.vendor.builtins - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture.rst b/docs/source/pype.vendor.capture.rst deleted file mode 100644 index d42e073fb5..0000000000 --- a/docs/source/pype.vendor.capture.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture module -========================== - -.. 
automodule:: pype.vendor.capture - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.accordion.rst b/docs/source/pype.vendor.capture_gui.accordion.rst deleted file mode 100644 index cca228f151..0000000000 --- a/docs/source/pype.vendor.capture_gui.accordion.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.accordion module -========================================= - -.. automodule:: pype.vendor.capture_gui.accordion - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.app.rst b/docs/source/pype.vendor.capture_gui.app.rst deleted file mode 100644 index 291296834e..0000000000 --- a/docs/source/pype.vendor.capture_gui.app.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.app module -=================================== - -.. automodule:: pype.vendor.capture_gui.app - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.colorpicker.rst b/docs/source/pype.vendor.capture_gui.colorpicker.rst deleted file mode 100644 index c9e56500f2..0000000000 --- a/docs/source/pype.vendor.capture_gui.colorpicker.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.colorpicker module -=========================================== - -.. automodule:: pype.vendor.capture_gui.colorpicker - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.lib.rst b/docs/source/pype.vendor.capture_gui.lib.rst deleted file mode 100644 index e94a3bd196..0000000000 --- a/docs/source/pype.vendor.capture_gui.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.lib module -=================================== - -.. automodule:: pype.vendor.capture_gui.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.plugin.rst b/docs/source/pype.vendor.capture_gui.plugin.rst deleted file mode 100644 index 2e8f58c873..0000000000 --- a/docs/source/pype.vendor.capture_gui.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.plugin module -====================================== - -.. automodule:: pype.vendor.capture_gui.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.presets.rst b/docs/source/pype.vendor.capture_gui.presets.rst deleted file mode 100644 index c81b4e1c23..0000000000 --- a/docs/source/pype.vendor.capture_gui.presets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.presets module -======================================= - -.. automodule:: pype.vendor.capture_gui.presets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.rst b/docs/source/pype.vendor.capture_gui.rst deleted file mode 100644 index f7efce3501..0000000000 --- a/docs/source/pype.vendor.capture_gui.rst +++ /dev/null @@ -1,82 +0,0 @@ -pype.vendor.capture\_gui package -================================ - -.. automodule:: pype.vendor.capture_gui - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.capture_gui.vendor - -Submodules ----------- - -pype.vendor.capture\_gui.accordion module ------------------------------------------ - -.. automodule:: pype.vendor.capture_gui.accordion - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.app module ------------------------------------ - -.. 
automodule:: pype.vendor.capture_gui.app - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.colorpicker module -------------------------------------------- - -.. automodule:: pype.vendor.capture_gui.colorpicker - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.lib module ------------------------------------ - -.. automodule:: pype.vendor.capture_gui.lib - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.plugin module --------------------------------------- - -.. automodule:: pype.vendor.capture_gui.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.presets module ---------------------------------------- - -.. automodule:: pype.vendor.capture_gui.presets - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.tokens module --------------------------------------- - -.. automodule:: pype.vendor.capture_gui.tokens - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.version module ---------------------------------------- - -.. automodule:: pype.vendor.capture_gui.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.tokens.rst b/docs/source/pype.vendor.capture_gui.tokens.rst deleted file mode 100644 index 9e144a4d37..0000000000 --- a/docs/source/pype.vendor.capture_gui.tokens.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.tokens module -====================================== - -.. automodule:: pype.vendor.capture_gui.tokens - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.vendor.Qt.rst b/docs/source/pype.vendor.capture_gui.vendor.Qt.rst deleted file mode 100644 index 447e6dd812..0000000000 --- a/docs/source/pype.vendor.capture_gui.vendor.Qt.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.vendor.Qt module -========================================= - -.. automodule:: pype.vendor.capture_gui.vendor.Qt - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.vendor.rst b/docs/source/pype.vendor.capture_gui.vendor.rst deleted file mode 100644 index 0befc4bbb7..0000000000 --- a/docs/source/pype.vendor.capture_gui.vendor.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.vendor.capture\_gui.vendor package -======================================= - -.. automodule:: pype.vendor.capture_gui.vendor - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.capture\_gui.vendor.Qt module ------------------------------------------ - -.. automodule:: pype.vendor.capture_gui.vendor.Qt - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.version.rst b/docs/source/pype.vendor.capture_gui.version.rst deleted file mode 100644 index 3f0cfbabfd..0000000000 --- a/docs/source/pype.vendor.capture_gui.version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.version module -======================================= - -.. automodule:: pype.vendor.capture_gui.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst deleted file mode 100644 index 5155df82aa..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.base module -================================================= - -.. 
automodule:: pype.vendor.ftrack_api_old.accessor.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst deleted file mode 100644 index 3040fe18fd..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.disk module -================================================= - -.. automodule:: pype.vendor.ftrack_api_old.accessor.disk - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.rst deleted file mode 100644 index 1f7b522460..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.rst +++ /dev/null @@ -1,34 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor package -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.accessor - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.accessor.base module -------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.accessor.disk module -------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.disk - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.accessor.server module ---------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst deleted file mode 100644 index db835f99c4..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.server module -=================================================== - -.. automodule:: pype.vendor.ftrack_api_old.accessor.server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.attribute.rst b/docs/source/pype.vendor.ftrack_api_old.attribute.rst deleted file mode 100644 index 54276ceb2a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.attribute.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.attribute module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.attribute - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.cache.rst b/docs/source/pype.vendor.ftrack_api_old.cache.rst deleted file mode 100644 index 396bc5a1cd..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.cache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.cache module -========================================= - -.. automodule:: pype.vendor.ftrack_api_old.cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.collection.rst b/docs/source/pype.vendor.ftrack_api_old.collection.rst deleted file mode 100644 index de911619fc..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.collection.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.collection module -============================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.collection - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.data.rst b/docs/source/pype.vendor.ftrack_api_old.data.rst deleted file mode 100644 index 2f67185cee..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.data.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.data module -======================================== - -.. automodule:: pype.vendor.ftrack_api_old.data - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst b/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst deleted file mode 100644 index 7ad3d87fd9..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.asset\_version module -========================================================= - -.. automodule:: pype.vendor.ftrack_api_old.entity.asset_version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.base.rst b/docs/source/pype.vendor.ftrack_api_old.entity.base.rst deleted file mode 100644 index b87428f817..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.base module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.component.rst b/docs/source/pype.vendor.ftrack_api_old.entity.component.rst deleted file mode 100644 index 27901ab786..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.component.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.component module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.component - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst b/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst deleted file mode 100644 index caada5c3c8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.factory module -================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.factory - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.job.rst b/docs/source/pype.vendor.ftrack_api_old.entity.job.rst deleted file mode 100644 index 6f4ca18323..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.job.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.job module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.job - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.location.rst b/docs/source/pype.vendor.ftrack_api_old.entity.location.rst deleted file mode 100644 index 2f0b380349..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.location.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.location module -=================================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.entity.location - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.note.rst b/docs/source/pype.vendor.ftrack_api_old.entity.note.rst deleted file mode 100644 index c04e959402..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.note.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.note module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.note - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst b/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst deleted file mode 100644 index 6332a2e523..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.project\_schema module -========================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.project_schema - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.rst b/docs/source/pype.vendor.ftrack_api_old.entity.rst deleted file mode 100644 index bb43a7621b..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.rst +++ /dev/null @@ -1,82 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity package -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.entity.asset\_version module ---------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.asset_version - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.base module ------------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.entity.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.component module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.component - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.factory module --------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.factory - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.job module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.job - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.location module ---------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.location - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.note module ------------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.entity.note - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.project\_schema module ----------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.project_schema - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.user module ------------------------------------------------ - -.. 
automodule:: pype.vendor.ftrack_api_old.entity.user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.user.rst b/docs/source/pype.vendor.ftrack_api_old.entity.user.rst deleted file mode 100644 index c0fe6574a6..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.user.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.user module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.base.rst b/docs/source/pype.vendor.ftrack_api_old.event.base.rst deleted file mode 100644 index 74b86e3bb2..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.base module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.expression.rst b/docs/source/pype.vendor.ftrack_api_old.event.expression.rst deleted file mode 100644 index 860678797b..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.expression.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.expression module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.expression - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.hub.rst b/docs/source/pype.vendor.ftrack_api_old.event.hub.rst deleted file mode 100644 index d09d52eedf..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.hub.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.hub module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.event.hub - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.rst b/docs/source/pype.vendor.ftrack_api_old.event.rst deleted file mode 100644 index 2db27bf7f8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.rst +++ /dev/null @@ -1,50 +0,0 @@ -pype.vendor.ftrack\_api\_old.event package -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.event - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.event.base module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.expression module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.expression - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.hub module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.hub - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.subscriber module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.subscriber - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.subscription module ------------------------------------------------------- - -.. 
automodule:: pype.vendor.ftrack_api_old.event.subscription - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst b/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst deleted file mode 100644 index a9bd13aabc..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.subscriber module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.subscriber - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst b/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst deleted file mode 100644 index 423fa9a688..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.subscription module -====================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.subscription - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.exception.rst b/docs/source/pype.vendor.ftrack_api_old.exception.rst deleted file mode 100644 index 54dbeeac36..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.exception.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.exception module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.exception - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.formatter.rst b/docs/source/pype.vendor.ftrack_api_old.formatter.rst deleted file mode 100644 index 75a23eefca..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.formatter.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.formatter module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.formatter - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.inspection.rst b/docs/source/pype.vendor.ftrack_api_old.inspection.rst deleted file mode 100644 index 2b8849b3d0..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.inspection.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.inspection module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.inspection - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.logging.rst b/docs/source/pype.vendor.ftrack_api_old.logging.rst deleted file mode 100644 index a10fa10c26..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.logging.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.logging module -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.logging - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.operation.rst b/docs/source/pype.vendor.ftrack_api_old.operation.rst deleted file mode 100644 index a1d9d606f8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.operation.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.operation module -============================================= - -.. 
automodule:: pype.vendor.ftrack_api_old.operation - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.plugin.rst b/docs/source/pype.vendor.ftrack_api_old.plugin.rst deleted file mode 100644 index 0f26c705d2..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.plugin module -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.query.rst b/docs/source/pype.vendor.ftrack_api_old.query.rst deleted file mode 100644 index 5cf5aba0e4..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.query.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.query module -========================================= - -.. automodule:: pype.vendor.ftrack_api_old.query - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst b/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst deleted file mode 100644 index dccf51ea71..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer.base module -========================================================================== - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst b/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst deleted file mode 100644 index 342ecd9321..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer package -====================================================================== - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer.base module --------------------------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.rst b/docs/source/pype.vendor.ftrack_api_old.rst deleted file mode 100644 index 51d0a29357..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.rst +++ /dev/null @@ -1,126 +0,0 @@ -pype.vendor.ftrack\_api\_old package -==================================== - -.. automodule:: pype.vendor.ftrack_api_old - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.ftrack_api_old.accessor - pype.vendor.ftrack_api_old.entity - pype.vendor.ftrack_api_old.event - pype.vendor.ftrack_api_old.resource_identifier_transformer - pype.vendor.ftrack_api_old.structure - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.attribute module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.attribute - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.cache module ------------------------------------------ - -.. 
automodule:: pype.vendor.ftrack_api_old.cache - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.collection module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.collection - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.data module ----------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.data - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.exception module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.exception - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.formatter module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.formatter - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.inspection module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.inspection - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.logging module -------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.logging - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.operation module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.operation - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.plugin module ------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.query module ------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.query - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.session module -------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.session - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.symbol module ------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.symbol - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.session.rst b/docs/source/pype.vendor.ftrack_api_old.session.rst deleted file mode 100644 index beecdeb6af..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.session.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.session module -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.session - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.base.rst b/docs/source/pype.vendor.ftrack_api_old.structure.base.rst deleted file mode 100644 index 617d8aaed7..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.base module -================================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.structure.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst b/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst deleted file mode 100644 index ab6fd0997a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.entity\_id module -======================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.entity_id - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.id.rst b/docs/source/pype.vendor.ftrack_api_old.structure.id.rst deleted file mode 100644 index 6b887b7917..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.id.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.id module -================================================ - -.. automodule:: pype.vendor.ftrack_api_old.structure.id - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst b/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst deleted file mode 100644 index 8ad5fbdc11..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.origin module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.origin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.rst b/docs/source/pype.vendor.ftrack_api_old.structure.rst deleted file mode 100644 index 2402430589..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.rst +++ /dev/null @@ -1,50 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure package -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.structure.base module --------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.entity\_id module --------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.entity_id - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.id module ------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.id - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.origin module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.origin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.standard module ------------------------------------------------------- - -.. 
automodule:: pype.vendor.ftrack_api_old.structure.standard - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst b/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst deleted file mode 100644 index 800201084f..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.standard module -====================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.standard - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.symbol.rst b/docs/source/pype.vendor.ftrack_api_old.symbol.rst deleted file mode 100644 index bc358d374a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.symbol.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.symbol module -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.symbol - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.pysync.rst b/docs/source/pype.vendor.pysync.rst deleted file mode 100644 index fbe5b33fb7..0000000000 --- a/docs/source/pype.vendor.pysync.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.pysync module -========================= - -.. automodule:: pype.vendor.pysync - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.rst b/docs/source/pype.vendor.rst deleted file mode 100644 index 23aa17f7ab..0000000000 --- a/docs/source/pype.vendor.rst +++ /dev/null @@ -1,37 +0,0 @@ -pype.vendor package -=================== - -.. automodule:: pype.vendor - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.backports - pype.vendor.builtins - pype.vendor.capture_gui - pype.vendor.ftrack_api_old - -Submodules ----------- - -pype.vendor.capture module --------------------------- - -.. automodule:: pype.vendor.capture - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.pysync module -------------------------- - -.. automodule:: pype.vendor.pysync - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.version.rst b/docs/source/pype.version.rst deleted file mode 100644 index 7ec69dc423..0000000000 --- a/docs/source/pype.version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.version module -=================== - -.. automodule:: pype.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.message_window.rst b/docs/source/pype.widgets.message_window.rst deleted file mode 100644 index 60be203837..0000000000 --- a/docs/source/pype.widgets.message_window.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.message\_window module -=================================== - -.. automodule:: pype.widgets.message_window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.popup.rst b/docs/source/pype.widgets.popup.rst deleted file mode 100644 index 7186ff48de..0000000000 --- a/docs/source/pype.widgets.popup.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.popup module -========================= - -.. 
automodule:: pype.widgets.popup - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.project_settings.rst b/docs/source/pype.widgets.project_settings.rst deleted file mode 100644 index 9589cf5479..0000000000 --- a/docs/source/pype.widgets.project_settings.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.project\_settings module -===================================== - -.. automodule:: pype.widgets.project_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.rst b/docs/source/pype.widgets.rst deleted file mode 100644 index 1f09318b67..0000000000 --- a/docs/source/pype.widgets.rst +++ /dev/null @@ -1,34 +0,0 @@ -pype.widgets package -==================== - -.. automodule:: pype.widgets - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.widgets.message\_window module ------------------------------------ - -.. automodule:: pype.widgets.message_window - :members: - :undoc-members: - :show-inheritance: - -pype.widgets.popup module -------------------------- - -.. automodule:: pype.widgets.popup - :members: - :undoc-members: - :show-inheritance: - -pype.widgets.project\_settings module -------------------------------------- - -.. automodule:: pype.widgets.project_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/readme.rst b/docs/source/readme.rst index 823c0df3c8..138b88bba8 100644 --- a/docs/source/readme.rst +++ b/docs/source/readme.rst @@ -1,2 +1,6 @@ -.. title:: Pype Readme +=============== +OpenPype Readme +=============== + .. include:: ../../README.md + :parser: myst_parser.sphinx_ diff --git a/igniter/__init__.py b/igniter/__init__.py index aa1b1d209e..085a825860 100644 --- a/igniter/__init__.py +++ b/igniter/__init__.py @@ -19,21 +19,41 @@ if "OpenPypeVersion" not in sys.modules: sys.modules["OpenPypeVersion"] = OpenPypeVersion +def _get_qt_app(): + from qtpy import QtWidgets, QtCore + + app = QtWidgets.QApplication.instance() + if app is not None: + return app + + for attr_name in ( + "AA_EnableHighDpiScaling", + "AA_UseHighDpiPixmaps", + ): + attr = getattr(QtCore.Qt, attr_name, None) + if attr is not None: + QtWidgets.QApplication.setAttribute(attr) + + policy = os.getenv("QT_SCALE_FACTOR_ROUNDING_POLICY") + if ( + hasattr(QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy") + and not policy + ): + QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( + QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough + ) + + return QtWidgets.QApplication(sys.argv) + + def open_dialog(): """Show Igniter dialog.""" if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore from .install_dialog import InstallDialog - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() d = InstallDialog() d.open() @@ -47,16 +67,10 @@ def open_update_window(openpype_version): if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. 
Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore + from .update_window import UpdateWindow - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() d = UpdateWindow(version=openpype_version) d.open() @@ -71,16 +85,10 @@ def show_message_dialog(title, message): if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore + from .message_dialog import MessageDialog - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() dialog = MessageDialog(title, message) dialog.open() diff --git a/igniter/bootstrap_repos.py b/igniter/bootstrap_repos.py index 6c7c834062..e7b440f812 100644 --- a/igniter/bootstrap_repos.py +++ b/igniter/bootstrap_repos.py @@ -25,7 +25,8 @@ from .user_settings import ( from .tools import ( get_openpype_global_settings, get_openpype_path_from_settings, - get_expected_studio_version_str + get_expected_studio_version_str, + get_local_openpype_path_from_settings ) @@ -34,6 +35,29 @@ LOG_WARNING = 1 LOG_ERROR = 3 +def sanitize_long_path(path): + """Sanitize long paths (260 characters) when on Windows. + + Long paths are not capatible with ZipFile or reading a file, so we can + shorten the path to use. + + Args: + path (str): path to either directory or file. + + Returns: + str: sanitized path + """ + if platform.system().lower() != "windows": + return path + path = os.path.abspath(path) + + if path.startswith("\\\\"): + path = "\\\\?\\UNC\\" + path[2:] + else: + path = "\\\\?\\" + path + return path + + def sha256sum(filename): """Calculate sha256 for content of the file. @@ -53,6 +77,13 @@ def sha256sum(filename): return h.hexdigest() +class ZipFileLongPaths(ZipFile): + def _extract_member(self, member, targetpath, pwd): + return ZipFile._extract_member( + self, member, sanitize_long_path(targetpath), pwd + ) + + class OpenPypeVersion(semver.VersionInfo): """Class for storing information about OpenPype version. @@ -61,6 +92,8 @@ class OpenPypeVersion(semver.VersionInfo): """ path = None + + _local_openpype_path = None # this should match any string complying with https://semver.org/ _VERSION_REGEX = re.compile(r"(?P0|[1-9]\d*)\.(?P0|[1-9]\d*)\.(?P0|[1-9]\d*)(?:-(?P[a-zA-Z\d\-.]*))?(?:\+(?P[a-zA-Z\d\-.]*))?") # noqa: E501 _installed_version = None @@ -289,6 +322,23 @@ class OpenPypeVersion(semver.VersionInfo): """ return os.getenv("OPENPYPE_PATH") + @classmethod + def get_local_openpype_path(cls): + """Path to unzipped versions. + + By default it should be user appdata, but could be overridden by + settings. + """ + if cls._local_openpype_path: + return cls._local_openpype_path + + settings = get_openpype_global_settings(os.environ["OPENPYPE_MONGO"]) + data_dir = get_local_openpype_path_from_settings(settings) + if not data_dir: + data_dir = Path(user_data_dir("openpype", "pypeclub")) + cls._local_openpype_path = data_dir + return data_dir + @classmethod def openpype_path_is_set(cls): """Path to OpenPype zip directory is set.""" @@ -319,9 +369,8 @@ class OpenPypeVersion(semver.VersionInfo): list: of compatible versions available on the machine. 
""" - # DEPRECATED: backwards compatible way to look for versions in root - dir_to_search = Path(user_data_dir("openpype", "pypeclub")) - versions = OpenPypeVersion.get_versions_from_directory(dir_to_search) + dir_to_search = cls.get_local_openpype_path() + versions = cls.get_versions_from_directory(dir_to_search) return list(sorted(set(versions))) @@ -533,17 +582,15 @@ class BootstrapRepos: """ # vendor and app used to construct user data dir - self._vendor = "pypeclub" - self._app = "openpype" + self._message = message self._log = log.getLogger(str(__class__)) - self.data_dir = Path(user_data_dir(self._app, self._vendor)) + self.set_data_dir(None) self.secure_registry = OpenPypeSecureRegistry("mongodb") self.registry = OpenPypeSettingsRegistry() self.zip_filter = [".pyc", "__pycache__"] self.openpype_filter = [ - "openpype", "schema", "LICENSE" + "openpype", "LICENSE" ] - self._message = message # dummy progress reporter def empty_progress(x: int): @@ -554,6 +601,13 @@ class BootstrapRepos: progress_callback = empty_progress self._progress_callback = progress_callback + def set_data_dir(self, data_dir): + if not data_dir: + self.data_dir = Path(user_data_dir("openpype", "pypeclub")) + else: + self._print(f"overriding local folder: {data_dir}") + self.data_dir = data_dir + @staticmethod def get_version_path_from_list( version: str, version_list: list) -> Union[Path, None]: @@ -756,7 +810,7 @@ class BootstrapRepos: def _create_openpype_zip(self, zip_path: Path, openpype_path: Path) -> None: """Pack repositories and OpenPype into zip. - We are using :mod:`zipfile` instead :meth:`shutil.make_archive` + We are using :mod:`ZipFile` instead :meth:`shutil.make_archive` because we need to decide what file and directories to include in zip and what not. They are determined by :attr:`zip_filter` on file level and :attr:`openpype_filter` on top level directory in OpenPype @@ -810,7 +864,7 @@ class BootstrapRepos: checksums.append( ( - sha256sum(file.as_posix()), + sha256sum(sanitize_long_path(file.as_posix())), file.resolve().relative_to(openpype_root) ) ) @@ -934,7 +988,9 @@ class BootstrapRepos: if platform.system().lower() == "windows": file_name = file_name.replace("/", "\\") try: - current = sha256sum((path / file_name).as_posix()) + current = sha256sum( + sanitize_long_path((path / file_name).as_posix()) + ) except FileNotFoundError: return False, f"Missing file [ {file_name} ]" @@ -1246,7 +1302,7 @@ class BootstrapRepos: # extract zip there self._print("Extracting zip to destination ...") - with ZipFile(version.path, "r") as zip_ref: + with ZipFileLongPaths(version.path, "r") as zip_ref: zip_ref.extractall(destination) self._print(f"Installed as {version.path.stem}") @@ -1362,7 +1418,7 @@ class BootstrapRepos: # extract zip there self._print("extracting zip to destination ...") - with ZipFile(openpype_version.path, "r") as zip_ref: + with ZipFileLongPaths(openpype_version.path, "r") as zip_ref: self._progress_callback(75) zip_ref.extractall(destination) self._progress_callback(100) diff --git a/igniter/install_thread.py b/igniter/install_thread.py index 4723e6adfb..1d55213de7 100644 --- a/igniter/install_thread.py +++ b/igniter/install_thread.py @@ -14,7 +14,11 @@ from .bootstrap_repos import ( OpenPypeVersion ) -from .tools import validate_mongo_connection +from .tools import ( + get_openpype_global_settings, + get_local_openpype_path_from_settings, + validate_mongo_connection +) class InstallThread(QtCore.QThread): @@ -80,6 +84,15 @@ class InstallThread(QtCore.QThread): return 
os.environ["OPENPYPE_MONGO"] = self._mongo + if not validate_mongo_connection(self._mongo): + self.message.emit(f"Cannot connect to {self._mongo}", True) + self._set_result(-1) + return + + global_settings = get_openpype_global_settings(self._mongo) + data_dir = get_local_openpype_path_from_settings(global_settings) + bs.set_data_dir(data_dir) + self.message.emit( f"Detecting installed OpenPype versions in {bs.data_dir}", False) diff --git a/igniter/tools.py b/igniter/tools.py index 79235b2329..9dea203f0c 100644 --- a/igniter/tools.py +++ b/igniter/tools.py @@ -40,7 +40,7 @@ def should_add_certificate_path_to_mongo_url(mongo_url): add_certificate = False # Check if url 'ssl' or 'tls' are set to 'true' for key in ("ssl", "tls"): - if key in query and "true" in query["ssl"]: + if key in query and "true" in query[key]: add_certificate = True break @@ -73,7 +73,7 @@ def validate_mongo_connection(cnx: str) -> (bool, str): } # Add certificate path if should be required if should_add_certificate_path_to_mongo_url(cnx): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() try: client = MongoClient(cnx, **kwargs) @@ -147,7 +147,7 @@ def get_openpype_global_settings(url: str) -> dict: """ kwargs = {} if should_add_certificate_path_to_mongo_url(url): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() try: # Create mongo connection @@ -188,6 +188,26 @@ def get_openpype_path_from_settings(settings: dict) -> Union[str, None]: return next((path for path in paths if os.path.exists(path)), None) +def get_local_openpype_path_from_settings(settings: dict) -> Union[str, None]: + """Get OpenPype local path from global settings. + + Used to download and unzip OP versions. + Args: + settings (dict): settings from DB. 
+ + Returns: + path to OpenPype or None if not found + """ + path = ( + settings + .get("local_openpype_path", {}) + .get(platform.system().lower()) + ) + if path: + return Path(path) + return None + + def get_expected_studio_version_str( staging=False, global_settings=None ) -> str: diff --git a/igniter/update_thread.py b/igniter/update_thread.py index e98c95f892..0223477d0a 100644 --- a/igniter/update_thread.py +++ b/igniter/update_thread.py @@ -48,6 +48,8 @@ class UpdateThread(QtCore.QThread): """ bs = BootstrapRepos( progress_callback=self.set_progress, message=self.message) + + bs.set_data_dir(OpenPypeVersion.get_local_openpype_path()) version_path = bs.install_version(self._openpype_version) self._set_result(version_path) diff --git a/inno_setup.iss b/inno_setup.iss index 418bedbd4d..d9a41d22ee 100644 --- a/inno_setup.iss +++ b/inno_setup.iss @@ -36,7 +36,7 @@ WizardStyle=modern Name: "english"; MessagesFile: "compiler:Default.isl" [Tasks] -Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked +Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}" [InstallDelete] ; clean everything in previous installation folder @@ -53,4 +53,3 @@ Name: "{autodesktop}\{#MyAppName} {#AppVer}"; Filename: "{app}\openpype_gui.exe" [Run] Filename: "{app}\openpype_gui.exe"; Description: "{cm:LaunchProgram,OpenPype}"; Flags: nowait postinstall skipifsilent - diff --git a/openpype/__init__.py b/openpype/__init__.py index 810664707a..e6b77b1853 100644 --- a/openpype/__init__.py +++ b/openpype/__init__.py @@ -3,3 +3,5 @@ import os PACKAGE_DIR = os.path.dirname(os.path.abspath(__file__)) PLUGINS_DIR = os.path.join(PACKAGE_DIR, "plugins") + +AYON_SERVER_ENABLED = os.environ.get("USE_AYON_SERVER") == "1" diff --git a/openpype/action.py b/openpype/action.py deleted file mode 100644 index 6114c65fd4..0000000000 --- a/openpype/action.py +++ /dev/null @@ -1,135 +0,0 @@ -import warnings -import functools -import pyblish.api - - -class ActionDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", ActionDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=ActionDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.publish.get_errored_instances_from_context") -def get_errored_instances_from_context(context, plugin=None): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. 
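# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the patch): AYON_SERVER_ENABLED, added to
# openpype/__init__.py above, is a plain module-level bool, so commands that
# have no AYON equivalent can be gated with a one-line guard. The command
# below is hypothetical.
import os

AYON_SERVER_ENABLED = os.environ.get("USE_AYON_SERVER") == "1"


def mongo_only_command():
    if AYON_SERVER_ENABLED:
        raise RuntimeError("AYON does not support this command.")
    print("running against MongoDB")  # stand-in for the real body
# ---------------------------------------------------------------------------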
- """ - - from openpype.pipeline.publish import get_errored_instances_from_context - - return get_errored_instances_from_context(context, plugin=plugin) - - -@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context") -def get_errored_plugins_from_data(context): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_plugins_from_context - - return get_errored_plugins_from_context(context) - - -class RepairAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - icon = "wrench" # Icon from Awesome Icon - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) - for instance in errored_instances: - plugin.repair(instance) - - -class RepairContextAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. 
Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_plugins = get_errored_plugins_from_data(context) - - # Apply pyblish.logic to get the instances for the plug-in - if plugin in errored_plugins: - self.log.info("Attempting fix ...") - plugin.repair(context) diff --git a/openpype/cli.py b/openpype/cli.py index 54af42920d..7422f32f13 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -5,11 +5,25 @@ import sys import code import click -# import sys +from openpype import AYON_SERVER_ENABLED from .pype_commands import PypeCommands -@click.group(invoke_without_command=True) +class AliasedGroup(click.Group): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self._aliases = {} + + def set_alias(self, src_name, dst_name): + self._aliases[dst_name] = src_name + + def get_command(self, ctx, cmd_name): + if cmd_name in self._aliases: + cmd_name = self._aliases[cmd_name] + return super().get_command(ctx, cmd_name) + + +@click.group(cls=AliasedGroup, invoke_without_command=True) @click.pass_context @click.option("--use-version", expose_value=False, help="use specified version") @@ -33,7 +47,11 @@ def main(ctx): if ctx.invoked_subcommand is None: # Print help if headless mode is used - if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": + if AYON_SERVER_ENABLED: + is_headless = os.getenv("AYON_HEADLESS_MODE") == "1" + else: + is_headless = os.getenv("OPENPYPE_HEADLESS_MODE") == "1" + if is_headless: print(ctx.get_help()) sys.exit(0) else: @@ -44,6 +62,9 @@ def main(ctx): @click.option("-d", "--dev", is_flag=True, help="Settings in Dev mode") def settings(dev): """Show Pype Settings UI.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'settings' command.") PypeCommands().launch_settings_gui(dev) @@ -58,16 +79,20 @@ def tray(): @PypeCommands.add_modules -@main.group(help="Run command line arguments of OpenPype modules") +@main.group(help="Run command line arguments of OpenPype addons") @click.pass_context def module(ctx): - """Module specific commands created dynamically. + """Addon specific commands created dynamically. - These commands are generated dynamically by currently loaded addon/modules. + These commands are generated dynamically by currently loaded addons. """ pass +# Add 'addon' as alias for module +main.set_alias("module", "addon") + + @main.command() @click.option("--ftrack-url", envvar="FTRACK_SERVER", help="Ftrack server url") @@ -93,6 +118,8 @@ def eventserver(ftrack_url, on linux and window service). """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'eventserver' command.") PypeCommands().launch_eventservercli( ftrack_url, ftrack_user, @@ -117,6 +144,10 @@ def webpublisherwebserver(executable, upload_dir, host=None, port=None): Expect "pype.club" user created on Ftrack. """ + if AYON_SERVER_ENABLED: + raise RuntimeError( + "AYON does not support 'webpublisherwebserver' command." 
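# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the patch): how the AliasedGroup subclass
# above lets 'addon' resolve to the existing 'module' command group. Assumes
# click is installed and openpype is importable; 'cli' stands in for the real
# 'main' group.
import click

from openpype.cli import AliasedGroup


@click.group(cls=AliasedGroup)
def cli():
    pass


@cli.group()
def module():
    """Addon specific commands."""


cli.set_alias("module", "addon")
# Both spellings now dispatch to the same group:
#   $ cli module sync_server ...
#   $ cli addon sync_server ...
# ---------------------------------------------------------------------------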
+ ) PypeCommands().launch_webpublisher_webservercli( upload_dir=upload_dir, executable=executable, @@ -165,122 +196,10 @@ def publish(paths, targets, gui): PypeCommands.publish(list(paths), targets, gui) -@main.command() -@click.argument("path") -@click.option("-h", "--host", help="Host") -@click.option("-u", "--user", help="User email address") -@click.option("-p", "--project", help="Project") -@click.option("-t", "--targets", help="Targets", default=None, - multiple=True) -def remotepublishfromapp(project, path, host, user=None, targets=None): - """Start CLI publishing. - - Publish collects json from paths provided as an argument. - More than one path is allowed. - """ - - PypeCommands.remotepublishfromapp( - project, path, host, user, targets=targets - ) - - -@main.command() -@click.argument("path") -@click.option("-u", "--user", help="User email address") -@click.option("-p", "--project", help="Project") -@click.option("-t", "--targets", help="Targets", default=None, - multiple=True) -def remotepublish(project, path, user=None, targets=None): - """Start CLI publishing. - - Publish collects json from paths provided as an argument. - More than one path is allowed. - """ - - PypeCommands.remotepublish(project, path, user, targets=targets) - - -@main.command() -@click.option("-p", "--project", required=True, - help="name of project asset is under") -@click.option("-a", "--asset", required=True, - help="name of asset to which we want to copy textures") -@click.option("--path", required=True, - help="path where textures are found", - type=click.Path(exists=True)) -def texturecopy(project, asset, path): - """Copy specified textures to provided asset path. - - It validates if project and asset exists. Then it will use speedcopy to - copy all textures found in all directories under --path to destination - folder, determined by template texture in anatomy. I will use source - filename and automatically rise version number on directory. - - Result will be copied without directory structure so it will be flat then. - Nothing is written to database. - """ - - PypeCommands().texture_copy(project, asset, path) - - -@main.command(context_settings={"ignore_unknown_options": True}) -@click.option("--app", help="Registered application name") -@click.option("--project", help="Project name", - default=lambda: os.environ.get('AVALON_PROJECT', '')) -@click.option("--asset", help="Asset name", - default=lambda: os.environ.get('AVALON_ASSET', '')) -@click.option("--task", help="Task name", - default=lambda: os.environ.get('AVALON_TASK', '')) -@click.option("--tools", help="List of tools to add") -@click.option("--user", help="Pype user name", - default=lambda: os.environ.get('OPENPYPE_USERNAME', '')) -@click.option("-fs", - "--ftrack-server", - help="Registered application name", - default=lambda: os.environ.get('FTRACK_SERVER', '')) -@click.option("-fu", - "--ftrack-user", - help="Registered application name", - default=lambda: os.environ.get('FTRACK_API_USER', '')) -@click.option("-fk", - "--ftrack-key", - help="Registered application name", - default=lambda: os.environ.get('FTRACK_API_KEY', '')) -@click.argument('arguments', nargs=-1) -def launch(app, project, asset, task, - ftrack_server, ftrack_user, ftrack_key, tools, arguments, user): - """Launch registered application name in Pype context. - - You can define applications in pype-config toml files. Project, asset name - and task name must be provided (even if they are not used by app itself). - Optionally you can specify ftrack credentials if needed. 
- - ARGUMENTS are passed to launched application. - - """ - # TODO: this needs to switch for Settings - if ftrack_server: - os.environ["FTRACK_SERVER"] = ftrack_server - - if ftrack_server: - os.environ["FTRACK_API_USER"] = ftrack_user - - if ftrack_server: - os.environ["FTRACK_API_KEY"] = ftrack_key - - if user: - os.environ["OPENPYPE_USERNAME"] = user - - # test required - if not project or not asset or not task: - print("!!! Missing required arguments") - return - - PypeCommands().run_application(app, project, asset, task, tools, arguments) - - @main.command(context_settings={"ignore_unknown_options": True}) def projectmanager(): + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'projectmanager' command.") PypeCommands().launch_project_manager() @@ -371,19 +290,29 @@ def run(script): "--setup_only", help="Only create dbs, do not run tests", default=None) +@click.option("--mongo_url", + help="MongoDB for testing.", + default=None) def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant, - timeout, setup_only): + timeout, setup_only, mongo_url): """Run all automatic tests after proper initialization via start.py""" PypeCommands().run_tests(folder, mark, pyargs, test_data_folder, - persist, app_variant, timeout, setup_only) + persist, app_variant, timeout, setup_only, + mongo_url) -@main.command() +@main.command(help="DEPRECATED - run sync server") +@click.pass_context @click.option("-a", "--active_site", required=True, - help="Name of active stie") -def syncserver(active_site): + help="Name of active site") +def syncserver(ctx, active_site): """Run sync site server in background. + Deprecated: + This command is deprecated and will be removed in future versions. + Use '~/openpype_console module sync_server syncservice' instead. + + Details: Some Site Sync use cases need to expose site to another one. For example if majority of artists work in studio, they are not using SS at all, but if you want to expose published assets to 'studio' site @@ -397,7 +326,12 @@ def syncserver(active_site): var OPENPYPE_LOCAL_ID set to 'active_site'. """ - PypeCommands().syncserver(active_site) + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'syncserver' command.") + + from openpype.modules.sync_server.sync_server_module import ( + syncservice) + ctx.invoke(syncservice, active_site=active_site) @main.command() @@ -409,6 +343,8 @@ def repack_version(directory): recalculating file checksums. It will try to use version detected in directory name. """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'repack-version' command.") PypeCommands().repack_version(directory) @@ -420,6 +356,9 @@ def repack_version(directory): "--dbonly", help="Store only Database data", default=False, is_flag=True) def pack_project(project, dirpath, dbonly): """Create a package of project with all files and database dump.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'pack-project' command.") PypeCommands().pack_project(project, dirpath, dbonly) @@ -432,6 +371,8 @@ def pack_project(project, dirpath, dbonly): "--dbonly", help="Store only Database data", default=False, is_flag=True) def unpack_project(zipfile, root, dbonly): """Create a package of project with all files and database dump.""" + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'unpack-project' command.") PypeCommands().unpack_project(zipfile, root, dbonly) @@ -446,9 +387,17 @@ def interactive(): Executable 'openpype_gui' on Windows won't work. 
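# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the patch): click's ctx.invoke(), the
# mechanism the deprecated 'syncserver' command above uses to forward to the
# sync_server addon's own command. Both commands here are simplified
# stand-ins.
import click


@click.command()
@click.option("-a", "--active_site", required=True)
def syncservice(active_site):
    print(f"site sync running as '{active_site}'")


@click.command()
@click.option("-a", "--active_site", required=True)
@click.pass_context
def syncserver(ctx, active_site):
    # Delegate to the replacement command with the same arguments.
    ctx.invoke(syncservice, active_site=active_site)
# ---------------------------------------------------------------------------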
""" - from openpype.version import __version__ + if AYON_SERVER_ENABLED: + version = os.environ["AYON_VERSION"] + banner = ( + f"AYON launcher {version}\nPython {sys.version} on {sys.platform}" + ) + else: + from openpype.version import __version__ - banner = f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + banner = ( + f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + ) code.interact(banner) @@ -457,11 +406,13 @@ def interactive(): is_flag=True, default=False) def version(build): """Print OpenPype version.""" + if AYON_SERVER_ENABLED: + print(os.environ["AYON_VERSION"]) + return from openpype.version import __version__ from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion from pathlib import Path - import os if getattr(sys, 'frozen', False): local_version = BootstrapRepos.get_version( diff --git a/openpype/client/entities.py b/openpype/client/entities.py index adbdd7a47c..5d9654c611 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -1,1553 +1,6 @@ -"""Unclear if these will have public functions like these. +from openpype import AYON_SERVER_ENABLED -Goal is that most of functions here are called on (or with) an object -that has project name as a context (e.g. on 'ProjectEntity'?). - -+ We will need more specific functions doing very specific queries really fast. -""" - -import re -import collections - -import six -from bson.objectid import ObjectId - -from .mongo import get_project_database, get_project_connection - -PatternType = type(re.compile("")) - - -def _prepare_fields(fields, required_fields=None): - if not fields: - return None - - output = { - field: True - for field in fields - } - if "_id" not in output: - output["_id"] = True - - if required_fields: - for key in required_fields: - output[key] = True - return output - - -def convert_id(in_id): - """Helper function for conversion of id from string to ObjectId. - - Args: - in_id (Union[str, ObjectId, Any]): Entity id that should be converted - to right type for queries. - - Returns: - Union[ObjectId, Any]: Converted ids to ObjectId or in type. - """ - - if isinstance(in_id, six.string_types): - return ObjectId(in_id) - return in_id - - -def convert_ids(in_ids): - """Helper function for conversion of ids from string to ObjectId. - - Args: - in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that - should be converted to right type for queries. - - Returns: - List[ObjectId]: Converted ids to ObjectId. - """ - - _output = set() - for in_id in in_ids: - if in_id is not None: - _output.add(convert_id(in_id)) - return list(_output) - - -def get_projects(active=True, inactive=False, fields=None): - """Yield all project entity documents. - - Args: - active (Optional[bool]): Include active projects. Defaults to True. - inactive (Optional[bool]): Include inactive projects. - Defaults to False. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Yields: - dict: Project entity data which can be reduced to specified 'fields'. - None is returned if project with specified filters was not found. 
- """ - mongodb = get_project_database() - for project_name in mongodb.collection_names(): - if project_name in ("system.indexes",): - continue - project_doc = get_project( - project_name, active=active, inactive=inactive, fields=fields - ) - if project_doc is not None: - yield project_doc - - -def get_project(project_name, active=True, inactive=True, fields=None): - """Return project entity document by project name. - - Args: - project_name (str): Name of project. - active (Optional[bool]): Allow active project. Defaults to True. - inactive (Optional[bool]): Allow inactive project. Defaults to True. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Project entity data which can be reduced to - specified 'fields'. None is returned if project with specified - filters was not found. - """ - # Skip if both are disabled - if not active and not inactive: - return None - - query_filter = {"type": "project"} - # Keep query untouched if both should be available - if active and inactive: - pass - - # Add filter to keep only active - elif active: - query_filter["$or"] = [ - {"data.active": {"$exists": False}}, - {"data.active": True}, - ] - - # Add filter to keep only inactive - elif inactive: - query_filter["$or"] = [ - {"data.active": {"$exists": False}}, - {"data.active": False}, - ] - - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_whole_project(project_name): - """Receive all documents from project. - - Helper that can be used to get all document from whole project. For example - for backups etc. - - Returns: - Cursor: Query cursor as iterable which returns all documents from - project collection. - """ - - conn = get_project_connection(project_name) - return conn.find({}) - - -def get_asset_by_id(project_name, asset_id, fields=None): - """Receive asset data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_id (Union[str, ObjectId]): Asset's id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Asset entity data which can be reduced to - specified 'fields'. None is returned if asset with specified - filters was not found. - """ - - asset_id = convert_id(asset_id) - if not asset_id: - return None - - query_filter = {"type": "asset", "_id": asset_id} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_asset_by_name(project_name, asset_name, fields=None): - """Receive asset data by its name. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_name (str): Asset's name. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Asset entity data which can be reduced to - specified 'fields'. None is returned if asset with specified - filters was not found. - """ - - if not asset_name: - return None - - query_filter = {"type": "asset", "name": asset_name} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -# NOTE this could be just public function? -# - any better variable name instead of 'standard'? 
-# - same approach can be used for rest of types -def _get_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - standard=True, - archived=False, - fields=None -): - """Assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. - standard (bool): Query standard assets (type 'asset'). - archived (bool): Query archived assets (type 'archived_asset'). - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - asset_types = [] - if standard: - asset_types.append("asset") - if archived: - asset_types.append("archived_asset") - - if not asset_types: - return [] - - if len(asset_types) == 1: - query_filter = {"type": asset_types[0]} - else: - query_filter = {"type": {"$in": asset_types}} - - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - query_filter["_id"] = {"$in": asset_ids} - - if asset_names is not None: - if not asset_names: - return [] - query_filter["name"] = {"$in": list(asset_names)} - - if parent_ids is not None: - parent_ids = convert_ids(parent_ids) - if not parent_ids: - return [] - query_filter["data.visualParent"] = {"$in": parent_ids} - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - archived=False, - fields=None -): - """Assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. - archived (bool): Add also archived assets. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - return _get_assets( - project_name, - asset_ids, - asset_names, - parent_ids, - True, - archived, - fields - ) - - -def get_archived_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - fields=None -): - """Archived assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all archived assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. 
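A usage sketch for `get_assets` above; the id, name, and parent filters are combined with AND semantics. The project and asset names are hypothetical, and the import assumes the `openpype.client.entities` facade introduced by this diff keeps re-exporting the helper:

```python
# Hypothetical usage of 'get_assets'; all passed filters are ANDed.
from openpype.client.entities import get_assets

asset_docs = list(get_assets(
    "demo_Commercial",                        # hypothetical project
    asset_names=["characterA", "characterB"],
    fields=["name", "data.visualParent"],     # projection, '_id' implied
))
for asset_doc in asset_docs:
    print(asset_doc["_id"], asset_doc["name"])
```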
- fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - return _get_assets( - project_name, asset_ids, asset_names, parent_ids, False, True, fields - ) - - -def get_asset_ids_with_subsets(project_name, asset_ids=None): - """Find out which assets have existing subsets. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Look only for entered - asset ids. - - Returns: - Iterable[ObjectId]: Asset ids that have existing subsets. - """ - - subset_query = { - "type": "subset" - } - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - subset_query["parent"] = {"$in": asset_ids} - - conn = get_project_connection(project_name) - result = conn.aggregate([ - { - "$match": subset_query - }, - { - "$group": { - "_id": "$parent", - "count": {"$sum": 1} - } - } - ]) - asset_ids_with_subsets = [] - for item in result: - asset_id = item["_id"] - count = item["count"] - if count > 0: - asset_ids_with_subsets.append(asset_id) - return asset_ids_with_subsets - - -def get_subset_by_id(project_name, subset_id, fields=None): - """Single subset entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Id of subset which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Subset entity data which can be reduced to - specified 'fields'. None is returned if subset with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - query_filters = {"type": "subset", "_id": subset_id} - conn = get_project_connection(project_name) - return conn.find_one(query_filters, _prepare_fields(fields)) - - -def get_subset_by_name(project_name, subset_name, asset_id, fields=None): - """Single subset entity data by its name and its version id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_name (str): Name of subset. - asset_id (Union[str, ObjectId]): Id of parent asset. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Subset entity data which can be reduced to - specified 'fields'. None is returned if subset with specified - filters was not found. - """ - if not subset_name: - return None - - asset_id = convert_id(asset_id) - if not asset_id: - return None - - query_filters = { - "type": "subset", - "name": subset_name, - "parent": asset_id - } - conn = get_project_connection(project_name) - return conn.find_one(query_filters, _prepare_fields(fields)) - - -def get_subsets( - project_name, - subset_ids=None, - subset_names=None, - asset_ids=None, - names_by_asset_ids=None, - archived=False, - fields=None -): - """Subset entities data from one project filtered by entered filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should be - queried. Filter ignored if 'None' is passed. - subset_names (Iterable[str]): Subset names that should be queried. 
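`get_asset_ids_with_subsets` above leans on a single `$group` stage rather than one query per asset. A sketch of the same pipeline against a raw PyMongo collection; the server URL, database name ("avalon"), and project collection are assumptions:

```python
# Same aggregation as 'get_asset_ids_with_subsets', written against a raw
# PyMongo collection. URL, database and collection names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["avalon"]["demo_Commercial"]  # one collection per project

pipeline = [
    {"$match": {"type": "subset"}},
    # One output document per distinct parent asset id, with the number
    # of subsets found under it.
    {"$group": {"_id": "$parent", "count": {"$sum": 1}}},
]
asset_ids = [
    item["_id"] for item in coll.aggregate(pipeline) if item["count"] > 0
]
```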
- Filter ignored if 'None' is passed. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids under which - should look for the subsets. Filter ignored if 'None' is passed. - names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering - using asset ids and list of subset names under the asset. - archived (bool): Look for archived subsets too. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching subsets. - """ - - subset_types = ["subset"] - if archived: - subset_types.append("archived_subset") - - if len(subset_types) == 1: - query_filter = {"type": subset_types[0]} - else: - query_filter = {"type": {"$in": subset_types}} - - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - query_filter["parent"] = {"$in": asset_ids} - - if subset_ids is not None: - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return [] - query_filter["_id"] = {"$in": subset_ids} - - if subset_names is not None: - if not subset_names: - return [] - query_filter["name"] = {"$in": list(subset_names)} - - if names_by_asset_ids is not None: - or_query = [] - for asset_id, names in names_by_asset_ids.items(): - if asset_id and names: - or_query.append({ - "parent": convert_id(asset_id), - "name": {"$in": list(names)} - }) - if not or_query: - return [] - query_filter["$or"] = or_query - - conn = get_project_connection(project_name) - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_subset_families(project_name, subset_ids=None): - """Set of main families of subsets. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should - be queried. All subsets from project are used if 'None' is passed. - - Returns: - set[str]: Main families of matching subsets. - """ - - subset_filter = { - "type": "subset" - } - if subset_ids is not None: - if not subset_ids: - return set() - subset_filter["_id"] = {"$in": list(subset_ids)} - - conn = get_project_connection(project_name) - result = list(conn.aggregate([ - {"$match": subset_filter}, - {"$project": { - "family": {"$arrayElemAt": ["$data.families", 0]} - }}, - {"$group": { - "_id": "family_group", - "families": {"$addToSet": "$family"} - }} - ])) - if result: - return set(result[0]["families"]) - return set() - - -def get_version_by_id(project_name, version_id, fields=None): - """Single version entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - version_id = convert_id(version_id) - if not version_id: - return None - - query_filter = { - "type": {"$in": ["version", "hero_version"]}, - "_id": version_id - } - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_version_by_name(project_name, version, subset_id, fields=None): - """Single version entity data by its name and subset id. - - Args: - project_name (str): Name of project where to look for queried entities. 
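The `names_by_asset_ids` parameter of `get_subsets` above expands into an `$or` of per-asset name lists. A usage sketch with hypothetical ids and names:

```python
# Hypothetical ids/names; each pair becomes one '$or' branch of the query.
from bson.objectid import ObjectId
from openpype.client.entities import get_subsets

names_by_asset_ids = {
    ObjectId("5f3e9c2b8d1e4a0001aaaaaa"): ["modelMain", "rigMain"],
    ObjectId("5f3e9c2b8d1e4a0001bbbbbb"): ["lookdevMain"],
}
for subset_doc in get_subsets(
    "demo_Commercial",
    names_by_asset_ids=names_by_asset_ids,
    fields=["name", "parent"],
):
    print(subset_doc["parent"], subset_doc["name"])
```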
- version (int): name of version entity (its version). - subset_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - conn = get_project_connection(project_name) - query_filter = { - "type": "version", - "parent": subset_id, - "name": version - } - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def version_is_latest(project_name, version_id): - """Is version the latest from its subset. - - Note: - Hero versions are considered as latest. - - Todo: - Maybe raise exception when version was not found? - - Args: - project_name (str):Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Version id which is checked. - - Returns: - bool: True if is latest version from subset else False. - """ - - version_id = convert_id(version_id) - if not version_id: - return False - version_doc = get_version_by_id( - project_name, version_id, fields=["_id", "type", "parent"] - ) - # What to do when version is not found? - if not version_doc: - return False - - if version_doc["type"] == "hero_version": - return True - - last_version = get_last_version_by_subset_id( - project_name, version_doc["parent"], fields=["_id"] - ) - return last_version["_id"] == version_id - - -def _get_versions( - project_name, - subset_ids=None, - version_ids=None, - versions=None, - standard=True, - hero=False, - fields=None -): - version_types = [] - if standard: - version_types.append("version") - - if hero: - version_types.append("hero_version") - - if not version_types: - return [] - elif len(version_types) == 1: - query_filter = {"type": version_types[0]} - else: - query_filter = {"type": {"$in": version_types}} - - if subset_ids is not None: - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return [] - query_filter["parent"] = {"$in": subset_ids} - - if version_ids is not None: - version_ids = convert_ids(version_ids) - if not version_ids: - return [] - query_filter["_id"] = {"$in": version_ids} - - if versions is not None: - versions = list(versions) - if not versions: - return [] - - if len(versions) == 1: - query_filter["name"] = versions[0] - else: - query_filter["name"] = {"$in": versions} - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_versions( - project_name, - version_ids=None, - subset_ids=None, - versions=None, - hero=False, - fields=None -): - """Version entities data from one project filtered by entered filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - version_ids (Iterable[Union[str, ObjectId]]): Version ids that will - be queried. Filter ignored if 'None' is passed. - subset_ids (Iterable[str]): Subset ids that will be queried. - Filter ignored if 'None' is passed. - versions (Iterable[int]): Version names (as integers). - Filter ignored if 'None' is passed. - hero (bool): Look also for hero versions. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching versions. 
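A usage sketch for `get_versions` above: a single integer in `versions` becomes a plain equality filter, several become an `$in`. All identifiers here are placeholders:

```python
# Hypothetical query: concrete versions 1-3 of one subset.
from bson.objectid import ObjectId
from openpype.client.entities import get_versions

subset_id = ObjectId("5f3e9c2b8d1e4a0001cccccc")  # placeholder
for version_doc in get_versions(
    "demo_Commercial",
    subset_ids=[subset_id],
    versions=[1, 2, 3],          # one value -> equality, several -> '$in'
    fields=["name"],
):
    print(version_doc["name"])
```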
- """ - - return _get_versions( - project_name, - subset_ids, - version_ids, - versions, - standard=True, - hero=hero, - fields=fields - ) - - -def get_hero_version_by_subset_id(project_name, subset_id, fields=None): - """Hero version by subset id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Subset id under which - is hero version. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Hero version entity data which can be reduced to - specified 'fields'. None is returned if hero version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - versions = list(_get_versions( - project_name, - subset_ids=[subset_id], - standard=False, - hero=True, - fields=fields - )) - if versions: - return versions[0] - return None - - -def get_hero_version_by_id(project_name, version_id, fields=None): - """Hero version by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Hero version id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Hero version entity data which can be reduced to - specified 'fields'. None is returned if hero version with specified - filters was not found. - """ - - version_id = convert_id(version_id) - if not version_id: - return None - - versions = list(_get_versions( - project_name, - version_ids=[version_id], - standard=False, - hero=True, - fields=fields - )) - if versions: - return versions[0] - return None - - -def get_hero_versions( - project_name, - subset_ids=None, - version_ids=None, - fields=None -): - """Hero version entities data from one project filtered by entered filters. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids for which - should look for hero versions. Filter ignored if 'None' is passed. - version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter - ignored if 'None' is passed. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor|list: Iterable yielding hero versions matching passed filters. - """ - - return _get_versions( - project_name, - subset_ids, - version_ids, - standard=False, - hero=True, - fields=fields - ) - - -def get_output_link_versions(project_name, version_id, fields=None): - """Versions where passed version was used as input. - - Question: - Not 100% sure about the usage of the function so the name and docstring - maybe does not match what it does? - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Version id which can be used - as input link for other versions. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Iterable: Iterable cursor yielding versions that are used as input - links for passed version. - """ - - version_id = convert_id(version_id) - if not version_id: - return [] - - conn = get_project_connection(project_name) - # Does make sense to look for hero versions? 
- query_filter = { - "type": "version", - "data.inputLinks.id": version_id - } - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_last_versions(project_name, subset_ids, active=None, fields=None): - """Latest versions for entered subset_ids. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids. - active (Optional[bool]): If True only active versions are returned. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - dict[ObjectId, int]: Key is subset id and value is last version name. - """ - - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return {} - - if fields is not None: - fields = list(fields) - if not fields: - return {} - - # Avoid double query if only name and _id are requested - name_needed = False - limit_query = False - if fields: - fields_s = set(fields) - if "name" in fields_s: - name_needed = True - fields_s.remove("name") - - for field in ("_id", "parent"): - if field in fields_s: - fields_s.remove(field) - limit_query = len(fields_s) == 0 - - group_item = { - "_id": "$parent", - "_version_id": {"$last": "$_id"} - } - # Add name if name is needed (only for limit query) - if name_needed: - group_item["name"] = {"$last": "$name"} - - aggregate_filter = { - "type": "version", - "parent": {"$in": subset_ids} - } - if active is False: - aggregate_filter["data.active"] = active - elif active is True: - aggregate_filter["$or"] = [ - {"data.active": {"$exists": 0}}, - {"data.active": active}, - ] - - aggregation_pipeline = [ - # Find all versions of those subsets - {"$match": aggregate_filter}, - # Sorting versions all together - {"$sort": {"name": 1}}, - # Group them by "parent", but only take the last - {"$group": group_item} - ] - - conn = get_project_connection(project_name) - aggregate_result = conn.aggregate(aggregation_pipeline) - if limit_query: - output = {} - for item in aggregate_result: - subset_id = item["_id"] - item_data = {"_id": item["_version_id"], "parent": subset_id} - if name_needed: - item_data["name"] = item["name"] - output[subset_id] = item_data - return output - - version_ids = [ - doc["_version_id"] - for doc in aggregate_result - ] - - fields = _prepare_fields(fields, ["parent"]) - - version_docs = get_versions( - project_name, version_ids=version_ids, fields=fields - ) - - return { - version_doc["parent"]: version_doc - for version_doc in version_docs - } - - -def get_last_version_by_subset_id(project_name, subset_id, fields=None): - """Last version for passed subset id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - last_versions = get_last_versions( - project_name, subset_ids=[subset_id], fields=fields - ) - return last_versions.get(subset_id) - - -def get_last_version_by_subset_name( - project_name, subset_name, asset_id=None, asset_name=None, fields=None -): - """Last version for passed subset name under asset id/name. - - It is required to pass 'asset_id' or 'asset_name'. 
Asset id is recommended - if is available. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_name (str): Name of subset. - asset_id (Union[str, ObjectId]): Asset id which is parent of passed - subset name. - asset_name (str): Asset name which is parent of passed subset name. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - if not asset_id and not asset_name: - return None - - if not asset_id: - asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) - if not asset_doc: - return None - asset_id = asset_doc["_id"] - subset_doc = get_subset_by_name( - project_name, subset_name, asset_id, fields=["_id"] - ) - if not subset_doc: - return None - return get_last_version_by_subset_id( - project_name, subset_doc["_id"], fields=fields - ) - - -def get_representation_by_id(project_name, representation_id, fields=None): - """Representation entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - representation_id (Union[str, ObjectId]): Representation id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Representation entity data which can be reduced to - specified 'fields'. None is returned if representation with - specified filters was not found. - """ - - if not representation_id: - return None - - repre_types = ["representation", "archived_representation"] - query_filter = { - "type": {"$in": repre_types} - } - if representation_id is not None: - query_filter["_id"] = convert_id(representation_id) - - conn = get_project_connection(project_name) - - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_representation_by_name( - project_name, representation_name, version_id, fields=None -): - """Representation entity data by its name and its version id. - - Args: - project_name (str): Name of project where to look for queried entities. - representation_name (str): Representation name. - version_id (Union[str, ObjectId]): Id of parent version entity. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[dict[str, Any], None]: Representation entity data which can be - reduced to specified 'fields'. None is returned if representation - with specified filters was not found. 
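A usage sketch of the lookup chain that `get_last_version_by_subset_name` above implements: asset name to asset id, then subset, then last version. Values are hypothetical:

```python
# Hypothetical names; 'asset_name' is only consulted because no
# 'asset_id' is passed.
from openpype.client.entities import get_last_version_by_subset_name

version_doc = get_last_version_by_subset_name(
    "demo_Commercial",
    subset_name="modelMain",
    asset_name="characterA",
    fields=["name", "data.time"],
)
if version_doc is None:
    print("No published version found")
```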
- """ - - version_id = convert_id(version_id) - if not version_id or not representation_name: - return None - repre_types = ["representation", "archived_representations"] - query_filter = { - "type": {"$in": repre_types}, - "name": representation_name, - "parent": version_id - } - - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def _flatten_dict(data): - flatten_queue = collections.deque() - flatten_queue.append(data) - output = {} - while flatten_queue: - item = flatten_queue.popleft() - for key, value in item.items(): - if not isinstance(value, dict): - output[key] = value - continue - - tmp = {} - for subkey, subvalue in value.items(): - new_key = "{}.{}".format(key, subkey) - tmp[new_key] = subvalue - flatten_queue.append(tmp) - return output - - -def _regex_filters(filters): - output = [] - for key, value in filters.items(): - regexes = [] - a_values = [] - if isinstance(value, PatternType): - regexes.append(value) - elif isinstance(value, (list, tuple, set)): - for item in value: - if isinstance(item, PatternType): - regexes.append(item) - else: - a_values.append(item) - else: - a_values.append(value) - - key_filters = [] - if len(a_values) == 1: - key_filters.append({key: a_values[0]}) - elif a_values: - key_filters.append({key: {"$in": a_values}}) - - for regex in regexes: - key_filters.append({key: {"$regex": regex}}) - - if len(key_filters) == 1: - output.append(key_filters[0]) - else: - output.append({"$or": key_filters}) - - return output - - -def _get_representations( - project_name, - representation_ids, - representation_names, - version_ids, - context_filters, - names_by_version_ids, - standard, - archived, - fields -): - default_output = [] - repre_types = [] - if standard: - repre_types.append("representation") - if archived: - repre_types.append("archived_representation") - - if not repre_types: - return default_output - - if len(repre_types) == 1: - query_filter = {"type": repre_types[0]} - else: - query_filter = {"type": {"$in": repre_types}} - - if representation_ids is not None: - representation_ids = convert_ids(representation_ids) - if not representation_ids: - return default_output - query_filter["_id"] = {"$in": representation_ids} - - if representation_names is not None: - if not representation_names: - return default_output - query_filter["name"] = {"$in": list(representation_names)} - - if version_ids is not None: - version_ids = convert_ids(version_ids) - if not version_ids: - return default_output - query_filter["parent"] = {"$in": version_ids} - - or_queries = [] - if names_by_version_ids is not None: - or_query = [] - for version_id, names in names_by_version_ids.items(): - if version_id and names: - or_query.append({ - "parent": convert_id(version_id), - "name": {"$in": list(names)} - }) - if not or_query: - return default_output - or_queries.append(or_query) - - if context_filters is not None: - if not context_filters: - return [] - _flatten_filters = _flatten_dict(context_filters) - flatten_filters = {} - for key, value in _flatten_filters.items(): - if not key.startswith("context"): - key = "context.{}".format(key) - flatten_filters[key] = value - - for item in _regex_filters(flatten_filters): - for key, value in item.items(): - if key != "$or": - query_filter[key] = value - - elif value: - or_queries.append(value) - - if len(or_queries) == 1: - query_filter["$or"] = or_queries[0] - elif or_queries: - and_query = [] - for or_query in or_queries: - if isinstance(or_query, list): - or_query 
= {"$or": or_query} - and_query.append(or_query) - query_filter["$and"] = and_query - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_representations( - project_name, - representation_ids=None, - representation_names=None, - version_ids=None, - context_filters=None, - names_by_version_ids=None, - archived=False, - standard=True, - fields=None -): - """Representation entities data from one project filtered by filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - representation_ids (Iterable[Union[str, ObjectId]]): Representation ids - used as filter. Filter ignored if 'None' is passed. - representation_names (Iterable[str]): Representations names used - as filter. Filter ignored if 'None' is passed. - version_ids (Iterable[str]): Subset ids used as parent filter. Filter - ignored if 'None' is passed. - context_filters (Dict[str, List[str, PatternType]]): Filter by - representation context fields. - names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering - using version ids and list of names under the version. - archived (bool): Output will also contain archived representations. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching representations. - """ - - return _get_representations( - project_name=project_name, - representation_ids=representation_ids, - representation_names=representation_names, - version_ids=version_ids, - context_filters=context_filters, - names_by_version_ids=names_by_version_ids, - standard=standard, - archived=archived, - fields=fields - ) - - -def get_archived_representations( - project_name, - representation_ids=None, - representation_names=None, - version_ids=None, - context_filters=None, - names_by_version_ids=None, - fields=None -): - """Archived representation entities data from project with applied filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - representation_ids (Iterable[Union[str, ObjectId]]): Representation ids - used as filter. Filter ignored if 'None' is passed. - representation_names (Iterable[str]): Representations names used - as filter. Filter ignored if 'None' is passed. - version_ids (Iterable[str]): Subset ids used as parent filter. Filter - ignored if 'None' is passed. - context_filters (Dict[str, List[str, PatternType]]): Filter by - representation context fields. - names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering - using version ids and list of names under the version. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching representations. - """ - - return _get_representations( - project_name=project_name, - representation_ids=representation_ids, - representation_names=representation_names, - version_ids=version_ids, - context_filters=context_filters, - names_by_version_ids=names_by_version_ids, - standard=False, - archived=True, - fields=fields - ) - - -def get_representations_parents(project_name, representations): - """Prepare parents of representation entities. - - Each item of returned dictionary contains version, subset, asset - and project in that order. 
- - Args: - project_name (str): Name of project where to look for queried entities. - representations (List[dict]): Representation entities with at least - '_id' and 'parent' keys. - - Returns: - dict[ObjectId, tuple]: Parents by representation id. - """ - - repre_docs_by_version_id = collections.defaultdict(list) - version_docs_by_version_id = {} - version_docs_by_subset_id = collections.defaultdict(list) - subset_docs_by_subset_id = {} - subset_docs_by_asset_id = collections.defaultdict(list) - output = {} - for repre_doc in representations: - repre_id = repre_doc["_id"] - version_id = repre_doc["parent"] - output[repre_id] = (None, None, None, None) - repre_docs_by_version_id[version_id].append(repre_doc) - - version_docs = get_versions( - project_name, - version_ids=repre_docs_by_version_id.keys(), - hero=True - ) - for version_doc in version_docs: - version_id = version_doc["_id"] - subset_id = version_doc["parent"] - version_docs_by_version_id[version_id] = version_doc - version_docs_by_subset_id[subset_id].append(version_doc) - - subset_docs = get_subsets( - project_name, subset_ids=version_docs_by_subset_id.keys() - ) - for subset_doc in subset_docs: - subset_id = subset_doc["_id"] - asset_id = subset_doc["parent"] - subset_docs_by_subset_id[subset_id] = subset_doc - subset_docs_by_asset_id[asset_id].append(subset_doc) - - asset_docs = get_assets( - project_name, asset_ids=subset_docs_by_asset_id.keys() - ) - asset_docs_by_id = { - asset_doc["_id"]: asset_doc - for asset_doc in asset_docs - } - - project_doc = get_project(project_name) - - for version_id, repre_docs in repre_docs_by_version_id.items(): - asset_doc = None - subset_doc = None - version_doc = version_docs_by_version_id.get(version_id) - if version_doc: - subset_id = version_doc["parent"] - subset_doc = subset_docs_by_subset_id.get(subset_id) - if subset_doc: - asset_id = subset_doc["parent"] - asset_doc = asset_docs_by_id.get(asset_id) - - for repre_doc in repre_docs: - repre_id = repre_doc["_id"] - output[repre_id] = ( - version_doc, subset_doc, asset_doc, project_doc - ) - return output - - -def get_representation_parents(project_name, representation): - """Prepare parents of representation entity. - - Each item of returned dictionary contains version, subset, asset - and project in that order. - - Args: - project_name (str): Name of project where to look for queried entities. - representation (dict): Representation entities with at least - '_id' and 'parent' keys. - - Returns: - dict[ObjectId, tuple]: Parents by representation id. - """ - - if not representation: - return None - - repre_id = representation["_id"] - parents_by_repre_id = get_representations_parents( - project_name, [representation] - ) - return parents_by_repre_id[repre_id] - - -def get_thumbnail_id_from_source(project_name, src_type, src_id): - """Receive thumbnail id from source entity. - - Args: - project_name (str): Name of project where to look for queried entities. - src_type (str): Type of source entity ('asset', 'version'). - src_id (Union[str, ObjectId]): Id of source entity. - - Returns: - Union[ObjectId, None]: Thumbnail id assigned to entity. If Source - entity does not have any thumbnail id assigned. 
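A usage sketch for `get_representations_parents` above: one batched call resolves (version, subset, asset, project) for many representations instead of four queries per representation. The project name is hypothetical:

```python
# Hypothetical project; the input docs only need '_id' and 'parent'.
from openpype.client.entities import (
    get_representations,
    get_representations_parents,
)

project_name = "demo_Commercial"
repre_docs = list(get_representations(
    project_name, fields=["_id", "parent"]
))
parents = get_representations_parents(project_name, repre_docs)
for repre_doc in repre_docs:
    version_doc, subset_doc, asset_doc, project_doc = parents[repre_doc["_id"]]
```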
- """ - - if not src_type or not src_id: - return None - - query_filter = {"_id": convert_id(src_id)} - - conn = get_project_connection(project_name) - src_doc = conn.find_one(query_filter, {"data.thumbnail_id"}) - if src_doc: - return src_doc.get("data", {}).get("thumbnail_id") - return None - - -def get_thumbnails(project_name, thumbnail_ids, fields=None): - """Receive thumbnails entity data. - - Thumbnail entity can be used to receive binary content of thumbnail based - on its content and ThumbnailResolvers. - - Args: - project_name (str): Name of project where to look for queried entities. - thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail - entities. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - cursor: Cursor of queried documents. - """ - - if thumbnail_ids: - thumbnail_ids = convert_ids(thumbnail_ids) - - if not thumbnail_ids: - return [] - query_filter = { - "type": "thumbnail", - "_id": {"$in": thumbnail_ids} - } - conn = get_project_connection(project_name) - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_thumbnail(project_name, thumbnail_id, fields=None): - """Receive thumbnail entity data. - - Args: - project_name (str): Name of project where to look for queried entities. - thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Thumbnail entity data which can be reduced to - specified 'fields'.None is returned if thumbnail with specified - filters was not found. - """ - - if not thumbnail_id: - return None - query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_workfile_info( - project_name, asset_id, task_name, filename, fields=None -): - """Document with workfile information. - - Warning: - Query is based on filename and context which does not meant it will - find always right and expected result. Information have limited usage - and is not recommended to use it as source information about workfile. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_id (Union[str, ObjectId]): Id of asset entity. - task_name (str): Task name on asset. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Workfile entity data which can be reduced to - specified 'fields'.None is returned if workfile with specified - filters was not found. 
- """ - - if not asset_id or not task_name or not filename: - return None - - query_filter = { - "type": "workfile", - "parent": convert_id(asset_id), - "task_name": task_name, - "filename": filename - } - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -""" -## Custom data storage: -- Settings - OP settings overrides and local settings -- Logging - logs from Logger -- Webpublisher - jobs -- Ftrack - events -- Maya - Shaders - - openpype/hosts/maya/api/shader_definition_editor.py - - openpype/hosts/maya/plugins/publish/validate_model_name.py - -## Global publish plugins -- openpype/plugins/publish/extract_hierarchy_avalon.py - Create: - - asset - Update: - - asset - -## Lib -- openpype/lib/avalon_context.py - Update: - - workfile data -- openpype/lib/project_backpack.py - Update: - - project -""" +if not AYON_SERVER_ENABLED: + from .mongo.entities import * +else: + from .server.entities import * diff --git a/openpype/client/entity_links.py b/openpype/client/entity_links.py index b74b4ce7f6..e18970de90 100644 --- a/openpype/client/entity_links.py +++ b/openpype/client/entity_links.py @@ -1,243 +1,6 @@ -from .mongo import get_project_connection -from .entities import ( - get_assets, - get_asset_by_id, - get_version_by_id, - get_representation_by_id, - convert_id, -) +from openpype import AYON_SERVER_ENABLED - -def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None): - """Extract linked asset ids from asset document. - - One of asset document or asset id must be passed. - - Note: - Asset links now works only from asset to assets. - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - List[Union[ObjectId, str]]: Asset ids of input links. - """ - - output = [] - if not asset_doc and not asset_id: - return output - - if not asset_doc: - asset_doc = get_asset_by_id( - project_name, asset_id, fields=["data.inputLinks"] - ) - - input_links = asset_doc["data"].get("inputLinks") - if not input_links: - return output - - for item in input_links: - # Backwards compatibility for "_id" key which was replaced with - # "id" - if "_id" in item: - link_id = item["_id"] - else: - link_id = item["id"] - output.append(link_id) - return output - - -def get_linked_assets( - project_name, asset_doc=None, asset_id=None, fields=None -): - """Return linked assets based on passed asset document. - - One of asset document or asset id must be passed. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_doc (Dict[str, Any]): Asset document from database. - asset_id (Union[ObjectId, str]): Asset id. Can be used instead of - asset document. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. - - Returns: - List[Dict[str, Any]]: Asset documents of input links for passed - asset doc. - """ - - if not asset_doc: - if not asset_id: - return [] - asset_doc = get_asset_by_id( - project_name, - asset_id, - fields=["data.inputLinks"] - ) - if not asset_doc: - return [] - - link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc) - if not link_ids: - return [] - - return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) - - -def get_linked_representation_id( - project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None -): - """Returns list of linked ids of particular type (if provided). - - One of representation document or representation id must be passed. 
- Note: - Representation links now works only from representation through version - back to representations. - - Args: - project_name (str): Name of project where look for links. - repre_doc (Dict[str, Any]): Representation document. - repre_id (Union[ObjectId, str]): Representation id. - link_type (str): Type of link (e.g. 'reference', ...). - max_depth (int): Limit recursion level. Default: 0 - - Returns: - List[ObjectId] Linked representation ids. - """ - - if repre_doc: - repre_id = repre_doc["_id"] - - if repre_id: - repre_id = convert_id(repre_id) - - if not repre_id and not repre_doc: - return [] - - version_id = None - if repre_doc: - version_id = repre_doc.get("parent") - - if not version_id: - repre_doc = get_representation_by_id( - project_name, repre_id, fields=["parent"] - ) - version_id = repre_doc["parent"] - - if not version_id: - return [] - - version_doc = get_version_by_id( - project_name, version_id, fields=["type", "version_id"] - ) - if version_doc["type"] == "hero_version": - version_id = version_doc["version_id"] - - if max_depth is None: - max_depth = 0 - - match = { - "_id": version_id, - # Links are not stored to hero versions at this moment so filter - # is limited to just versions - "type": "version" - } - - graph_lookup = { - "from": project_name, - "startWith": "$data.inputLinks.id", - "connectFromField": "data.inputLinks.id", - "connectToField": "_id", - "as": "outputs_recursive", - "depthField": "depth" - } - if max_depth != 0: - # We offset by -1 since 0 basically means no recursion - # but the recursion only happens after the initial lookup - # for outputs. - graph_lookup["maxDepth"] = max_depth - 1 - - query_pipeline = [ - # Match - {"$match": match}, - # Recursive graph lookup for inputs - {"$graphLookup": graph_lookup} - ] - conn = get_project_connection(project_name) - result = conn.aggregate(query_pipeline) - referenced_version_ids = _process_referenced_pipeline_result( - result, link_type - ) - if not referenced_version_ids: - return [] - - ref_ids = conn.distinct( - "_id", - filter={ - "parent": {"$in": list(referenced_version_ids)}, - "type": "representation" - } - ) - - return list(ref_ids) - - -def _process_referenced_pipeline_result(result, link_type): - """Filters result from pipeline for particular link_type. - - Pipeline cannot use link_type directly in a query. 
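The `$graphLookup` recursion above is the subtle part of `get_linked_representation_id`: the stage always performs the first lookup, so a requested depth of N maps to `maxDepth` N - 1. A sketch of the pipeline against a raw PyMongo collection; the URL, database, and ids are assumptions, while the project collection doubling as the `from` target matches the code above:

```python
# Raw-PyMongo sketch of the recursive link lookup above. URL, database
# and ids are assumptions; the project collection is also the 'from'
# target, matching the code in this diff.
from bson.objectid import ObjectId
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["avalon"]["demo_Commercial"]
version_id = ObjectId("5f3e9c2b8d1e4a0001ffffff")  # starting version

max_depth = 2  # follow links up to two hops from the starting version
pipeline = [
    {"$match": {"_id": version_id, "type": "version"}},
    {"$graphLookup": {
        "from": "demo_Commercial",
        "startWith": "$data.inputLinks.id",
        "connectFromField": "data.inputLinks.id",
        "connectToField": "_id",
        "as": "outputs_recursive",
        "depthField": "depth",
        # The stage always does the first lookup, so depth N -> maxDepth N-1.
        "maxDepth": max_depth - 1,
    }},
]
result = list(coll.aggregate(pipeline))
```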
- - Returns: - (list) - """ - - referenced_version_ids = set() - correctly_linked_ids = set() - for item in result: - input_links = item.get("data", {}).get("inputLinks") - if not input_links: - continue - - _filter_input_links( - input_links, - link_type, - correctly_linked_ids - ) - - # outputs_recursive in random order, sort by depth - outputs_recursive = item.get("outputs_recursive") - if not outputs_recursive: - continue - - for output in sorted(outputs_recursive, key=lambda o: o["depth"]): - output_links = output.get("data", {}).get("inputLinks") - if not output_links and output["type"] != "hero_version": - continue - - # Leaf - if output["_id"] not in correctly_linked_ids: - continue - - _filter_input_links( - output_links, - link_type, - correctly_linked_ids - ) - - referenced_version_ids.add(output["_id"]) - - return referenced_version_ids - - -def _filter_input_links(input_links, link_type, correctly_linked_ids): - if not input_links: # to handle hero versions - return - - for input_link in input_links: - if link_type and input_link["type"] != link_type: - continue - - link_id = input_link.get("id") or input_link.get("_id") - if link_id is not None: - correctly_linked_ids.add(link_id) +if not AYON_SERVER_ENABLED: + from .mongo.entity_links import * +else: + from .server.entity_links import * diff --git a/openpype/client/mongo/__init__.py b/openpype/client/mongo/__init__.py new file mode 100644 index 0000000000..9f62d7a9cf --- /dev/null +++ b/openpype/client/mongo/__init__.py @@ -0,0 +1,26 @@ +from .mongo import ( + MongoEnvNotSet, + get_default_components, + should_add_certificate_path_to_mongo_url, + validate_mongo_connection, + OpenPypeMongoConnection, + get_project_database, + get_project_connection, + load_json_file, + replace_project_documents, + store_project_documents, +) + + +__all__ = ( + "MongoEnvNotSet", + "get_default_components", + "should_add_certificate_path_to_mongo_url", + "validate_mongo_connection", + "OpenPypeMongoConnection", + "get_project_database", + "get_project_connection", + "load_json_file", + "replace_project_documents", + "store_project_documents", +) diff --git a/openpype/client/mongo/entities.py b/openpype/client/mongo/entities.py new file mode 100644 index 0000000000..260fde4594 --- /dev/null +++ b/openpype/client/mongo/entities.py @@ -0,0 +1,1555 @@ +"""Unclear if these will have public functions like these. + +Goal is that most of functions here are called on (or with) an object +that has project name as a context (e.g. on 'ProjectEntity'?). + ++ We will need more specific functions doing very specific queries really fast. +""" + +import re +import collections + +import six +from bson.objectid import ObjectId + +from .mongo import get_project_database, get_project_connection + +PatternType = type(re.compile("")) + + +def _prepare_fields(fields, required_fields=None): + if not fields: + return None + + output = { + field: True + for field in fields + } + if "_id" not in output: + output["_id"] = True + + if required_fields: + for key in required_fields: + output[key] = True + return output + + +def convert_id(in_id): + """Helper function for conversion of id from string to ObjectId. + + Args: + in_id (Union[str, ObjectId, Any]): Entity id that should be converted + to right type for queries. + + Returns: + Union[ObjectId, Any]: Converted ids to ObjectId or in type. 
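The new `openpype/client/mongo/__init__.py` above pins its public surface with an explicit `__all__`, which is what keeps the star-importing facades in this diff predictable. A minimal sketch with hypothetical names:

```python
# mypackage/backend/__init__.py -- hypothetical module in the style of
# 'openpype/client/mongo/__init__.py' above. With '__all__' pinned,
# 'from mypackage.backend import *' exposes exactly these names and
# private helpers cannot leak through the facade.
from .impl import connect, query  # assumed implementation module

__all__ = (
    "connect",
    "query",
)
```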
+ """ + + if isinstance(in_id, six.string_types): + return ObjectId(in_id) + return in_id + + +def convert_ids(in_ids): + """Helper function for conversion of ids from string to ObjectId. + + Args: + in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that + should be converted to right type for queries. + + Returns: + List[ObjectId]: Converted ids to ObjectId. + """ + + _output = set() + for in_id in in_ids: + if in_id is not None: + _output.add(convert_id(in_id)) + return list(_output) + + +def get_projects(active=True, inactive=False, fields=None): + """Yield all project entity documents. + + Args: + active (Optional[bool]): Include active projects. Defaults to True. + inactive (Optional[bool]): Include inactive projects. + Defaults to False. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Yields: + dict: Project entity data which can be reduced to specified 'fields'. + None is returned if project with specified filters was not found. + """ + mongodb = get_project_database() + for project_name in mongodb.collection_names(): + if project_name in ("system.indexes",): + continue + project_doc = get_project( + project_name, active=active, inactive=inactive, fields=fields + ) + if project_doc is not None: + yield project_doc + + +def get_project(project_name, active=True, inactive=True, fields=None): + """Return project entity document by project name. + + Args: + project_name (str): Name of project. + active (Optional[bool]): Allow active project. Defaults to True. + inactive (Optional[bool]): Allow inactive project. Defaults to True. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Project entity data which can be reduced to + specified 'fields'. None is returned if project with specified + filters was not found. + """ + # Skip if both are disabled + if not active and not inactive: + return None + + query_filter = {"type": "project"} + # Keep query untouched if both should be available + if active and inactive: + pass + + # Add filter to keep only active + elif active: + query_filter["$or"] = [ + {"data.active": {"$exists": False}}, + {"data.active": True}, + ] + + # Add filter to keep only inactive + elif inactive: + query_filter["$or"] = [ + {"data.active": {"$exists": False}}, + {"data.active": False}, + ] + + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_whole_project(project_name): + """Receive all documents from project. + + Helper that can be used to get all document from whole project. For example + for backups etc. + + Returns: + Cursor: Query cursor as iterable which returns all documents from + project collection. + """ + + conn = get_project_connection(project_name) + return conn.find({}) + + +def get_asset_by_id(project_name, asset_id, fields=None): + """Receive asset data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_id (Union[str, ObjectId]): Asset's id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. 
+ """ + + asset_id = convert_id(asset_id) + if not asset_id: + return None + + query_filter = {"type": "asset", "_id": asset_id} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_asset_by_name(project_name, asset_name, fields=None): + """Receive asset data by its name. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_name (str): Asset's name. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. + """ + + if not asset_name: + return None + + query_filter = {"type": "asset", "name": asset_name} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +# NOTE this could be just public function? +# - any better variable name instead of 'standard'? +# - same approach can be used for rest of types +def _get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + standard=True, + archived=False, + fields=None +): + """Assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. + standard (bool): Query standard assets (type 'asset'). + archived (bool): Query archived assets (type 'archived_asset'). + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + asset_types = [] + if standard: + asset_types.append("asset") + if archived: + asset_types.append("archived_asset") + + if not asset_types: + return [] + + if len(asset_types) == 1: + query_filter = {"type": asset_types[0]} + else: + query_filter = {"type": {"$in": asset_types}} + + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + query_filter["_id"] = {"$in": asset_ids} + + if asset_names is not None: + if not asset_names: + return [] + query_filter["name"] = {"$in": list(asset_names)} + + if parent_ids is not None: + parent_ids = convert_ids(parent_ids) + if not parent_ids: + return [] + query_filter["data.visualParent"] = {"$in": parent_ids} + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + archived=False, + fields=None +): + """Assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. 
+ archived (bool): Add also archived assets. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, + asset_ids, + asset_names, + parent_ids, + True, + archived, + fields + ) + + +def get_archived_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + fields=None +): + """Archived assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all archived assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, asset_ids, asset_names, parent_ids, False, True, fields + ) + + +def get_asset_ids_with_subsets(project_name, asset_ids=None): + """Find out which assets have existing subsets. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Look only for entered + asset ids. + + Returns: + Iterable[ObjectId]: Asset ids that have existing subsets. + """ + + subset_query = { + "type": "subset" + } + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + subset_query["parent"] = {"$in": asset_ids} + + conn = get_project_connection(project_name) + result = conn.aggregate([ + { + "$match": subset_query + }, + { + "$group": { + "_id": "$parent", + "count": {"$sum": 1} + } + } + ]) + asset_ids_with_subsets = [] + for item in result: + asset_id = item["_id"] + count = item["count"] + if count > 0: + asset_ids_with_subsets.append(asset_id) + return asset_ids_with_subsets + + +def get_subset_by_id(project_name, subset_id, fields=None): + """Single subset entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Id of subset which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + query_filters = {"type": "subset", "_id": subset_id} + conn = get_project_connection(project_name) + return conn.find_one(query_filters, _prepare_fields(fields)) + + +def get_subset_by_name(project_name, subset_name, asset_id, fields=None): + """Single subset entity data by its name and its version id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_name (str): Name of subset. + asset_id (Union[str, ObjectId]): Id of parent asset. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. 
+ + Returns: + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. + """ + if not subset_name: + return None + + asset_id = convert_id(asset_id) + if not asset_id: + return None + + query_filters = { + "type": "subset", + "name": subset_name, + "parent": asset_id + } + conn = get_project_connection(project_name) + return conn.find_one(query_filters, _prepare_fields(fields)) + + +def get_subsets( + project_name, + subset_ids=None, + subset_names=None, + asset_ids=None, + names_by_asset_ids=None, + archived=False, + fields=None +): + """Subset entities data from one project filtered by entered filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should be + queried. Filter ignored if 'None' is passed. + subset_names (Iterable[str]): Subset names that should be queried. + Filter ignored if 'None' is passed. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids under which + should look for the subsets. Filter ignored if 'None' is passed. + names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering + using asset ids and list of subset names under the asset. + archived (bool): Look for archived subsets too. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching subsets. + """ + + subset_types = ["subset"] + if archived: + subset_types.append("archived_subset") + + if len(subset_types) == 1: + query_filter = {"type": subset_types[0]} + else: + query_filter = {"type": {"$in": subset_types}} + + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + query_filter["parent"] = {"$in": asset_ids} + + if subset_ids is not None: + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return [] + query_filter["_id"] = {"$in": subset_ids} + + if subset_names is not None: + if not subset_names: + return [] + query_filter["name"] = {"$in": list(subset_names)} + + if names_by_asset_ids is not None: + or_query = [] + for asset_id, names in names_by_asset_ids.items(): + if asset_id and names: + or_query.append({ + "parent": convert_id(asset_id), + "name": {"$in": list(names)} + }) + if not or_query: + return [] + query_filter["$or"] = or_query + + conn = get_project_connection(project_name) + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_subset_families(project_name, subset_ids=None): + """Set of main families of subsets. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should + be queried. All subsets from project are used if 'None' is passed. + + Returns: + set[str]: Main families of matching subsets. 
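+
+ # Illustrative sketch of 'get_subsets' with the complex
+ # 'names_by_asset_ids' filter; 'demo_project' and 'asset_id' are
+ # hypothetical.
+ #
+ #     subset_docs = list(get_subsets(
+ #         "demo_project",
+ #         names_by_asset_ids={asset_id: ["modelMain", "rigMain"]},
+ #         fields=["_id", "name", "parent"]
+ #     ))
+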
+ """ + + subset_filter = { + "type": "subset" + } + if subset_ids is not None: + if not subset_ids: + return set() + subset_filter["_id"] = {"$in": list(subset_ids)} + + conn = get_project_connection(project_name) + result = list(conn.aggregate([ + {"$match": subset_filter}, + {"$project": { + "family": {"$arrayElemAt": ["$data.families", 0]} + }}, + {"$group": { + "_id": "family_group", + "families": {"$addToSet": "$family"} + }} + ])) + if result: + return set(result[0]["families"]) + return set() + + +def get_version_by_id(project_name, version_id, fields=None): + """Single version entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + version_id = convert_id(version_id) + if not version_id: + return None + + query_filter = { + "type": {"$in": ["version", "hero_version"]}, + "_id": version_id + } + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_version_by_name(project_name, version, subset_id, fields=None): + """Single version entity data by its name and subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + version (int): name of version entity (its version). + subset_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + conn = get_project_connection(project_name) + query_filter = { + "type": "version", + "parent": subset_id, + "name": version + } + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def version_is_latest(project_name, version_id): + """Is version the latest from its subset. + + Note: + Hero versions are considered as latest. + + Todo: + Maybe raise exception when version was not found? + + Args: + project_name (str):Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Version id which is checked. + + Returns: + bool: True if is latest version from subset else False. + """ + + version_id = convert_id(version_id) + if not version_id: + return False + version_doc = get_version_by_id( + project_name, version_id, fields=["_id", "type", "parent"] + ) + # What to do when version is not found? 
+ if not version_doc: + return False + + if version_doc["type"] == "hero_version": + return True + + last_version = get_last_version_by_subset_id( + project_name, version_doc["parent"], fields=["_id"] + ) + return last_version["_id"] == version_id + + +def _get_versions( + project_name, + subset_ids=None, + version_ids=None, + versions=None, + standard=True, + hero=False, + fields=None +): + version_types = [] + if standard: + version_types.append("version") + + if hero: + version_types.append("hero_version") + + if not version_types: + return [] + elif len(version_types) == 1: + query_filter = {"type": version_types[0]} + else: + query_filter = {"type": {"$in": version_types}} + + if subset_ids is not None: + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return [] + query_filter["parent"] = {"$in": subset_ids} + + if version_ids is not None: + version_ids = convert_ids(version_ids) + if not version_ids: + return [] + query_filter["_id"] = {"$in": version_ids} + + if versions is not None: + versions = list(versions) + if not versions: + return [] + + if len(versions) == 1: + query_filter["name"] = versions[0] + else: + query_filter["name"] = {"$in": versions} + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=False, + fields=None +): + """Version entities data from one project filtered by entered filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + version_ids (Iterable[Union[str, ObjectId]]): Version ids that will + be queried. Filter ignored if 'None' is passed. + subset_ids (Iterable[str]): Subset ids that will be queried. + Filter ignored if 'None' is passed. + versions (Iterable[int]): Version names (as integers). + Filter ignored if 'None' is passed. + hero (bool): Look also for hero versions. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching versions. + """ + + return _get_versions( + project_name, + subset_ids, + version_ids, + versions, + standard=True, + hero=hero, + fields=fields + ) + + +def get_hero_version_by_subset_id(project_name, subset_id, fields=None): + """Hero version by subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Subset id under which + is hero version. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Hero version entity data which can be reduced to + specified 'fields'. None is returned if hero version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + versions = list(_get_versions( + project_name, + subset_ids=[subset_id], + standard=False, + hero=True, + fields=fields + )) + if versions: + return versions[0] + return None + + +def get_hero_version_by_id(project_name, version_id, fields=None): + """Hero version by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Hero version id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. 
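+
+ # Illustrative sketch of 'get_versions' including hero versions;
+ # 'demo_project' and 'subset_id' are hypothetical.
+ #
+ #     version_docs = list(get_versions(
+ #         "demo_project",
+ #         subset_ids=[subset_id],
+ #         hero=True,  # also yield 'hero_version' documents
+ #     ))
+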
+ + Returns: + Union[Dict, None]: Hero version entity data which can be reduced to + specified 'fields'. None is returned if hero version with specified + filters was not found. + """ + + version_id = convert_id(version_id) + if not version_id: + return None + + versions = list(_get_versions( + project_name, + version_ids=[version_id], + standard=False, + hero=True, + fields=fields + )) + if versions: + return versions[0] + return None + + +def get_hero_versions( + project_name, + subset_ids=None, + version_ids=None, + fields=None +): + """Hero version entities data from one project filtered by entered filters. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): Subset ids for which + should look for hero versions. Filter ignored if 'None' is passed. + version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter + ignored if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor|list: Iterable yielding hero versions matching passed filters. + """ + + return _get_versions( + project_name, + subset_ids, + version_ids, + standard=False, + hero=True, + fields=fields + ) + + +def get_output_link_versions(project_name, version_id, fields=None): + """Versions where passed version was used as input. + + Question: + Not 100% sure about the usage of the function so the name and docstring + maybe does not match what it does? + + Args: + project_name (str): Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Version id which can be used + as input link for other versions. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Iterable: Iterable cursor yielding versions that are used as input + links for passed version. + """ + + version_id = convert_id(version_id) + if not version_id: + return [] + + conn = get_project_connection(project_name) + # Does make sense to look for hero versions? + query_filter = { + "type": "version", + "data.inputLinks.id": version_id + } + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_last_versions(project_name, subset_ids, active=None, fields=None): + """Latest versions for entered subset_ids. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids. + active (Optional[bool]): If True only active versions are returned. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + dict[ObjectId, int]: Key is subset id and value is last version name. 
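+
+ # Illustrative sketch of 'get_output_link_versions'; 'demo_project' and
+ # 'version_id' are hypothetical.
+ #
+ #     dependent_versions = list(get_output_link_versions(
+ #         "demo_project", version_id, fields=["_id", "parent"]
+ #     ))
+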
+ """ + + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return {} + + if fields is not None: + fields = list(fields) + if not fields: + return {} + + # Avoid double query if only name and _id are requested + name_needed = False + limit_query = False + if fields: + fields_s = set(fields) + if "name" in fields_s: + name_needed = True + fields_s.remove("name") + + for field in ("_id", "parent"): + if field in fields_s: + fields_s.remove(field) + limit_query = len(fields_s) == 0 + + group_item = { + "_id": "$parent", + "_version_id": {"$last": "$_id"} + } + # Add name if name is needed (only for limit query) + if name_needed: + group_item["name"] = {"$last": "$name"} + + aggregate_filter = { + "type": "version", + "parent": {"$in": subset_ids} + } + if active is False: + aggregate_filter["data.active"] = active + elif active is True: + aggregate_filter["$or"] = [ + {"data.active": {"$exists": 0}}, + {"data.active": active}, + ] + + aggregation_pipeline = [ + # Find all versions of those subsets + {"$match": aggregate_filter}, + # Sorting versions all together + {"$sort": {"name": 1}}, + # Group them by "parent", but only take the last + {"$group": group_item} + ] + + conn = get_project_connection(project_name) + aggregate_result = conn.aggregate(aggregation_pipeline) + if limit_query: + output = {} + for item in aggregate_result: + subset_id = item["_id"] + item_data = {"_id": item["_version_id"], "parent": subset_id} + if name_needed: + item_data["name"] = item["name"] + output[subset_id] = item_data + return output + + version_ids = [ + doc["_version_id"] + for doc in aggregate_result + ] + + fields = _prepare_fields(fields, ["parent"]) + + version_docs = get_versions( + project_name, version_ids=version_ids, fields=fields + ) + + return { + version_doc["parent"]: version_doc + for version_doc in version_docs + } + + +def get_last_version_by_subset_id(project_name, subset_id, fields=None): + """Last version for passed subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + last_versions = get_last_versions( + project_name, subset_ids=[subset_id], fields=fields + ) + return last_versions.get(subset_id) + + +def get_last_version_by_subset_name( + project_name, subset_name, asset_id=None, asset_name=None, fields=None +): + """Last version for passed subset name under asset id/name. + + It is required to pass 'asset_id' or 'asset_name'. Asset id is recommended + if is available. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_name (str): Name of subset. + asset_id (Union[str, ObjectId]): Asset id which is parent of passed + subset name. + asset_name (str): Asset name which is parent of passed subset name. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. 
+ """ + + if not asset_id and not asset_name: + return None + + if not asset_id: + asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) + if not asset_doc: + return None + asset_id = asset_doc["_id"] + subset_doc = get_subset_by_name( + project_name, subset_name, asset_id, fields=["_id"] + ) + if not subset_doc: + return None + return get_last_version_by_subset_id( + project_name, subset_doc["_id"], fields=fields + ) + + +def get_representation_by_id(project_name, representation_id, fields=None): + """Representation entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + representation_id (Union[str, ObjectId]): Representation id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Representation entity data which can be reduced to + specified 'fields'. None is returned if representation with + specified filters was not found. + """ + + if not representation_id: + return None + + repre_types = ["representation", "archived_representation"] + query_filter = { + "type": {"$in": repre_types} + } + if representation_id is not None: + query_filter["_id"] = convert_id(representation_id) + + conn = get_project_connection(project_name) + + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_representation_by_name( + project_name, representation_name, version_id, fields=None +): + """Representation entity data by its name and its version id. + + Args: + project_name (str): Name of project where to look for queried entities. + representation_name (str): Representation name. + version_id (Union[str, ObjectId]): Id of parent version entity. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[dict[str, Any], None]: Representation entity data which can be + reduced to specified 'fields'. None is returned if representation + with specified filters was not found. 
+ """ + + version_id = convert_id(version_id) + if not version_id or not representation_name: + return None + repre_types = ["representation", "archived_representations"] + query_filter = { + "type": {"$in": repre_types}, + "name": representation_name, + "parent": version_id + } + + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def _flatten_dict(data): + flatten_queue = collections.deque() + flatten_queue.append(data) + output = {} + while flatten_queue: + item = flatten_queue.popleft() + for key, value in item.items(): + if not isinstance(value, dict): + output[key] = value + continue + + tmp = {} + for subkey, subvalue in value.items(): + new_key = "{}.{}".format(key, subkey) + tmp[new_key] = subvalue + flatten_queue.append(tmp) + return output + + +def _regex_filters(filters): + output = [] + for key, value in filters.items(): + regexes = [] + a_values = [] + if isinstance(value, PatternType): + regexes.append(value) + elif isinstance(value, (list, tuple, set)): + for item in value: + if isinstance(item, PatternType): + regexes.append(item) + else: + a_values.append(item) + else: + a_values.append(value) + + key_filters = [] + if len(a_values) == 1: + key_filters.append({key: a_values[0]}) + elif a_values: + key_filters.append({key: {"$in": a_values}}) + + for regex in regexes: + key_filters.append({key: {"$regex": regex}}) + + if len(key_filters) == 1: + output.append(key_filters[0]) + else: + output.append({"$or": key_filters}) + + return output + + +def _get_representations( + project_name, + representation_ids, + representation_names, + version_ids, + context_filters, + names_by_version_ids, + standard, + archived, + fields +): + default_output = [] + repre_types = [] + if standard: + repre_types.append("representation") + if archived: + repre_types.append("archived_representation") + + if not repre_types: + return default_output + + if len(repre_types) == 1: + query_filter = {"type": repre_types[0]} + else: + query_filter = {"type": {"$in": repre_types}} + + if representation_ids is not None: + representation_ids = convert_ids(representation_ids) + if not representation_ids: + return default_output + query_filter["_id"] = {"$in": representation_ids} + + if representation_names is not None: + if not representation_names: + return default_output + query_filter["name"] = {"$in": list(representation_names)} + + if version_ids is not None: + version_ids = convert_ids(version_ids) + if not version_ids: + return default_output + query_filter["parent"] = {"$in": version_ids} + + or_queries = [] + if names_by_version_ids is not None: + or_query = [] + for version_id, names in names_by_version_ids.items(): + if version_id and names: + or_query.append({ + "parent": convert_id(version_id), + "name": {"$in": list(names)} + }) + if not or_query: + return default_output + or_queries.append(or_query) + + if context_filters is not None: + if not context_filters: + return [] + _flatten_filters = _flatten_dict(context_filters) + flatten_filters = {} + for key, value in _flatten_filters.items(): + if not key.startswith("context"): + key = "context.{}".format(key) + flatten_filters[key] = value + + for item in _regex_filters(flatten_filters): + for key, value in item.items(): + if key != "$or": + query_filter[key] = value + + elif value: + or_queries.append(value) + + if len(or_queries) == 1: + query_filter["$or"] = or_queries[0] + elif or_queries: + and_query = [] + for or_query in or_queries: + if isinstance(or_query, list): + or_query 
= {"$or": or_query} + and_query.append(or_query) + query_filter["$and"] = and_query + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + archived=False, + standard=True, + fields=None +): + """Representation entities data from one project filtered by filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + representation_ids (Iterable[Union[str, ObjectId]]): Representation ids + used as filter. Filter ignored if 'None' is passed. + representation_names (Iterable[str]): Representations names used + as filter. Filter ignored if 'None' is passed. + version_ids (Iterable[str]): Subset ids used as parent filter. Filter + ignored if 'None' is passed. + context_filters (Dict[str, List[str, PatternType]]): Filter by + representation context fields. + names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering + using version ids and list of names under the version. + archived (bool): Output will also contain archived representations. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching representations. + """ + + return _get_representations( + project_name=project_name, + representation_ids=representation_ids, + representation_names=representation_names, + version_ids=version_ids, + context_filters=context_filters, + names_by_version_ids=names_by_version_ids, + standard=standard, + archived=archived, + fields=fields + ) + + +def get_archived_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + fields=None +): + """Archived representation entities data from project with applied filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + representation_ids (Iterable[Union[str, ObjectId]]): Representation ids + used as filter. Filter ignored if 'None' is passed. + representation_names (Iterable[str]): Representations names used + as filter. Filter ignored if 'None' is passed. + version_ids (Iterable[str]): Subset ids used as parent filter. Filter + ignored if 'None' is passed. + context_filters (Dict[str, List[str, PatternType]]): Filter by + representation context fields. + names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering + using version ids and list of names under the version. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching representations. + """ + + return _get_representations( + project_name=project_name, + representation_ids=representation_ids, + representation_names=representation_names, + version_ids=version_ids, + context_filters=context_filters, + names_by_version_ids=names_by_version_ids, + standard=False, + archived=True, + fields=fields + ) + + +def get_representations_parents(project_name, representations): + """Prepare parents of representation entities. + + Each item of returned dictionary contains version, subset, asset + and project in that order. 
+ + Args: + project_name (str): Name of project where to look for queried entities. + representations (List[dict]): Representation entities with at least + '_id' and 'parent' keys. + + Returns: + dict[ObjectId, tuple]: Parents by representation id. + """ + + repre_docs_by_version_id = collections.defaultdict(list) + version_docs_by_version_id = {} + version_docs_by_subset_id = collections.defaultdict(list) + subset_docs_by_subset_id = {} + subset_docs_by_asset_id = collections.defaultdict(list) + output = {} + for repre_doc in representations: + repre_id = repre_doc["_id"] + version_id = repre_doc["parent"] + output[repre_id] = (None, None, None, None) + repre_docs_by_version_id[version_id].append(repre_doc) + + version_docs = get_versions( + project_name, + version_ids=repre_docs_by_version_id.keys(), + hero=True + ) + for version_doc in version_docs: + version_id = version_doc["_id"] + subset_id = version_doc["parent"] + version_docs_by_version_id[version_id] = version_doc + version_docs_by_subset_id[subset_id].append(version_doc) + + subset_docs = get_subsets( + project_name, subset_ids=version_docs_by_subset_id.keys() + ) + for subset_doc in subset_docs: + subset_id = subset_doc["_id"] + asset_id = subset_doc["parent"] + subset_docs_by_subset_id[subset_id] = subset_doc + subset_docs_by_asset_id[asset_id].append(subset_doc) + + asset_docs = get_assets( + project_name, asset_ids=subset_docs_by_asset_id.keys() + ) + asset_docs_by_id = { + asset_doc["_id"]: asset_doc + for asset_doc in asset_docs + } + + project_doc = get_project(project_name) + + for version_id, repre_docs in repre_docs_by_version_id.items(): + asset_doc = None + subset_doc = None + version_doc = version_docs_by_version_id.get(version_id) + if version_doc: + subset_id = version_doc["parent"] + subset_doc = subset_docs_by_subset_id.get(subset_id) + if subset_doc: + asset_id = subset_doc["parent"] + asset_doc = asset_docs_by_id.get(asset_id) + + for repre_doc in repre_docs: + repre_id = repre_doc["_id"] + output[repre_id] = ( + version_doc, subset_doc, asset_doc, project_doc + ) + return output + + +def get_representation_parents(project_name, representation): + """Prepare parents of representation entity. + + Each item of returned dictionary contains version, subset, asset + and project in that order. + + Args: + project_name (str): Name of project where to look for queried entities. + representation (dict): Representation entities with at least + '_id' and 'parent' keys. + + Returns: + dict[ObjectId, tuple]: Parents by representation id. + """ + + if not representation: + return None + + repre_id = representation["_id"] + parents_by_repre_id = get_representations_parents( + project_name, [representation] + ) + return parents_by_repre_id[repre_id] + + +def get_thumbnail_id_from_source(project_name, src_type, src_id): + """Receive thumbnail id from source entity. + + Args: + project_name (str): Name of project where to look for queried entities. + src_type (str): Type of source entity ('asset', 'version'). + src_id (Union[str, ObjectId]): Id of source entity. + + Returns: + Union[ObjectId, None]: Thumbnail id assigned to entity. If Source + entity does not have any thumbnail id assigned. 
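+
+ # Illustrative sketch: 'get_representation_parents' returns the
+ # (version, subset, asset, project) tuple for a single representation;
+ # 'demo_project' and 'repre_doc' are hypothetical.
+ #
+ #     version_doc, subset_doc, asset_doc, project_doc = (
+ #         get_representation_parents("demo_project", repre_doc)
+ #     )
+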
+ """ + + if not src_type or not src_id: + return None + + query_filter = {"_id": convert_id(src_id)} + + conn = get_project_connection(project_name) + src_doc = conn.find_one(query_filter, {"data.thumbnail_id"}) + if src_doc: + return src_doc.get("data", {}).get("thumbnail_id") + return None + + +def get_thumbnails(project_name, thumbnail_ids, fields=None): + """Receive thumbnails entity data. + + Thumbnail entity can be used to receive binary content of thumbnail based + on its content and ThumbnailResolvers. + + Args: + project_name (str): Name of project where to look for queried entities. + thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail + entities. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + cursor: Cursor of queried documents. + """ + + if thumbnail_ids: + thumbnail_ids = convert_ids(thumbnail_ids) + + if not thumbnail_ids: + return [] + query_filter = { + "type": "thumbnail", + "_id": {"$in": thumbnail_ids} + } + conn = get_project_connection(project_name) + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_thumbnail( + project_name, thumbnail_id, entity_type, entity_id, fields=None +): + """Receive thumbnail entity data. + + Args: + project_name (str): Name of project where to look for queried entities. + thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Thumbnail entity data which can be reduced to + specified 'fields'.None is returned if thumbnail with specified + filters was not found. + """ + + if not thumbnail_id: + return None + query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_workfile_info( + project_name, asset_id, task_name, filename, fields=None +): + """Document with workfile information. + + Warning: + Query is based on filename and context which does not meant it will + find always right and expected result. Information have limited usage + and is not recommended to use it as source information about workfile. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_id (Union[str, ObjectId]): Id of asset entity. + task_name (str): Task name on asset. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Workfile entity data which can be reduced to + specified 'fields'.None is returned if workfile with specified + filters was not found. 
+ """ + + if not asset_id or not task_name or not filename: + return None + + query_filter = { + "type": "workfile", + "parent": convert_id(asset_id), + "task_name": task_name, + "filename": filename + } + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +""" +## Custom data storage: +- Settings - OP settings overrides and local settings +- Logging - logs from Logger +- Webpublisher - jobs +- Ftrack - events +- Maya - Shaders + - openpype/hosts/maya/api/shader_definition_editor.py + - openpype/hosts/maya/plugins/publish/validate_model_name.py + +## Global publish plugins +- openpype/plugins/publish/extract_hierarchy_avalon.py + Create: + - asset + Update: + - asset + +## Lib +- openpype/lib/avalon_context.py + Update: + - workfile data +- openpype/lib/project_backpack.py + Update: + - project +""" diff --git a/openpype/client/mongo/entity_links.py b/openpype/client/mongo/entity_links.py new file mode 100644 index 0000000000..fd13a2d83b --- /dev/null +++ b/openpype/client/mongo/entity_links.py @@ -0,0 +1,240 @@ +from .mongo import get_project_connection +from .entities import ( + get_assets, + get_asset_by_id, + get_version_by_id, + get_representation_by_id, + convert_id, +) + + +def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None): + """Extract linked asset ids from asset document. + + One of asset document or asset id must be passed. + + Note: + Asset links now works only from asset to assets. + + Args: + asset_doc (dict): Asset document from DB. + + Returns: + List[Union[ObjectId, str]]: Asset ids of input links. + """ + + output = [] + if not asset_doc and not asset_id: + return output + + if not asset_doc: + asset_doc = get_asset_by_id( + project_name, asset_id, fields=["data.inputLinks"] + ) + + input_links = asset_doc["data"].get("inputLinks") + if not input_links: + return output + + for item in input_links: + # Backwards compatibility for "_id" key which was replaced with + # "id" + if "_id" in item: + link_id = item["_id"] + else: + link_id = item["id"] + output.append(link_id) + return output + + +def get_linked_assets( + project_name, asset_doc=None, asset_id=None, fields=None +): + """Return linked assets based on passed asset document. + + One of asset document or asset id must be passed. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_doc (Dict[str, Any]): Asset document from database. + asset_id (Union[ObjectId, str]): Asset id. Can be used instead of + asset document. + fields (Iterable[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + List[Dict[str, Any]]: Asset documents of input links for passed + asset doc. + """ + + if not asset_doc: + if not asset_id: + return [] + asset_doc = get_asset_by_id( + project_name, + asset_id, + fields=["data.inputLinks"] + ) + if not asset_doc: + return [] + + link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc) + if not link_ids: + return [] + + return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) + + +def get_linked_representation_id( + project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None +): + """Returns list of linked ids of particular type (if provided). + + One of representation document or representation id must be passed. + Note: + Representation links now works only from representation through version + back to representations. + + Args: + project_name (str): Name of project where look for links. 
+ repre_doc (Dict[str, Any]): Representation document. + repre_id (Union[ObjectId, str]): Representation id. + link_type (str): Type of link (e.g. 'reference', ...). + max_depth (int): Limit recursion level. Default: 0 + + Returns: + List[ObjectId] Linked representation ids. + """ + + if repre_doc: + repre_id = repre_doc["_id"] + + if repre_id: + repre_id = convert_id(repre_id) + + if not repre_id and not repre_doc: + return [] + + version_id = None + if repre_doc: + version_id = repre_doc.get("parent") + + if not version_id: + repre_doc = get_representation_by_id( + project_name, repre_id, fields=["parent"] + ) + version_id = repre_doc["parent"] + + if not version_id: + return [] + + version_doc = get_version_by_id( + project_name, version_id, fields=["type", "version_id"] + ) + if version_doc["type"] == "hero_version": + version_id = version_doc["version_id"] + + if max_depth is None: + max_depth = 0 + + match = { + "_id": version_id, + # Links are not stored to hero versions at this moment so filter + # is limited to just versions + "type": "version" + } + + graph_lookup = { + "from": project_name, + "startWith": "$data.inputLinks.id", + "connectFromField": "data.inputLinks.id", + "connectToField": "_id", + "as": "outputs_recursive", + "depthField": "depth" + } + if max_depth != 0: + # We offset by -1 since 0 basically means no recursion + # but the recursion only happens after the initial lookup + # for outputs. + graph_lookup["maxDepth"] = max_depth - 1 + + query_pipeline = [ + # Match + {"$match": match}, + # Recursive graph lookup for inputs + {"$graphLookup": graph_lookup} + ] + + conn = get_project_connection(project_name) + result = conn.aggregate(query_pipeline) + referenced_version_ids = _process_referenced_pipeline_result( + result, link_type + ) + if not referenced_version_ids: + return [] + + ref_ids = conn.distinct( + "_id", + filter={ + "parent": {"$in": list(referenced_version_ids)}, + "type": "representation" + } + ) + + return list(ref_ids) + + +def _process_referenced_pipeline_result(result, link_type): + """Filters result from pipeline for particular link_type. + + Pipeline cannot use link_type directly in a query. 
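+
+ # Illustrative sketch of 'get_linked_representation_id'; 'demo_project'
+ # and 'repre_id' are hypothetical.
+ #
+ #     linked_repre_ids = get_linked_representation_id(
+ #         "demo_project",
+ #         repre_id=repre_id,
+ #         link_type="reference",  # only follow links of this type
+ #         max_depth=2,  # caps the $graphLookup recursion
+ #     )
+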
+ + Returns: + (list) + """ + + referenced_version_ids = set() + correctly_linked_ids = set() + for item in result: + input_links = item.get("data", {}).get("inputLinks") + if not input_links: + continue + + _filter_input_links( + input_links, + link_type, + correctly_linked_ids + ) + + # outputs_recursive in random order, sort by depth + outputs_recursive = item.get("outputs_recursive") + if not outputs_recursive: + continue + + for output in sorted(outputs_recursive, key=lambda o: o["depth"]): + # Leaf + if output["_id"] not in correctly_linked_ids: + continue + + _filter_input_links( + output.get("data", {}).get("inputLinks"), + link_type, + correctly_linked_ids + ) + + referenced_version_ids.add(output["_id"]) + + return referenced_version_ids + + +def _filter_input_links(input_links, link_type, correctly_linked_ids): + if not input_links: # to handle hero versions + return + + for input_link in input_links: + if link_type and input_link["type"] != link_type: + continue + + link_id = input_link.get("id") or input_link.get("_id") + if link_id is not None: + correctly_linked_ids.add(link_id) diff --git a/openpype/client/mongo.py b/openpype/client/mongo/mongo.py similarity index 98% rename from openpype/client/mongo.py rename to openpype/client/mongo/mongo.py index 251041c028..2be426efeb 100644 --- a/openpype/client/mongo.py +++ b/openpype/client/mongo/mongo.py @@ -11,6 +11,7 @@ from bson.json_util import ( CANONICAL_JSON_OPTIONS ) +from openpype import AYON_SERVER_ENABLED if sys.version_info[0] == 2: from urlparse import urlparse, parse_qs else: @@ -134,7 +135,7 @@ def should_add_certificate_path_to_mongo_url(mongo_url): add_certificate = False # Check if url 'ssl' or 'tls' are set to 'true' for key in ("ssl", "tls"): - if key in query and "true" in query["ssl"]: + if key in query and "true" in query[key]: add_certificate = True break @@ -206,6 +207,8 @@ class OpenPypeMongoConnection: @classmethod def create_connection(cls, mongo_url, timeout=None, retry_attempts=None): + if AYON_SERVER_ENABLED: + raise RuntimeError("Created mongo connection in AYON mode") parsed = urlparse(mongo_url) # Force validation of scheme if parsed.scheme not in ["mongodb", "mongodb+srv"]: @@ -221,7 +224,7 @@ class OpenPypeMongoConnection: "serverSelectionTimeoutMS": timeout } if should_add_certificate_path_to_mongo_url(mongo_url): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() mongo_client = pymongo.MongoClient(mongo_url, **kwargs) diff --git a/openpype/client/mongo/operations.py b/openpype/client/mongo/operations.py new file mode 100644 index 0000000000..3537aa4a3d --- /dev/null +++ b/openpype/client/mongo/operations.py @@ -0,0 +1,632 @@ +import re +import copy +import collections + +from bson.objectid import ObjectId +from pymongo import DeleteOne, InsertOne, UpdateOne + +from openpype.client.operations_base import ( + REMOVED_VALUE, + CreateOperation, + UpdateOperation, + DeleteOperation, + BaseOperationsSession +) +from .mongo import get_project_connection +from .entities import get_project + + +PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_" +PROJECT_NAME_REGEX = re.compile( + "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS) +) + +CURRENT_PROJECT_SCHEMA = "openpype:project-3.0" +CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0" +CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0" +CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0" +CURRENT_VERSION_SCHEMA = "openpype:version-3.0" +CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0" +CURRENT_REPRESENTATION_SCHEMA = 
"openpype:representation-2.0" +CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0" +CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0" + + +def _create_or_convert_to_mongo_id(mongo_id): + if mongo_id is None: + return ObjectId() + return ObjectId(mongo_id) + + +def new_project_document( + project_name, project_code, config, data=None, entity_id=None +): + """Create skeleton data of project document. + + Args: + project_name (str): Name of project. Used as identifier of a project. + project_code (str): Shorter version of projet without spaces and + special characters (in most of cases). Should be also considered + as unique name across projects. + config (Dic[str, Any]): Project config consist of roots, templates, + applications and other project Anatomy related data. + data (Dict[str, Any]): Project data with information about it's + attributes (e.g. 'fps' etc.) or integration specific keys. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of project document. + """ + + if data is None: + data = {} + + data["code"] = project_code + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "name": project_name, + "type": CURRENT_PROJECT_SCHEMA, + "entity_data": data, + "config": config + } + + +def new_asset_document( + name, project_id, parent_id, parents, data=None, entity_id=None +): + """Create skeleton data of asset document. + + Args: + name (str): Is considered as unique identifier of asset in project. + project_id (Union[str, ObjectId]): Id of project doument. + parent_id (Union[str, ObjectId]): Id of parent asset. + parents (List[str]): List of parent assets names. + data (Dict[str, Any]): Asset document data. Empty dictionary is used + if not passed. Value of 'parent_id' is used to fill 'visualParent'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of asset document. + """ + + if data is None: + data = {} + if parent_id is not None: + parent_id = ObjectId(parent_id) + data["visualParent"] = parent_id + data["parents"] = parents + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "asset", + "name": name, + "parent": ObjectId(project_id), + "data": data, + "schema": CURRENT_ASSET_DOC_SCHEMA + } + + +def new_subset_document(name, family, asset_id, data=None, entity_id=None): + """Create skeleton data of subset document. + + Args: + name (str): Is considered as unique identifier of subset under asset. + family (str): Subset's family. + asset_id (Union[str, ObjectId]): Id of parent asset. + data (Dict[str, Any]): Subset document data. Empty dictionary is used + if not passed. Value of 'family' is used to fill 'family'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of subset document. + """ + + if data is None: + data = {} + data["family"] = family + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_SUBSET_SCHEMA, + "type": "subset", + "name": name, + "data": data, + "parent": asset_id + } + + +def new_version_doc(version, subset_id, data=None, entity_id=None): + """Create skeleton data of version document. + + Args: + version (int): Is considered as unique identifier of version + under subset. + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. 
+ entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_VERSION_SCHEMA, + "type": "version", + "name": int(version), + "parent": subset_id, + "data": data + } + + +def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None): + """Create skeleton data of hero version document. + + Args: + version_id (ObjectId): Id of the source version the hero version + points to. + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of hero version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_HERO_VERSION_SCHEMA, + "type": "hero_version", + "version_id": version_id, + "parent": subset_id, + "data": data + } + + +def new_representation_doc( + name, version_id, context, data=None, entity_id=None +): + """Create skeleton data of representation document. + + Args: + name (str): Is considered as unique identifier of representation + under version. + version_id (Union[str, ObjectId]): Id of parent version. + context (Dict[str, Any]): Representation context used to fill template + or to query. + data (Dict[str, Any]): Representation document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of representation document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_REPRESENTATION_SCHEMA, + "type": "representation", + "parent": version_id, + "name": name, + "data": data, + + # Imprint shortcut to context for performance reasons. + "context": context + } + + +def new_thumbnail_doc(data=None, entity_id=None): + """Create skeleton data of thumbnail document. + + Args: + data (Dict[str, Any]): Thumbnail document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of thumbnail document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "thumbnail", + "schema": CURRENT_THUMBNAIL_SCHEMA, + "data": data + } + + +def new_workfile_info_doc( + filename, asset_id, task_name, files, data=None, entity_id=None +): + """Create skeleton data of workfile info document. + + Workfile document is at this moment used primarily for artist notes. + + Args: + filename (str): Filename of workfile. + asset_id (Union[str, ObjectId]): Id of asset under which the workfile + lives. + task_name (str): Task under which the workfile was created. + files (List[str]): List of rootless filepaths related to workfile. + data (Dict[str, Any]): Additional metadata. + + Returns: + Dict[str, Any]: Skeleton of workfile info document.
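+
+ # Illustrative sketch of 'new_representation_doc'; 'version_id' is a
+ # hypothetical ObjectId and the context values are examples.
+ #
+ #     repre_doc = new_representation_doc(
+ #         "exr",
+ #         version_id,
+ #         context={"asset": "sh010", "subset": "renderMain", "representation": "exr"},
+ #     )
+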
+ """ + + if not data: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "workfile", + "parent": ObjectId(asset_id), + "task_name": task_name, + "filename": filename, + "data": data, + "files": files + } + + +def _prepare_update_data(old_doc, new_doc, replace): + changes = {} + for key, value in new_doc.items(): + if key not in old_doc or value != old_doc[key]: + changes[key] = value + + if replace: + for key in old_doc.keys(): + if key not in new_doc: + changes[key] = REMOVED_VALUE + return changes + + +def prepare_subset_update_data(old_doc, new_doc, replace=True): + """Compare two subset documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_version_update_data(old_doc, new_doc, replace=True): + """Compare two version documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_hero_version_update_data(old_doc, new_doc, replace=True): + """Compare two hero version documents and prepare update data. + + Based on compared values will create update data for 'UpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_representation_update_data(old_doc, new_doc, replace=True): + """Compare two representation documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): + """Compare two workfile info documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +class MongoCreateOperation(CreateOperation): + """Operation to create an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + data (Dict[str, Any]): Data of entity that will be created. + """ + + operation_name = "create" + + def __init__(self, project_name, entity_type, data): + super(MongoCreateOperation, self).__init__( + project_name, entity_type, data + ) + + if "_id" not in self._data: + self._data["_id"] = ObjectId() + else: + self._data["_id"] = ObjectId(self._data["_id"]) + + @property + def entity_id(self): + return self._data["_id"] + + def to_mongo_operation(self): + return InsertOne(copy.deepcopy(self._data)) + + +class MongoUpdateOperation(UpdateOperation): + """Operation to update an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. 
+ e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Identifier of an entity. + update_data (Dict[str, Any]): Key -> value changes that will be set in + database. If value is set to 'REMOVED_VALUE' the key will be + removed. Only first level of dictionary is checked (on purpose). + """ + + operation_name = "update" + + def __init__(self, project_name, entity_type, entity_id, update_data): + super(MongoUpdateOperation, self).__init__( + project_name, entity_type, entity_id, update_data + ) + + self._entity_id = ObjectId(self._entity_id) + + def to_mongo_operation(self): + unset_data = {} + set_data = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + unset_data[key] = None + else: + set_data[key] = value + + op_data = {} + if unset_data: + op_data["$unset"] = unset_data + if set_data: + op_data["$set"] = set_data + + if not op_data: + return None + + return UpdateOne( + {"_id": self.entity_id}, + op_data + ) + + +class MongoDeleteOperation(DeleteOperation): + """Operation to delete an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Entity id that will be removed. + """ + + operation_name = "delete" + + def __init__(self, project_name, entity_type, entity_id): + super(MongoDeleteOperation, self).__init__( + project_name, entity_type, entity_id + ) + + self._entity_id = ObjectId(self._entity_id) + + def to_mongo_operation(self): + return DeleteOne({"_id": self.entity_id}) + + +class MongoOperationsSession(BaseOperationsSession): + """Session storing operations that should happen in order. + + At this moment the session does not handle anything special and can be + considered a plain list of operations that are executed one after + another. Creating the same entity multiple times is not handled in any + special way and document values are not validated. + + All operations must be related to a single project. + + Args: + project_name (str): Project name to which the operations are related. + """ + + def commit(self): + """Commit session operations.""" + + operations, self._operations = self._operations, [] + if not operations: + return + + operations_by_project = collections.defaultdict(list) + for operation in operations: + operations_by_project[operation.project_name].append(operation) + + for project_name, operations in operations_by_project.items(): + bulk_writes = [] + for operation in operations: + mongo_op = operation.to_mongo_operation() + if mongo_op is not None: + bulk_writes.append(mongo_op) + + if bulk_writes: + collection = get_project_connection(project_name) + collection.bulk_write(bulk_writes) + + def create_entity(self, project_name, entity_type, data): + """Fast access to 'MongoCreateOperation'. + + Returns: + MongoCreateOperation: Object of create operation. + """ + + operation = MongoCreateOperation(project_name, entity_type, data) + self.add(operation) + return operation + + def update_entity(self, project_name, entity_type, entity_id, update_data): + """Fast access to 'MongoUpdateOperation'. + + Returns: + MongoUpdateOperation: Object of update operation. + """ + + operation = MongoUpdateOperation( + project_name, entity_type, entity_id, update_data + ) + self.add(operation) + return operation + + def delete_entity(self, project_name, entity_type, entity_id): + """Fast access to 'MongoDeleteOperation'. + + Returns: + MongoDeleteOperation: Object of delete operation.
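+
+ # Illustrative sketch of a session round-trip; 'demo_project' and
+ # 'asset_doc' are hypothetical.
+ #
+ #     session = MongoOperationsSession()
+ #     session.create_entity("demo_project", "asset", asset_doc)
+ #     session.update_entity(
+ #         "demo_project", "asset", asset_doc["_id"], {"data.fps": 24}
+ #     )
+ #     session.commit()  # one bulk_write per project
+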
+ """ + + operation = MongoDeleteOperation(project_name, entity_type, entity_id) + self.add(operation) + return operation + + +def create_project( + project_name, + project_code, + library_project=False, +): + """Create project using OpenPype settings. + + This project creation function is not validating project document on + creation. It is because project document is created blindly with only + minimum required information about project which is it's name, code, type + and schema. + + Entered project name must be unique and project must not exist yet. + + Note: + This function is here to be OP v4 ready but in v3 has more logic + to do. That's why inner imports are in the body. + + Args: + project_name(str): New project name. Should be unique. + project_code(str): Project's code should be unique too. + library_project(bool): Project is library project. + + Raises: + ValueError: When project name already exists in MongoDB. + + Returns: + dict: Created project document. + """ + + from openpype.settings import ProjectSettings, SaveWarningExc + from openpype.pipeline.schema import validate + + if get_project(project_name, fields=["name"]): + raise ValueError("Project with name \"{}\" already exists".format( + project_name + )) + + if not PROJECT_NAME_REGEX.match(project_name): + raise ValueError(( + "Project name \"{}\" contain invalid characters" + ).format(project_name)) + + project_doc = { + "type": "project", + "name": project_name, + "data": { + "code": project_code, + "library_project": library_project + }, + "schema": CURRENT_PROJECT_SCHEMA + } + + op_session = MongoOperationsSession() + # Insert document with basic data + create_op = op_session.create_entity( + project_name, project_doc["type"], project_doc + ) + op_session.commit() + + # Load ProjectSettings for the project and save it to store all attributes + # and Anatomy + try: + project_settings_entity = ProjectSettings(project_name) + project_settings_entity.save() + except SaveWarningExc as exc: + print(str(exc)) + except Exception: + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + project_doc = get_project(project_name) + + try: + # Validate created project document + validate(project_doc) + except Exception: + # Remove project if is not valid + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + return project_doc diff --git a/openpype/client/operations.py b/openpype/client/operations.py index e8c9d28636..8bc09dffd3 100644 --- a/openpype/client/operations.py +++ b/openpype/client/operations.py @@ -1,797 +1,24 @@ -import re -import uuid -import copy -import collections -from abc import ABCMeta, abstractmethod, abstractproperty +from openpype import AYON_SERVER_ENABLED -import six -from bson.objectid import ObjectId -from pymongo import DeleteOne, InsertOne, UpdateOne +from .operations_base import REMOVED_VALUE +if not AYON_SERVER_ENABLED: + from .mongo.operations import * + OperationsSession = MongoOperationsSession -from .mongo import get_project_connection -from .entities import get_project - -REMOVED_VALUE = object() - -PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_" -PROJECT_NAME_REGEX = re.compile( - "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS) -) - -CURRENT_PROJECT_SCHEMA = "openpype:project-3.0" -CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0" -CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0" -CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0" -CURRENT_VERSION_SCHEMA = 
"openpype:version-3.0" -CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0" -CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0" -CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0" -CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0" - - -def _create_or_convert_to_mongo_id(mongo_id): - if mongo_id is None: - return ObjectId() - return ObjectId(mongo_id) - - -def new_project_document( - project_name, project_code, config, data=None, entity_id=None -): - """Create skeleton data of project document. - - Args: - project_name (str): Name of project. Used as identifier of a project. - project_code (str): Shorter version of projet without spaces and - special characters (in most of cases). Should be also considered - as unique name across projects. - config (Dic[str, Any]): Project config consist of roots, templates, - applications and other project Anatomy related data. - data (Dict[str, Any]): Project data with information about it's - attributes (e.g. 'fps' etc.) or integration specific keys. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of project document. - """ - - if data is None: - data = {} - - data["code"] = project_code - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "name": project_name, - "type": CURRENT_PROJECT_SCHEMA, - "entity_data": data, - "config": config - } - - -def new_asset_document( - name, project_id, parent_id, parents, data=None, entity_id=None -): - """Create skeleton data of asset document. - - Args: - name (str): Is considered as unique identifier of asset in project. - project_id (Union[str, ObjectId]): Id of project doument. - parent_id (Union[str, ObjectId]): Id of parent asset. - parents (List[str]): List of parent assets names. - data (Dict[str, Any]): Asset document data. Empty dictionary is used - if not passed. Value of 'parent_id' is used to fill 'visualParent'. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of asset document. - """ - - if data is None: - data = {} - if parent_id is not None: - parent_id = ObjectId(parent_id) - data["visualParent"] = parent_id - data["parents"] = parents - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "asset", - "name": name, - "parent": ObjectId(project_id), - "data": data, - "schema": CURRENT_ASSET_DOC_SCHEMA - } - - -def new_subset_document(name, family, asset_id, data=None, entity_id=None): - """Create skeleton data of subset document. - - Args: - name (str): Is considered as unique identifier of subset under asset. - family (str): Subset's family. - asset_id (Union[str, ObjectId]): Id of parent asset. - data (Dict[str, Any]): Subset document data. Empty dictionary is used - if not passed. Value of 'family' is used to fill 'family'. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of subset document. - """ - - if data is None: - data = {} - data["family"] = family - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_SUBSET_SCHEMA, - "type": "subset", - "name": name, - "data": data, - "parent": asset_id - } - - -def new_version_doc(version, subset_id, data=None, entity_id=None): - """Create skeleton data of version document. - - Args: - version (int): Is considered as unique identifier of version - under subset. 
- subset_id (Union[str, ObjectId]): Id of parent subset. - data (Dict[str, Any]): Version document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_VERSION_SCHEMA, - "type": "version", - "name": int(version), - "parent": subset_id, - "data": data - } - - -def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None): - """Create skeleton data of hero version document. - - Args: - version_id (ObjectId): Is considered as unique identifier of version - under subset. - subset_id (Union[str, ObjectId]): Id of parent subset. - data (Dict[str, Any]): Version document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_HERO_VERSION_SCHEMA, - "type": "hero_version", - "version_id": version_id, - "parent": subset_id, - "data": data - } - - -def new_representation_doc( - name, version_id, context, data=None, entity_id=None -): - """Create skeleton data of asset document. - - Args: - version (int): Is considered as unique identifier of version - under subset. - version_id (Union[str, ObjectId]): Id of parent version. - context (Dict[str, Any]): Representation context used for fill template - of to query. - data (Dict[str, Any]): Representation document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_REPRESENTATION_SCHEMA, - "type": "representation", - "parent": version_id, - "name": name, - "data": data, - # Imprint shortcut to context for performance reasons. - "context": context - } - - -def new_thumbnail_doc(data=None, entity_id=None): - """Create skeleton data of thumbnail document. - - Args: - data (Dict[str, Any]): Thumbnail document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of thumbnail document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "thumbnail", - "schema": CURRENT_THUMBNAIL_SCHEMA, - "data": data - } - - -def new_workfile_info_doc( - filename, asset_id, task_name, files, data=None, entity_id=None -): - """Create skeleton data of workfile info document. - - Workfile document is at this moment used primarily for artist notes. - - Args: - filename (str): Filename of workfile. - asset_id (Union[str, ObjectId]): Id of asset under which workfile live. - task_name (str): Task under which was workfile created. - files (List[str]): List of rootless filepaths related to workfile. - data (Dict[str, Any]): Additional metadata. - - Returns: - Dict[str, Any]: Skeleton of workfile info document. 
- """ - - if not data: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "workfile", - "parent": ObjectId(asset_id), - "task_name": task_name, - "filename": filename, - "data": data, - "files": files - } - - -def _prepare_update_data(old_doc, new_doc, replace): - changes = {} - for key, value in new_doc.items(): - if key not in old_doc or value != old_doc[key]: - changes[key] = value - - if replace: - for key in old_doc.keys(): - if key not in new_doc: - changes[key] = REMOVED_VALUE - return changes - - -def prepare_subset_update_data(old_doc, new_doc, replace=True): - """Compare two subset documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_version_update_data(old_doc, new_doc, replace=True): - """Compare two version documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_hero_version_update_data(old_doc, new_doc, replace=True): - """Compare two hero version documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_representation_update_data(old_doc, new_doc, replace=True): - """Compare two representation documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): - """Compare two workfile info documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -@six.add_metaclass(ABCMeta) -class AbstractOperation(object): - """Base operation class. - - Operation represent a call into database. The call can create, change or - remove data. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. 
- """ - - def __init__(self, project_name, entity_type): - self._project_name = project_name - self._entity_type = entity_type - self._id = str(uuid.uuid4()) - - @property - def project_name(self): - return self._project_name - - @property - def id(self): - """Identifier of operation.""" - - return self._id - - @property - def entity_type(self): - return self._entity_type - - @abstractproperty - def operation_name(self): - """Stringified type of operation.""" - - pass - - @abstractmethod - def to_mongo_operation(self): - """Convert operation to Mongo batch operation.""" - - pass - - def to_data(self): - """Convert operation to data that can be converted to json or others. - - Warning: - Current state returns ObjectId objects which cannot be parsed by - json. - - Returns: - Dict[str, Any]: Description of operation. - """ - - return { - "id": self._id, - "entity_type": self.entity_type, - "project_name": self.project_name, - "operation": self.operation_name - } - - -class CreateOperation(AbstractOperation): - """Operation to create an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - data (Dict[str, Any]): Data of entity that will be created. - """ - - operation_name = "create" - - def __init__(self, project_name, entity_type, data): - super(CreateOperation, self).__init__(project_name, entity_type) - - if not data: - data = {} - else: - data = copy.deepcopy(dict(data)) - - if "_id" not in data: - data["_id"] = ObjectId() - else: - data["_id"] = ObjectId(data["_id"]) - - self._entity_id = data["_id"] - self._data = data - - def __setitem__(self, key, value): - self.set_value(key, value) - - def __getitem__(self, key): - return self.data[key] - - def set_value(self, key, value): - self.data[key] = value - - def get(self, key, *args, **kwargs): - return self.data.get(key, *args, **kwargs) - - @property - def entity_id(self): - return self._entity_id - - @property - def data(self): - return self._data - - def to_mongo_operation(self): - return InsertOne(copy.deepcopy(self._data)) - - def to_data(self): - output = super(CreateOperation, self).to_data() - output["data"] = copy.deepcopy(self.data) - return output - - -class UpdateOperation(AbstractOperation): - """Operation to update an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - entity_id (Union[str, ObjectId]): Identifier of an entity. - update_data (Dict[str, Any]): Key -> value changes that will be set in - database. If value is set to 'REMOVED_VALUE' the key will be - removed. Only first level of dictionary is checked (on purpose). 
- """ - - operation_name = "update" - - def __init__(self, project_name, entity_type, entity_id, update_data): - super(UpdateOperation, self).__init__(project_name, entity_type) - - self._entity_id = ObjectId(entity_id) - self._update_data = update_data - - @property - def entity_id(self): - return self._entity_id - - @property - def update_data(self): - return self._update_data - - def to_mongo_operation(self): - unset_data = {} - set_data = {} - for key, value in self._update_data.items(): - if value is REMOVED_VALUE: - unset_data[key] = None - else: - set_data[key] = value - - op_data = {} - if unset_data: - op_data["$unset"] = unset_data - if set_data: - op_data["$set"] = set_data - - if not op_data: - return None - - return UpdateOne( - {"_id": self.entity_id}, - op_data - ) - - def to_data(self): - changes = {} - for key, value in self._update_data.items(): - if value is REMOVED_VALUE: - value = None - changes[key] = value - - output = super(UpdateOperation, self).to_data() - output.update({ - "entity_id": self.entity_id, - "changes": changes - }) - return output - - -class DeleteOperation(AbstractOperation): - """Operation to delete an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - entity_id (Union[str, ObjectId]): Entity id that will be removed. - """ - - operation_name = "delete" - - def __init__(self, project_name, entity_type, entity_id): - super(DeleteOperation, self).__init__(project_name, entity_type) - - self._entity_id = ObjectId(entity_id) - - @property - def entity_id(self): - return self._entity_id - - def to_mongo_operation(self): - return DeleteOne({"_id": self.entity_id}) - - def to_data(self): - output = super(DeleteOperation, self).to_data() - output["entity_id"] = self.entity_id - return output - - -class OperationsSession(object): - """Session storing operations that should happen in an order. - - At this moment does not handle anything special can be sonsidered as - stupid list of operations that will happen after each other. If creation - of same entity is there multiple times it's handled in any way and document - values are not validated. - - All operations must be related to single project. - - Args: - project_name (str): Project name to which are operations related. - """ - - def __init__(self): - self._operations = [] - - def add(self, operation): - """Add operation to be processed. - - Args: - operation (BaseOperation): Operation that should be processed. - """ - if not isinstance( - operation, - (CreateOperation, UpdateOperation, DeleteOperation) - ): - raise TypeError("Expected Operation object got {}".format( - str(type(operation)) - )) - - self._operations.append(operation) - - def append(self, operation): - """Add operation to be processed. - - Args: - operation (BaseOperation): Operation that should be processed. - """ - - self.add(operation) - - def extend(self, operations): - """Add operations to be processed. - - Args: - operations (List[BaseOperation]): Operations that should be - processed. 
- """ - - for operation in operations: - self.add(operation) - - def remove(self, operation): - """Remove operation.""" - - self._operations.remove(operation) - - def clear(self): - """Clear all registered operations.""" - - self._operations = [] - - def to_data(self): - return [ - operation.to_data() - for operation in self._operations - ] - - def commit(self): - """Commit session operations.""" - - operations, self._operations = self._operations, [] - if not operations: - return - - operations_by_project = collections.defaultdict(list) - for operation in operations: - operations_by_project[operation.project_name].append(operation) - - for project_name, operations in operations_by_project.items(): - bulk_writes = [] - for operation in operations: - mongo_op = operation.to_mongo_operation() - if mongo_op is not None: - bulk_writes.append(mongo_op) - - if bulk_writes: - collection = get_project_connection(project_name) - collection.bulk_write(bulk_writes) - - def create_entity(self, project_name, entity_type, data): - """Fast access to 'CreateOperation'. - - Returns: - CreateOperation: Object of update operation. - """ - - operation = CreateOperation(project_name, entity_type, data) - self.add(operation) - return operation - - def update_entity(self, project_name, entity_type, entity_id, update_data): - """Fast access to 'UpdateOperation'. - - Returns: - UpdateOperation: Object of update operation. - """ - - operation = UpdateOperation( - project_name, entity_type, entity_id, update_data - ) - self.add(operation) - return operation - - def delete_entity(self, project_name, entity_type, entity_id): - """Fast access to 'DeleteOperation'. - - Returns: - DeleteOperation: Object of delete operation. - """ - - operation = DeleteOperation(project_name, entity_type, entity_id) - self.add(operation) - return operation - - -def create_project( - project_name, - project_code, - library_project=False, -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Note: - This function is here to be OP v4 ready but in v3 has more logic - to do. That's why inner imports are in the body. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. 
-    """
-
-    from openpype.settings import ProjectSettings, SaveWarningExc
-    from openpype.pipeline.schema import validate
-
-    if get_project(project_name, fields=["name"]):
-        raise ValueError("Project with name \"{}\" already exists".format(
-            project_name
-        ))
-
-    if not PROJECT_NAME_REGEX.match(project_name):
-        raise ValueError((
-            "Project name \"{}\" contain invalid characters"
-        ).format(project_name))
-
-    project_doc = {
-        "type": "project",
-        "name": project_name,
-        "data": {
-            "code": project_code,
-            "library_project": library_project,
-        },
-        "schema": CURRENT_PROJECT_SCHEMA
-    }
-
-    op_session = OperationsSession()
-    # Insert document with basic data
-    create_op = op_session.create_entity(
-        project_name, project_doc["type"], project_doc
+else:
+    from ayon_api.server_api import (
+        PROJECT_NAME_ALLOWED_SYMBOLS,
+        PROJECT_NAME_REGEX,
+    )
+    from .server.operations import *
+    from .mongo.operations import (
+        CURRENT_PROJECT_SCHEMA,
+        CURRENT_PROJECT_CONFIG_SCHEMA,
+        CURRENT_ASSET_DOC_SCHEMA,
+        CURRENT_SUBSET_SCHEMA,
+        CURRENT_VERSION_SCHEMA,
+        CURRENT_HERO_VERSION_SCHEMA,
+        CURRENT_REPRESENTATION_SCHEMA,
+        CURRENT_WORKFILE_INFO_SCHEMA,
+        CURRENT_THUMBNAIL_SCHEMA
     )
-    op_session.commit()
-
-    # Load ProjectSettings for the project and save it to store all attributes
-    # and Anatomy
-    try:
-        project_settings_entity = ProjectSettings(project_name)
-        project_settings_entity.save()
-    except SaveWarningExc as exc:
-        print(str(exc))
-    except Exception:
-        op_session.delete_entity(
-            project_name, project_doc["type"], create_op.entity_id
-        )
-        op_session.commit()
-        raise
-
-    project_doc = get_project(project_name)
-
-    try:
-        # Validate created project document
-        validate(project_doc)
-    except Exception:
-        # Remove project if is not valid
-        op_session.delete_entity(
-            project_name, project_doc["type"], create_op.entity_id
-        )
-        op_session.commit()
-        raise
-
-    return project_doc
diff --git a/openpype/client/operations_base.py b/openpype/client/operations_base.py
new file mode 100644
index 0000000000..887b237b1c
--- /dev/null
+++ b/openpype/client/operations_base.py
@@ -0,0 +1,289 @@
+import uuid
+import copy
+from abc import ABCMeta, abstractmethod, abstractproperty
+import six
+
+REMOVED_VALUE = object()
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractOperation(object):
+    """Base operation class.
+
+    Operation represents a call into the database. The call can create,
+    change or remove data.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+    """
+
+    def __init__(self, project_name, entity_type):
+        self._project_name = project_name
+        self._entity_type = entity_type
+        self._id = str(uuid.uuid4())
+
+    @property
+    def project_name(self):
+        return self._project_name
+
+    @property
+    def id(self):
+        """Identifier of operation."""
+
+        return self._id
+
+    @property
+    def entity_type(self):
+        return self._entity_type
+
+    @abstractproperty
+    def operation_name(self):
+        """Stringified type of operation."""
+
+        pass
+
+    def to_data(self):
+        """Convert operation to data that can be converted to json or others.
+
+        Warning:
+            Current state returns ObjectId objects which cannot be parsed by
+            json.
+
+        Returns:
+            Dict[str, Any]: Description of operation.
+        """
+
+        return {
+            "id": self._id,
+            "entity_type": self.entity_type,
+            "project_name": self.project_name,
+            "operation": self.operation_name
+        }
+
+
+class CreateOperation(AbstractOperation):
+    """Operation to create an entity.
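+
+    Entity id is provided by storage specific subclasses through the
+    abstract 'entity_id' property (e.g. the Mongo variant works with an
+    ObjectId).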
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        data (Dict[str, Any]): Data of entity that will be created.
+    """
+
+    operation_name = "create"
+
+    def __init__(self, project_name, entity_type, data):
+        super(CreateOperation, self).__init__(project_name, entity_type)
+
+        if not data:
+            data = {}
+        else:
+            data = copy.deepcopy(dict(data))
+        self._data = data
+
+    def __setitem__(self, key, value):
+        self.set_value(key, value)
+
+    def __getitem__(self, key):
+        return self.data[key]
+
+    def set_value(self, key, value):
+        self.data[key] = value
+
+    def get(self, key, *args, **kwargs):
+        return self.data.get(key, *args, **kwargs)
+
+    @abstractproperty
+    def entity_id(self):
+        pass
+
+    @property
+    def data(self):
+        return self._data
+
+    def to_data(self):
+        output = super(CreateOperation, self).to_data()
+        output["data"] = copy.deepcopy(self.data)
+        return output
+
+
+class UpdateOperation(AbstractOperation):
+    """Operation to update an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        entity_id (Union[str, ObjectId]): Identifier of an entity.
+        update_data (Dict[str, Any]): Key -> value changes that will be set in
+            database. If value is set to 'REMOVED_VALUE' the key will be
+            removed. Only first level of dictionary is checked (on purpose).
+    """
+
+    operation_name = "update"
+
+    def __init__(self, project_name, entity_type, entity_id, update_data):
+        super(UpdateOperation, self).__init__(project_name, entity_type)
+
+        self._entity_id = entity_id
+        self._update_data = update_data
+
+    @property
+    def entity_id(self):
+        return self._entity_id
+
+    @property
+    def update_data(self):
+        return self._update_data
+
+    def to_data(self):
+        changes = {}
+        for key, value in self._update_data.items():
+            if value is REMOVED_VALUE:
+                value = None
+            changes[key] = value
+
+        output = super(UpdateOperation, self).to_data()
+        output.update({
+            "entity_id": self.entity_id,
+            "changes": changes
+        })
+        return output
+
+
+class DeleteOperation(AbstractOperation):
+    """Operation to delete an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        entity_id (Union[str, ObjectId]): Entity id that will be removed.
+    """
+
+    operation_name = "delete"
+
+    def __init__(self, project_name, entity_type, entity_id):
+        super(DeleteOperation, self).__init__(project_name, entity_type)
+
+        self._entity_id = entity_id
+
+    @property
+    def entity_id(self):
+        return self._entity_id
+
+    def to_data(self):
+        output = super(DeleteOperation, self).to_data()
+        output["entity_id"] = self.entity_id
+        return output
+
+
+class BaseOperationsSession(object):
+    """Session storing operations that should happen in order.
+
+    At this moment it does not handle anything special and can be considered
+    a plain list of operations that will happen one after another. If
+    creation of the same entity is added multiple times it is not handled in
+    any special way and document values are not validated.
+    """
+
+    def __init__(self):
+        self._operations = []
+
+    def __len__(self):
+        return len(self._operations)
+
+    def add(self, operation):
+        """Add operation to be processed.
+
+        Args:
+            operation (BaseOperation): Operation that should be processed.
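+
+        Raises:
+            TypeError: If 'operation' is not a create, update or delete
+                operation instance.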
+        """
+        if not isinstance(
+            operation,
+            (CreateOperation, UpdateOperation, DeleteOperation)
+        ):
+            raise TypeError("Expected Operation object, got {}".format(
+                str(type(operation))
+            ))
+
+        self._operations.append(operation)
+
+    def append(self, operation):
+        """Add operation to be processed.
+
+        Args:
+            operation (BaseOperation): Operation that should be processed.
+        """
+
+        self.add(operation)
+
+    def extend(self, operations):
+        """Add operations to be processed.
+
+        Args:
+            operations (List[BaseOperation]): Operations that should be
+                processed.
+        """
+
+        for operation in operations:
+            self.add(operation)
+
+    def remove(self, operation):
+        """Remove operation."""
+
+        self._operations.remove(operation)
+
+    def clear(self):
+        """Clear all registered operations."""
+
+        self._operations = []
+
+    def to_data(self):
+        return [
+            operation.to_data()
+            for operation in self._operations
+        ]
+
+    @abstractmethod
+    def commit(self):
+        """Commit session operations."""
+        pass
+
+    def create_entity(self, project_name, entity_type, data):
+        """Fast access to 'CreateOperation'.
+
+        Returns:
+            CreateOperation: Object of create operation.
+        """
+
+        operation = CreateOperation(project_name, entity_type, data)
+        self.add(operation)
+        return operation
+
+    def update_entity(self, project_name, entity_type, entity_id, update_data):
+        """Fast access to 'UpdateOperation'.
+
+        Returns:
+            UpdateOperation: Object of update operation.
+        """
+
+        operation = UpdateOperation(
+            project_name, entity_type, entity_id, update_data
+        )
+        self.add(operation)
+        return operation
+
+    def delete_entity(self, project_name, entity_type, entity_id):
+        """Fast access to 'DeleteOperation'.
+
+        Returns:
+            DeleteOperation: Object of delete operation.
+        """
+
+        operation = DeleteOperation(project_name, entity_type, entity_id)
+        self.add(operation)
+        return operation
diff --git a/openpype/hosts/celaction/hooks/__init__.py b/openpype/client/server/__init__.py
similarity index 100%
rename from openpype/hosts/celaction/hooks/__init__.py
rename to openpype/client/server/__init__.py
diff --git a/openpype/client/server/constants.py b/openpype/client/server/constants.py
new file mode 100644
index 0000000000..1d3f94c702
--- /dev/null
+++ b/openpype/client/server/constants.py
@@ -0,0 +1,18 @@
+# --- Folders ---
+DEFAULT_FOLDER_FIELDS = {
+    "id",
+    "name",
+    "path",
+    "parentId",
+    "active",
+    "parents",
+    "thumbnailId"
+}
+
+REPRESENTATION_FILES_FIELDS = {
+    "files.name",
+    "files.hash",
+    "files.id",
+    "files.path",
+    "files.size",
+}
diff --git a/openpype/client/server/conversion_utils.py b/openpype/client/server/conversion_utils.py
new file mode 100644
index 0000000000..8c18cb1c13
--- /dev/null
+++ b/openpype/client/server/conversion_utils.py
@@ -0,0 +1,1339 @@
+import os
+import arrow
+import collections
+import json
+
+import six
+
+from openpype.client.operations_base import REMOVED_VALUE
+from openpype.client.mongo.operations import (
+    CURRENT_PROJECT_SCHEMA,
+    CURRENT_ASSET_DOC_SCHEMA,
+    CURRENT_SUBSET_SCHEMA,
+    CURRENT_VERSION_SCHEMA,
+    CURRENT_HERO_VERSION_SCHEMA,
+    CURRENT_REPRESENTATION_SCHEMA,
+    CURRENT_WORKFILE_INFO_SCHEMA,
+)
+from .constants import REPRESENTATION_FILES_FIELDS
+from .utils import create_entity_id, prepare_entity_changes
+
+# --- Project entity ---
+PROJECT_FIELDS_MAPPING_V3_V4 = {
+    "_id": {"name"},
+    "name": {"name"},
+    "data": {"data", "code"},
+    "data.library_project": {"library"},
+    "data.code": {"code"},
+    "data.active": {"active"},
+}
+
+# TODO this should not be hardcoded but received from server!!!
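+# Mapping convention (applies to the mappings below as well): each v3
+# (Mongo) field name maps to the set of v4 (AYON) fields needed to
+# reconstruct it, e.g. the v3 '_id' of a project is derived from the v4
+# 'name' field.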
+# --- Folder entity --- +FOLDER_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "label": {"label"}, + "data": { + "parentId", "parents", "active", "tasks", "thumbnailId" + }, + "data.visualParent": {"parentId"}, + "data.parents": {"parents"}, + "data.active": {"active"}, + "data.thumbnail_id": {"thumbnailId"}, + "data.entityType": {"folderType"} +} + +# --- Subset entity --- +SUBSET_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "data.active": {"active"}, + "parent": {"folderId"} +} + +# --- Version entity --- +VERSION_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"version"}, + "parent": {"productId"} +} + +# --- Representation entity --- +REPRESENTATION_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "parent": {"versionId"}, + "context": {"context"}, + "files": {"files"}, +} + + +def project_fields_v3_to_v4(fields, con): + """Convert project fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + # TODO config fields + # - config.apps + # - config.groups + if not fields: + return None + + project_attribs = con.get_attributes_for_type("project") + output = set() + for field in fields: + # If config is needed the rest api call must be used + if field.startswith("config"): + return None + + if field in PROJECT_FIELDS_MAPPING_V3_V4: + output |= PROJECT_FIELDS_MAPPING_V3_V4[field] + if field == "data": + output |= { + "attrib.{}".format(attr) + for attr in project_attribs + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key in project_attribs: + output.add("attrib.{}".format(data_key)) + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "name" not in output: + output.add("name") + return output + + +def _get_default_template_name(templates): + default_template = None + for name, template in templates.items(): + if name == "default": + return "default" + + if default_template is None: + default_template = name + + return default_template + + +def _template_replacements_to_v3(template): + return ( + template + .replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + ) + + +def _convert_template_item(template): + # Others won't have 'directory' + if "directory" not in template: + return + folder = _template_replacements_to_v3(template.pop("directory")) + template["folder"] = folder + template["file"] = _template_replacements_to_v3(template["file"]) + template["path"] = "/".join( + (folder, template["file"]) + ) + + +def _fill_template_category(templates, cat_templates, cat_key): + default_template_name = _get_default_template_name(cat_templates) + for template_name, cat_template in cat_templates.items(): + _convert_template_item(cat_template) + if template_name == default_template_name: + templates[cat_key] = cat_template + else: + new_name = "{}_{}".format(cat_key, template_name) + templates["others"][new_name] = cat_template + + +def convert_v4_project_to_v3(project): + """Convert Project entity data from v4 structure to v3 structure. + + Args: + project (Dict[str, Any]): Project entity queried from v4 server. + + Returns: + Dict[str, Any]: Project converted to v3 structure. 
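+
+    Example (minimal, illustrative input):
+        >>> convert_v4_project_to_v3({"name": "my_project"})["type"]
+        'project'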
+ """ + + if not project: + return project + + project_name = project["name"] + output = { + "_id": project_name, + "name": project_name, + "schema": CURRENT_PROJECT_SCHEMA, + "type": "project" + } + + data = project.get("data") or {} + attribs = project.get("attrib") or {} + apps_attr = attribs.pop("applications", None) or [] + applications = [ + {"name": app_name} + for app_name in apps_attr + ] + data.update(attribs) + if "tools" in data: + data["tools_env"] = data.pop("tools") + + data["entityType"] = "Project" + + config = {} + project_config = project.get("config") + + if project_config: + config["apps"] = applications + config["roots"] = project_config["roots"] + + templates = project_config["templates"] + templates["defaults"] = templates.pop("common", None) or {} + + others_templates = templates.pop("others", None) or {} + new_others_templates = {} + templates["others"] = new_others_templates + for name, template in others_templates.items(): + _convert_template_item(template) + new_others_templates[name] = template + + for key in ( + "work", + "publish", + "hero" + ): + cat_templates = templates.pop(key) + _fill_template_category(templates, cat_templates, key) + + delivery_templates = templates.pop("delivery", None) or {} + new_delivery_templates = {} + for name, delivery_template in delivery_templates.items(): + new_delivery_templates[name] = "/".join( + (delivery_template["directory"], delivery_template["file"]) + ) + templates["delivery"] = new_delivery_templates + + config["templates"] = templates + + if "taskTypes" in project: + task_types = project["taskTypes"] + new_task_types = {} + for task_type in task_types: + name = task_type.pop("name") + # Change 'shortName' to 'short_name' + task_type["short_name"] = task_type.pop("shortName", None) + new_task_types[name] = task_type + + config["tasks"] = new_task_types + + if config: + output["config"] = config + + for data_key, key in ( + ("library_project", "library"), + ("code", "code"), + ("active", "active") + ): + if key in project: + data[data_key] = project[key] + + if "attrib" in project: + for key, value in project["attrib"].items(): + data[key] = value + + if data: + output["data"] = data + return output + + +def folder_fields_v3_to_v4(fields, con): + """Convert folder fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + if not fields: + return None + + folder_attributes = con.get_attributes_for_type("folder") + output = set() + for field in fields: + if field in ("schema", "type", "parent"): + continue + + if field in FOLDER_FIELDS_MAPPING_V3_V4: + output |= FOLDER_FIELDS_MAPPING_V3_V4[field] + if field == "data": + output |= { + "attrib.{}".format(attr) + for attr in folder_attributes + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key == "label": + output.add("name") + + elif data_key in ("icon", "color"): + continue + + elif data_key.startswith("tasks"): + output.add("tasks") + + elif data_key in folder_attributes: + output.add("attrib.{}".format(data_key)) + + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "id" not in output: + output.add("id") + return output + + +def convert_v4_tasks_to_v3(tasks): + """Convert v4 task item to v3 task. 
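+
+    For illustration (hypothetical input), ``[{"name": "modeling",
+    "taskType": "Modeling"}]`` becomes ``{"modeling": {"type": "Modeling"}}``.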
+
+    Args:
+        tasks (List[Dict[str, Any]]): Task entities.
+
+    Returns:
+        Dict[str, Dict[str, Any]]: Tasks in v3 variant ready for v3 asset.
+    """
+
+    output = {}
+    for task in tasks:
+        task_name = task["name"]
+        new_task = {
+            "type": task["taskType"]
+        }
+        output[task_name] = new_task
+    return output
+
+
+def convert_v4_folder_to_v3(folder, project_name):
+    """Convert v4 folder to v3 asset.
+
+    Args:
+        folder (Dict[str, Any]): Folder entity data.
+        project_name (str): Project name from which folder was queried.
+
+    Returns:
+        Dict[str, Any]: Converted v4 folder to v3 asset.
+    """
+
+    output = {
+        "_id": folder["id"],
+        "parent": project_name,
+        "type": "asset",
+        "schema": CURRENT_ASSET_DOC_SCHEMA
+    }
+
+    output_data = folder.get("data") or {}
+
+    if "name" in folder:
+        output["name"] = folder["name"]
+        output_data["label"] = folder["name"]
+
+    if "folderType" in folder:
+        output_data["entityType"] = folder["folderType"]
+
+    for src_key, dst_key in (
+        ("parentId", "visualParent"),
+        ("active", "active"),
+        ("thumbnailId", "thumbnail_id"),
+        ("parents", "parents"),
+    ):
+        if src_key in folder:
+            output_data[dst_key] = folder[src_key]
+
+    if "attrib" in folder:
+        output_data.update(folder["attrib"])
+
+    if "tools" in output_data:
+        output_data["tools_env"] = output_data.pop("tools")
+
+    if "tasks" in folder:
+        output_data["tasks"] = convert_v4_tasks_to_v3(folder["tasks"])
+
+    output["data"] = output_data
+
+    return output
+
+
+def subset_fields_v3_to_v4(fields, con):
+    """Convert subset fields from v3 to v4 structure.
+
+    Args:
+        fields (Union[Iterable(str), None]): fields to be converted.
+
+    Returns:
+        Union[Set(str), None]: Converted fields to v4 fields.
+    """
+
+    if not fields:
+        return None
+
+    product_attributes = con.get_attributes_for_type("product")
+
+    output = set()
+    for field in fields:
+        if field in ("schema", "type"):
+            continue
+
+        if field in SUBSET_FIELDS_MAPPING_V3_V4:
+            output |= SUBSET_FIELDS_MAPPING_V3_V4[field]
+
+        elif field == "data":
+            output.add("productType")
+            output.add("active")
+            output |= {
+                "attrib.{}".format(attr)
+                for attr in product_attributes
+            }
+
+        elif field.startswith("data"):
+            field_parts = field.split(".")
+            field_parts.pop(0)
+            data_key = ".".join(field_parts)
+            if data_key in ("family", "families"):
+                output.add("productType")
+
+            elif data_key in product_attributes:
+                output.add("attrib.{}".format(data_key))
+
+            else:
+                output.add("data")
+                print("Requested specific key from data {}".format(data_key))
+
+        else:
+            raise ValueError("Unknown field mapping for {}".format(field))
+
+    if "id" not in output:
+        output.add("id")
+    return output
+
+
+def convert_v4_subset_to_v3(subset):
+    output = {
+        "_id": subset["id"],
+        "type": "subset",
+        "schema": CURRENT_SUBSET_SCHEMA
+    }
+    if "folderId" in subset:
+        output["parent"] = subset["folderId"]
+
+    output_data = subset.get("data") or {}
+
+    if "name" in subset:
+        output["name"] = subset["name"]
+
+    if "active" in subset:
+        output_data["active"] = subset["active"]
+
+    if "attrib" in subset:
+        attrib = subset["attrib"]
+        if "productGroup" in attrib:
+            attrib["subsetGroup"] = attrib.pop("productGroup")
+        output_data.update(attrib)
+
+    family = subset.get("productType")
+    if family:
+        output_data["family"] = family
+        output_data["families"] = [family]
+
+    output["data"] = output_data
+
+    return output
+
+
+def version_fields_v3_to_v4(fields, con):
+    """Convert version fields from v3 to v4 structure.
+
+    Args:
+        fields (Union[Iterable(str), None]): fields to be converted.
+
+    Returns:
+        Union[Set(str), None]: Converted fields to v4 fields.
+    """
+
+    if not fields:
+        return None
+
+    version_attributes = con.get_attributes_for_type("version")
+
+    output = set()
+    for field in fields:
+        if field in ("type", "schema", "version_id"):
+            continue
+
+        if field in VERSION_FIELDS_MAPPING_V3_V4:
+            output |= VERSION_FIELDS_MAPPING_V3_V4[field]
+
+        elif field == "data":
+            output |= {
+                "attrib.{}".format(attr)
+                for attr in version_attributes
+            }
+            output |= {
+                "author",
+                "createdAt",
+                "thumbnailId",
+            }
+
+        elif field.startswith("data"):
+            field_parts = field.split(".")
+            field_parts.pop(0)
+            data_key = ".".join(field_parts)
+            if data_key in version_attributes:
+                output.add("attrib.{}".format(data_key))
+
+            elif data_key == "thumbnail_id":
+                output.add("thumbnailId")
+
+            elif data_key == "time":
+                output.add("createdAt")
+
+            elif data_key == "author":
+                output.add("author")
+
+            elif data_key in ("tags", ):
+                continue
+
+            else:
+                output.add("data")
+                print("Requested specific key from data {}".format(data_key))
+
+        else:
+            raise ValueError("Unknown field mapping for {}".format(field))
+
+    if "id" not in output:
+        output.add("id")
+    return output
+
+
+def convert_v4_version_to_v3(version):
+    """Convert v4 version entity to v3 version.
+
+    Args:
+        version (Dict[str, Any]): Queried v4 version entity.
+
+    Returns:
+        Dict[str, Any]: Converted version entity to v3 structure.
+    """
+
+    version_num = version["version"]
+    if version_num < 0:
+        output = {
+            "_id": version["id"],
+            "type": "hero_version",
+            "schema": CURRENT_HERO_VERSION_SCHEMA,
+        }
+        if "productId" in version:
+            output["parent"] = version["productId"]
+
+        if "data" in version:
+            output["data"] = version["data"]
+        return output
+
+    output = {
+        "_id": version["id"],
+        "type": "version",
+        "name": version_num,
+        "schema": CURRENT_VERSION_SCHEMA
+    }
+    if "productId" in version:
+        output["parent"] = version["productId"]
+
+    output_data = version.get("data") or {}
+    if "attrib" in version:
+        output_data.update(version["attrib"])
+
+    for src_key, dst_key in (
+        ("active", "active"),
+        ("thumbnailId", "thumbnail_id"),
+        ("author", "author")
+    ):
+        if src_key in version:
+            output_data[dst_key] = version[src_key]
+
+    if "createdAt" in version:
+        created_at = arrow.get(version["createdAt"])
+        output_data["time"] = created_at.strftime("%Y%m%dT%H%M%SZ")
+
+    output["data"] = output_data
+
+    return output
+
+
+def representation_fields_v3_to_v4(fields, con):
+    """Convert representation fields from v3 to v4 structure.
+
+    Args:
+        fields (Union[Iterable(str), None]): fields to be converted.
+
+    Returns:
+        Union[Set(str), None]: Converted fields to v4 fields.
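+
+    Example (illustrative; assumes a connected server API object 'con'):
+        >>> sorted(representation_fields_v3_to_v4({"_id", "name"}, con))
+        ['id', 'name']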
+    """
+
+    if not fields:
+        return None
+
+    representation_attributes = con.get_attributes_for_type("representation")
+
+    output = set()
+    for field in fields:
+        if field in ("type", "schema"):
+            continue
+
+        if field in REPRESENTATION_FIELDS_MAPPING_V3_V4:
+            output |= REPRESENTATION_FIELDS_MAPPING_V3_V4[field]
+
+        elif field.startswith("context"):
+            output.add("context")
+
+        # TODO: 'files' can have specific attributes but the keys in v3 and v4
+        #   are not the same (content is not the same)
+        elif field.startswith("files"):
+            output |= REPRESENTATION_FILES_FIELDS
+
+        elif field.startswith("data"):
+            output |= {
+                "attrib.{}".format(attr)
+                for attr in representation_attributes
+            }
+
+        else:
+            raise ValueError("Unknown field mapping for {}".format(field))
+
+    if "id" not in output:
+        output.add("id")
+    return output
+
+
+def convert_v4_representation_to_v3(representation):
+    """Convert v4 representation to v3 representation.
+
+    Args:
+        representation (Dict[str, Any]): Queried representation from v4
+            server.
+
+    Returns:
+        Dict[str, Any]: Converted representation to v3 structure.
+    """
+
+    output = {
+        "type": "representation",
+        "schema": CURRENT_REPRESENTATION_SCHEMA,
+    }
+    if "id" in representation:
+        output["_id"] = representation["id"]
+
+    for v3_key, v4_key in (
+        ("name", "name"),
+        ("parent", "versionId")
+    ):
+        if v4_key in representation:
+            output[v3_key] = representation[v4_key]
+
+    if "context" in representation:
+        context = representation["context"]
+        if isinstance(context, six.string_types):
+            context = json.loads(context)
+
+        if "asset" not in context and "folder" in context:
+            _c_folder = context["folder"]
+            context["asset"] = _c_folder["name"]
+
+        elif "asset" in context and "folder" not in context:
+            context["folder"] = {"name": context["asset"]}
+
+        if "product" in context:
+            _c_product = context.pop("product")
+            context["family"] = _c_product["type"]
+            context["subset"] = _c_product["name"]
+
+        output["context"] = context
+
+    if "files" in representation:
+        files = representation["files"]
+        new_files = []
+        # From GraphQl it is a list
+        if isinstance(files, list):
+            for file_info in files:
+                file_info["_id"] = file_info["id"]
+                new_files.append(file_info)
+
+        # From REST endpoint it is a dictionary
+        elif isinstance(files, dict):
+            for file_id, file_info in files.items():
+                file_info["_id"] = file_id
+                new_files.append(file_info)
+
+        for file_info in new_files:
+            if not file_info.get("sites"):
+                file_info["sites"] = [{
+                    "name": "studio"
+                }]
+
+        output["files"] = new_files
+
+    if representation.get("active") is False:
+        output["type"] = "archived_representation"
+        output["old_id"] = output["_id"]
+
+    output_data = representation.get("data") or {}
+    if "attrib" in representation:
+        output_data.update(representation["attrib"])
+
+    for key, data_key in (
+        ("active", "active"),
+    ):
+        if key in representation:
+            output_data[data_key] = representation[key]
+
+    if "template" in output_data:
+        output_data["template"] = (
+            output_data["template"]
+            .replace("{product[name]}", "{subset}")
+            .replace("{product[type]}", "{family}")
+        )
+
+    output["data"] = output_data
+
+    return output
+
+
+def workfile_info_fields_v3_to_v4(fields):
+    if not fields:
+        return None
+
+    new_fields = set()
+    fields = set(fields)
+    for v3_key, v4_key in (
+        ("_id", "id"),
+        ("files", "path"),
+        ("filename", "name"),
+        ("data", "data"),
+    ):
+        if v3_key in fields:
+            new_fields.add(v4_key)
+
+    if "parent" in fields or "task_name" in fields:
+        new_fields.add("taskId")
+
+    return new_fields
+
+
+def 
convert_v4_workfile_info_to_v3(workfile_info, task):
+    output = {
+        "type": "workfile",
+        "schema": CURRENT_WORKFILE_INFO_SCHEMA,
+    }
+    if "id" in workfile_info:
+        output["_id"] = workfile_info["id"]
+
+    if "path" in workfile_info:
+        output["files"] = [workfile_info["path"]]
+
+    if "name" in workfile_info:
+        output["filename"] = workfile_info["name"]
+
+    if "taskId" in workfile_info:
+        output["task_name"] = task["name"]
+        output["parent"] = task["folderId"]
+
+    return output
+
+
+def convert_create_asset_to_v4(asset, project, con):
+    folder_attributes = con.get_attributes_for_type("folder")
+
+    asset_data = asset["data"]
+    parent_id = asset_data["visualParent"]
+
+    folder = {
+        "name": asset["name"],
+        "parentId": parent_id,
+    }
+    entity_id = asset.get("_id")
+    if entity_id:
+        folder["id"] = entity_id
+
+    attribs = {}
+    data = {}
+    for key, value in asset_data.items():
+        if key in (
+            "visualParent",
+            "thumbnail_id",
+            "parents",
+            "inputLinks",
+            "avalon_mongo_id",
+        ):
+            continue
+
+        if key not in folder_attributes:
+            data[key] = value
+        elif value is not None:
+            attribs[key] = value
+
+    if attribs:
+        folder["attrib"] = attribs
+
+    if data:
+        folder["data"] = data
+    return folder
+
+
+def convert_create_task_to_v4(task, project, con):
+    if not project["taskTypes"]:
+        raise ValueError(
+            "Project \"{}\" does not have any task types".format(
+                project["name"]))
+
+    task_type = task["type"]
+    if task_type not in project["taskTypes"]:
+        task_type = tuple(project["taskTypes"].keys())[0]
+
+    return {
+        "name": task["name"],
+        "taskType": task_type,
+        "folderId": task["folderId"]
+    }
+
+
+def convert_create_subset_to_v4(subset, con):
+    product_attributes = con.get_attributes_for_type("product")
+
+    subset_data = subset["data"]
+    product_type = subset_data.get("family")
+    if not product_type:
+        product_type = subset_data["families"][0]
+
+    converted_product = {
+        "name": subset["name"],
+        "productType": product_type,
+        "folderId": subset["parent"],
+    }
+    entity_id = subset.get("_id")
+    if entity_id:
+        converted_product["id"] = entity_id
+
+    attribs = {}
+    data = {}
+    if "subsetGroup" in subset_data:
+        subset_data["productGroup"] = subset_data.pop("subsetGroup")
+    for key, value in subset_data.items():
+        if key not in product_attributes:
+            data[key] = value
+        elif value is not None:
+            attribs[key] = value
+
+    if attribs:
+        converted_product["attrib"] = attribs
+
+    if data:
+        converted_product["data"] = data
+
+    return converted_product
+
+
+def convert_create_version_to_v4(version, con):
+    version_attributes = con.get_attributes_for_type("version")
+    converted_version = {
+        "version": version["name"],
+        "productId": version["parent"],
+    }
+    entity_id = version.get("_id")
+    if entity_id:
+        converted_version["id"] = entity_id
+
+    version_data = version["data"]
+    attribs = {}
+    data = {}
+    for key, value in version_data.items():
+        if key not in version_attributes:
+            data[key] = value
+        elif value is not None:
+            attribs[key] = value
+
+    if attribs:
+        converted_version["attrib"] = attribs
+
+    if data:
+        converted_version["data"] = data
+
+    return converted_version
+
+
+def convert_create_hero_version_to_v4(hero_version, project_name, con):
+    if "version_id" in hero_version:
+        version_id = hero_version["version_id"]
+        version = con.get_version_by_id(project_name, version_id)
+        version["version"] = -version["version"]
+
+        for auto_key in (
+            "name",
+            "createdAt",
+            "updatedAt",
+            "author",
+        ):
+            version.pop(auto_key, None)
+
+        return version
+
+    version_attributes = 
con.get_attributes_for_type("version")
+    converted_version = {
+        "version": hero_version["version"],
+        "productId": hero_version["parent"],
+    }
+    entity_id = hero_version.get("_id")
+    if entity_id:
+        converted_version["id"] = entity_id
+
+    version_data = hero_version["data"]
+    attribs = {}
+    data = {}
+    for key, value in version_data.items():
+        if key not in version_attributes:
+            data[key] = value
+        elif value is not None:
+            attribs[key] = value
+
+    if attribs:
+        converted_version["attrib"] = attribs
+
+    if data:
+        converted_version["data"] = data
+
+    return converted_version
+
+
+def convert_create_representation_to_v4(representation, con):
+    representation_attributes = con.get_attributes_for_type("representation")
+
+    converted_representation = {
+        "name": representation["name"],
+        "versionId": representation["parent"],
+    }
+    entity_id = representation.get("_id")
+    if entity_id:
+        converted_representation["id"] = entity_id
+
+    if representation.get("type") == "archived_representation":
+        converted_representation["active"] = False
+
+    new_files = []
+    for file_item in representation["files"]:
+        new_file_item = {
+            key: value
+            for key, value in file_item.items()
+            if key in ("hash", "path", "size")
+        }
+        new_file_item.update({
+            "id": create_entity_id(),
+            "hash_type": "op3",
+            "name": os.path.basename(new_file_item["path"])
+        })
+        new_files.append(new_file_item)
+
+    converted_representation["files"] = new_files
+
+    context = representation["context"]
+    if "folder" not in context:
+        context["folder"] = {
+            "name": context.get("asset")
+        }
+
+    context["product"] = {
+        "type": context.pop("family", None),
+        "name": context.pop("subset", None),
+    }
+
+    attribs = {}
+    data = {
+        "context": context,
+    }
+
+    representation_data = representation["data"]
+    representation_data["template"] = (
+        representation_data["template"]
+        .replace("{subset}", "{product[name]}")
+        .replace("{family}", "{product[type]}")
+    )
+
+    for key, value in representation_data.items():
+        if key not in representation_attributes:
+            data[key] = value
+        elif value is not None:
+            attribs[key] = value
+
+    if attribs:
+        converted_representation["attrib"] = attribs
+
+    if data:
+        converted_representation["data"] = data
+
+    return converted_representation
+
+
+def convert_create_workfile_info_to_v4(data, project_name, con):
+    folder_id = data["parent"]
+    task_name = data["task_name"]
+    task = con.get_task_by_name(project_name, folder_id, task_name)
+    if not task:
+        return None
+
+    workfile_attributes = con.get_attributes_for_type("workfile")
+    filename = data["filename"]
+    possible_attribs = {
+        "extension": os.path.splitext(filename)[-1]
+    }
+    attribs = {}
+    for attr in workfile_attributes:
+        if attr in possible_attribs:
+            attribs[attr] = possible_attribs[attr]
+
+    output = {
+        "path": data["files"][0],
+        "name": filename,
+        "taskId": task["id"]
+    }
+    if "_id" in data:
+        output["id"] = data["_id"]
+
+    if attribs:
+        output["attrib"] = attribs
+
+    output_data = data.get("data")
+    if output_data:
+        output["data"] = output_data
+    return output
+
+
+def _from_flat_dict(data):
+    output = {}
+    for key, value in data.items():
+        output_value = output
+        subkeys = key.split(".")
+        last_key = subkeys.pop(-1)
+        for subkey in subkeys:
+            if subkey not in output_value:
+                output_value[subkey] = {}
+            output_value = output_value[subkey]
+
+        output_value[last_key] = value
+    return output
+
+
+def _to_flat_dict(data):
+    output = {}
+    flat_queue = collections.deque()
+    flat_queue.append(([], data))
+    while flat_queue:
+        item = flat_queue.popleft()
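+        # Each queued item pairs the key path walked so far with the
+        # nested dict that still needs flattening.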
+ parent_keys, data = item + for key, value in data.items(): + keys = list(parent_keys) + keys.append(key) + if isinstance(value, dict): + flat_queue.append((keys, value)) + else: + full_key = ".".join(keys) + output[full_key] = value + + return output + + +def convert_update_folder_to_v4(project_name, asset_id, update_data, con): + new_update_data = {} + + folder_attributes = con.get_attributes_for_type("folder") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + + has_new_parent = False + has_task_changes = False + parent_id = None + tasks = None + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if "type" in update_data: + new_update_data["active"] = update_data["type"] == "asset" + + if data: + if "thumbnail_id" in data: + new_update_data["thumbnailId"] = data.pop("thumbnail_id") + + if "tasks" in data: + tasks = data.pop("tasks") + has_task_changes = True + + if "visualParent" in data: + has_new_parent = True + parent_id = data.pop("visualParent") + + for key, value in data.items(): + if key in folder_attributes: + attribs[key] = value + else: + new_data[key] = value + + if "name" in update_data: + new_update_data["name"] = update_data["name"] + + if "type" in update_data: + new_type = update_data["type"] + if new_type == "asset": + new_update_data["active"] = True + elif new_type == "archived_asset": + new_update_data["active"] = False + + if has_new_parent: + new_update_data["parentId"] = parent_id + + if new_data: + print("Folder has new data: {}".format(new_data)) + new_update_data["data"] = new_data + + if attribs: + new_update_data["attrib"] = attribs + + if has_task_changes: + raise ValueError("Task changes of folder are not implemented") + + return _to_flat_dict(new_update_data) + + +def convert_update_subset_to_v4(project_name, subset_id, update_data, con): + new_update_data = {} + + product_attributes = con.get_attributes_for_type("product") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if data: + if "family" in data: + family = data.pop("family") + new_update_data["productType"] = family + + if "families" in data: + families = data.pop("families") + if "productType" not in new_update_data: + new_update_data["productType"] = families[0] + + if "subsetGroup" in data: + data["productGroup"] = data.pop("subsetGroup") + for key, value in data.items(): + if key in product_attributes: + if value is REMOVED_VALUE: + value = None + attribs[key] = value + + elif value is not REMOVED_VALUE: + new_data[key] = value + + if "name" in update_data: + new_update_data["name"] = update_data["name"] + + if "type" in update_data: + new_type = update_data["type"] + if new_type == "subset": + new_update_data["active"] = True + elif new_type == "archived_subset": + new_update_data["active"] = False + + if "parent" in update_data: + new_update_data["folderId"] = update_data["parent"] + + flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + + if new_data: + print("Subset has new data: {}".format(new_data)) + flat_data["data"] = new_data + + return flat_data + + +def convert_update_version_to_v4(project_name, version_id, update_data, con): + new_update_data = {} + + version_attributes = con.get_attributes_for_type("version") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if data: + if "author" in data: 
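+            # In v4 the author is a top level version field, not part
+            # of 'data'.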
+            new_update_data["author"] = data.pop("author")
+
+        if "thumbnail_id" in data:
+            new_update_data["thumbnailId"] = data.pop("thumbnail_id")
+
+        for key, value in data.items():
+            if key in version_attributes:
+                if value is REMOVED_VALUE:
+                    value = None
+                attribs[key] = value
+
+            elif value is not REMOVED_VALUE:
+                new_data[key] = value
+
+    if "name" in update_data:
+        new_update_data["version"] = update_data["name"]
+
+    if "type" in update_data:
+        new_type = update_data["type"]
+        if new_type == "version":
+            new_update_data["active"] = True
+        elif new_type == "archived_version":
+            new_update_data["active"] = False
+
+    if "parent" in update_data:
+        new_update_data["productId"] = update_data["parent"]
+
+    flat_data = _to_flat_dict(new_update_data)
+    if attribs:
+        flat_data["attrib"] = attribs
+
+    if new_data:
+        print("Version has new data: {}".format(new_data))
+        flat_data["data"] = new_data
+    return flat_data
+
+
+def convert_update_hero_version_to_v4(
+    project_name, hero_version_id, update_data, con
+):
+    if "version_id" not in update_data:
+        return None
+
+    version_id = update_data["version_id"]
+    hero_version = con.get_hero_version_by_id(project_name, hero_version_id)
+    version = con.get_version_by_id(project_name, version_id)
+    version["version"] = -version["version"]
+    version["id"] = hero_version_id
+
+    for auto_key in (
+        "name",
+        "createdAt",
+        "updatedAt",
+        "author",
+    ):
+        version.pop(auto_key, None)
+
+    return prepare_entity_changes(hero_version, version)
+
+
+def convert_update_representation_to_v4(
+    project_name, repre_id, update_data, con
+):
+    new_update_data = {}
+
+    representation_attributes = con.get_attributes_for_type("representation")
+    full_update_data = _from_flat_dict(update_data)
+    data = full_update_data.get("data")
+
+    new_data = {}
+    attribs = full_update_data.pop("attrib", {})
+    if data:
+        for key, value in data.items():
+            if key in representation_attributes:
+                attribs[key] = value
+            else:
+                new_data[key] = value
+
+    if "template" in attribs:
+        attribs["template"] = (
+            attribs["template"]
+            .replace("{family}", "{product[type]}")
+            .replace("{subset}", "{product[name]}")
+        )
+
+    if "name" in update_data:
+        new_update_data["name"] = update_data["name"]
+
+    if "type" in update_data:
+        new_type = update_data["type"]
+        if new_type == "representation":
+            new_update_data["active"] = True
+        elif new_type == "archived_representation":
+            new_update_data["active"] = False
+
+    if "parent" in update_data:
+        new_update_data["versionId"] = update_data["parent"]
+
+    if "context" in update_data:
+        context = update_data["context"]
+        if "folder" not in context and "asset" in context:
+            context["folder"] = {"name": context.pop("asset")}
+
+        if "family" in context or "subset" in context:
+            context["product"] = {
+                "name": context.pop("subset", None),
+                "type": context.pop("family", None),
+            }
+        new_data["context"] = context
+
+    if "files" in update_data:
+        new_files = update_data["files"]
+        if isinstance(new_files, dict):
+            new_files = list(new_files.values())
+
+        for item in new_files:
+            for key in tuple(item.keys()):
+                if key not in ("hash", "path", "size"):
+                    item.pop(key)
+            item.update({
+                "id": create_entity_id(),
+                "name": os.path.basename(item["path"]),
+                "hash_type": "op3",
+            })
+        new_update_data["files"] = new_files
+
+    flat_data = _to_flat_dict(new_update_data)
+    if attribs:
+        flat_data["attrib"] = attribs
+
+    if new_data:
+        print("Representation has new data: {}".format(new_data))
+        flat_data["data"] = new_data
+
+    return flat_data
+
+
+def convert_update_workfile_info_to_v4(
+    project_name, workfile_id, 
update_data, con +): + return { + key: value + for key, value in update_data.items() + if key.startswith("data") + } diff --git a/openpype/client/server/entities.py b/openpype/client/server/entities.py new file mode 100644 index 0000000000..16223d3d91 --- /dev/null +++ b/openpype/client/server/entities.py @@ -0,0 +1,694 @@ +import collections + +from ayon_api import get_server_api_connection + +from openpype.client.mongo.operations import CURRENT_THUMBNAIL_SCHEMA + +from .openpype_comp import get_folders_with_tasks +from .conversion_utils import ( + project_fields_v3_to_v4, + convert_v4_project_to_v3, + + folder_fields_v3_to_v4, + convert_v4_folder_to_v3, + + subset_fields_v3_to_v4, + convert_v4_subset_to_v3, + + version_fields_v3_to_v4, + convert_v4_version_to_v3, + + representation_fields_v3_to_v4, + convert_v4_representation_to_v3, + + workfile_info_fields_v3_to_v4, + convert_v4_workfile_info_to_v3, +) + + +def get_projects(active=True, inactive=False, library=None, fields=None): + if not active and not inactive: + return + + if active and inactive: + active = None + elif active: + active = True + elif inactive: + active = False + + con = get_server_api_connection() + fields = project_fields_v3_to_v4(fields, con) + for project in con.get_projects(active, library, fields=fields): + yield convert_v4_project_to_v3(project) + + +def get_project(project_name, active=True, inactive=False, fields=None): + # Skip if both are disabled + con = get_server_api_connection() + fields = project_fields_v3_to_v4(fields, con) + return convert_v4_project_to_v3( + con.get_project(project_name, fields=fields) + ) + + +def get_whole_project(*args, **kwargs): + raise NotImplementedError("'get_whole_project' not implemented") + + +def _get_subsets( + project_name, + subset_ids=None, + subset_names=None, + folder_ids=None, + names_by_folder_ids=None, + archived=False, + fields=None +): + # Convert fields and add minimum required fields + con = get_server_api_connection() + fields = subset_fields_v3_to_v4(fields, con) + if fields is not None: + for key in ( + "id", + "active" + ): + fields.add(key) + + active = True + if archived: + active = None + + for subset in con.get_products( + project_name, + subset_ids, + subset_names, + folder_ids=folder_ids, + names_by_folder_ids=names_by_folder_ids, + active=active, + fields=fields, + ): + yield convert_v4_subset_to_v3(subset) + + +def _get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=True, + standard=True, + latest=None, + active=None, + fields=None +): + con = get_server_api_connection() + + fields = version_fields_v3_to_v4(fields, con) + + # Make sure 'productId' and 'version' are available when hero versions + # are queried + if fields and hero: + fields = set(fields) + fields |= {"productId", "version"} + + queried_versions = con.get_versions( + project_name, + version_ids, + subset_ids, + versions, + hero, + standard, + latest, + active=active, + fields=fields + ) + + versions = [] + hero_versions = [] + for version in queried_versions: + if version["version"] < 0: + hero_versions.append(version) + else: + versions.append(convert_v4_version_to_v3(version)) + + if hero_versions: + subset_ids = set() + versions_nums = set() + for hero_version in hero_versions: + versions_nums.add(abs(hero_version["version"])) + subset_ids.add(hero_version["productId"]) + + hero_eq_versions = con.get_versions( + project_name, + product_ids=subset_ids, + versions=versions_nums, + hero=False, + fields=["id", "version", "productId"] + ) + 
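+    # Group the queried equivalent standard versions by product id so each
+    # hero version can be matched to its source version number below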
hero_eq_by_subset_id = collections.defaultdict(list) + for version in hero_eq_versions: + hero_eq_by_subset_id[version["productId"]].append(version) + + for hero_version in hero_versions: + abs_version = abs(hero_version["version"]) + subset_id = hero_version["productId"] + version_id = None + for version in hero_eq_by_subset_id.get(subset_id, []): + if version["version"] == abs_version: + version_id = version["id"] + break + conv_hero = convert_v4_version_to_v3(hero_version) + conv_hero["version_id"] = version_id + versions.append(conv_hero) + + return versions + + +def get_asset_by_id(project_name, asset_id, fields=None): + assets = get_assets( + project_name, asset_ids=[asset_id], fields=fields + ) + for asset in assets: + return asset + return None + + +def get_asset_by_name(project_name, asset_name, fields=None): + assets = get_assets( + project_name, asset_names=[asset_name], fields=fields + ) + for asset in assets: + return asset + return None + + +def get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + archived=False, + fields=None +): + if not project_name: + return + + active = True + if archived: + active = None + + con = get_server_api_connection() + fields = folder_fields_v3_to_v4(fields, con) + kwargs = dict( + folder_ids=asset_ids, + folder_names=asset_names, + parent_ids=parent_ids, + active=active, + fields=fields + ) + + if fields is None or "tasks" in fields: + folders = get_folders_with_tasks(con, project_name, **kwargs) + + else: + folders = con.get_folders(project_name, **kwargs) + + for folder in folders: + yield convert_v4_folder_to_v3(folder, project_name) + + +def get_archived_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + fields=None +): + return get_assets( + project_name, + asset_ids, + asset_names, + parent_ids, + True, + fields + ) + + +def get_asset_ids_with_subsets(project_name, asset_ids=None): + con = get_server_api_connection() + return con.get_folder_ids_with_products(project_name, asset_ids) + + +def get_subset_by_id(project_name, subset_id, fields=None): + subsets = get_subsets( + project_name, subset_ids=[subset_id], fields=fields + ) + for subset in subsets: + return subset + return None + + +def get_subset_by_name(project_name, subset_name, asset_id, fields=None): + subsets = get_subsets( + project_name, + subset_names=[subset_name], + asset_ids=[asset_id], + fields=fields + ) + for subset in subsets: + return subset + return None + + +def get_subsets( + project_name, + subset_ids=None, + subset_names=None, + asset_ids=None, + names_by_asset_ids=None, + archived=False, + fields=None +): + return _get_subsets( + project_name, + subset_ids, + subset_names, + asset_ids, + names_by_asset_ids, + archived, + fields=fields + ) + + +def get_subset_families(project_name, subset_ids=None): + con = get_server_api_connection() + return con.get_product_type_names(project_name, subset_ids) + + +def get_version_by_id(project_name, version_id, fields=None): + versions = get_versions( + project_name, + version_ids=[version_id], + fields=fields, + hero=True + ) + for version in versions: + return version + return None + + +def get_version_by_name(project_name, version, subset_id, fields=None): + versions = get_versions( + project_name, + subset_ids=[subset_id], + versions=[version], + fields=fields + ) + for version in versions: + return version + return None + + +def get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=False, + fields=None +): + return 
_get_versions( + project_name, + version_ids, + subset_ids, + versions, + hero=hero, + standard=True, + fields=fields + ) + + +def get_hero_version_by_id(project_name, version_id, fields=None): + versions = get_hero_versions( + project_name, + version_ids=[version_id], + fields=fields + ) + for version in versions: + return version + return None + + +def get_hero_version_by_subset_id( + project_name, subset_id, fields=None +): + versions = get_hero_versions( + project_name, + subset_ids=[subset_id], + fields=fields + ) + for version in versions: + return version + return None + + +def get_hero_versions( + project_name, subset_ids=None, version_ids=None, fields=None +): + return _get_versions( + project_name, + version_ids=version_ids, + subset_ids=subset_ids, + hero=True, + standard=False, + fields=fields + ) + + +def get_last_versions(project_name, subset_ids, active=None, fields=None): + if fields: + fields = set(fields) + fields.add("parent") + + versions = _get_versions( + project_name, + subset_ids=subset_ids, + latest=True, + hero=False, + active=active, + fields=fields + ) + return { + version["parent"]: version + for version in versions + } + + +def get_last_version_by_subset_id(project_name, subset_id, fields=None): + versions = _get_versions( + project_name, + subset_ids=[subset_id], + latest=True, + hero=False, + fields=fields + ) + if not versions: + return None + return versions[0] + + +def get_last_version_by_subset_name( + project_name, + subset_name, + asset_id=None, + asset_name=None, + fields=None +): + if not asset_id and not asset_name: + return None + + if not asset_id: + asset = get_asset_by_name( + project_name, asset_name, fields=["_id"] + ) + if not asset: + return None + asset_id = asset["_id"] + + subset = get_subset_by_name( + project_name, subset_name, asset_id, fields=["_id"] + ) + if not subset: + return None + return get_last_version_by_subset_id( + project_name, subset["_id"], fields=fields + ) + + +def get_output_link_versions(project_name, version_id, fields=None): + if not version_id: + return [] + + con = get_server_api_connection() + version_links = con.get_version_links( + project_name, version_id, link_direction="out") + + version_ids = { + link["entityId"] + for link in version_links + if link["entityType"] == "version" + } + if not version_ids: + return [] + + return get_versions(project_name, version_ids=version_ids, fields=fields) + + +def version_is_latest(project_name, version_id): + con = get_server_api_connection() + return con.version_is_latest(project_name, version_id) + + +def get_representation_by_id(project_name, representation_id, fields=None): + representations = get_representations( + project_name, + representation_ids=[representation_id], + fields=fields + ) + for representation in representations: + return representation + return None + + +def get_representation_by_name( + project_name, representation_name, version_id, fields=None +): + representations = get_representations( + project_name, + representation_names=[representation_name], + version_ids=[version_id], + fields=fields + ) + for representation in representations: + return representation + return None + + +def get_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + archived=False, + standard=True, + fields=None +): + if context_filters is not None: + # TODO should we add the support? 
+        # - there was ability to filter using regex
+        raise ValueError("OP v4 can't filter by representation context.")
+
+    if not archived and not standard:
+        return
+
+    if archived and not standard:
+        active = False
+    elif not archived and standard:
+        active = True
+    else:
+        active = None
+
+    con = get_server_api_connection()
+    fields = representation_fields_v3_to_v4(fields, con)
+    if fields and active is not None:
+        fields.add("active")
+
+    representations = con.get_representations(
+        project_name,
+        representation_ids,
+        representation_names,
+        version_ids,
+        names_by_version_ids,
+        active,
+        fields=fields
+    )
+    for representation in representations:
+        yield convert_v4_representation_to_v3(representation)
+
+
+def get_representation_parents(project_name, representation):
+    if not representation:
+        return None
+
+    repre_id = representation["_id"]
+    parents_by_repre_id = get_representations_parents(
+        project_name, [representation]
+    )
+    return parents_by_repre_id[repre_id]
+
+
+def get_representations_parents(project_name, representations):
+    repre_ids = {
+        repre["_id"]
+        for repre in representations
+    }
+    con = get_server_api_connection()
+    parents_by_repre_id = con.get_representations_parents(
+        project_name, repre_ids
+    )
+    folder_ids = set()
+    for parents in parents_by_repre_id.values():
+        folder_ids.add(parents[2]["id"])
+
+    # Tasks are not queried here so folders receive an empty task mapping
+    tasks_by_folder_id = {}
+
+    new_parents = {}
+    for repre_id, parents in parents_by_repre_id.items():
+        version, subset, folder, project = parents
+        folder_tasks = tasks_by_folder_id.get(folder["id"]) or {}
+        folder["tasks"] = folder_tasks
+        new_parents[repre_id] = (
+            convert_v4_version_to_v3(version),
+            convert_v4_subset_to_v3(subset),
+            convert_v4_folder_to_v3(folder, project_name),
+            project
+        )
+    return new_parents
+
+
+def get_archived_representations(
+    project_name,
+    representation_ids=None,
+    representation_names=None,
+    version_ids=None,
+    context_filters=None,
+    names_by_version_ids=None,
+    fields=None
+):
+    return get_representations(
+        project_name,
+        representation_ids=representation_ids,
+        representation_names=representation_names,
+        version_ids=version_ids,
+        context_filters=context_filters,
+        names_by_version_ids=names_by_version_ids,
+        archived=True,
+        standard=False,
+        fields=fields
+    )
+
+
+def get_thumbnail(
+    project_name, thumbnail_id, entity_type, entity_id, fields=None
+):
+    """Receive thumbnail entity data.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
+        entity_type (str): Type of entity for which the thumbnail should be
+            received.
+        entity_id (str): Id of entity for which the thumbnail should be
+            received.
+        fields (Iterable[str]): Fields that should be returned. All fields are
+            returned if 'None' is passed.
+
+    Returns:
+        None: If thumbnail with specified id was not found.
+        Dict: Thumbnail entity data which can be reduced to specified 'fields'.
+    """
+
+    if not thumbnail_id or not entity_type or not entity_id:
+        return None
+
+    if entity_type == "asset":
+        entity_type = "folder"
+
+    elif entity_type == "hero_version":
+        entity_type = "version"
+
+    return {
+        "_id": thumbnail_id,
+        "type": "thumbnail",
+        "schema": CURRENT_THUMBNAIL_SCHEMA,
+        "data": {
+            "entity_type": entity_type,
+            "entity_id": entity_id
+        }
+    }
+
+
+def get_thumbnails(project_name, thumbnail_contexts, fields=None):
+    """Get thumbnail entities.
+
+    Warning:
+        This function is not OpenPype compatible. It has no usage in the
+        codebase, so there is nothing to convert. The previous implementation
+        cannot be AYON compatible without entity types.
+    """
+
+    # Thumbnail items are dictionaries which are not hashable,
+    #   so a list is used instead of a set
+    thumbnail_items = []
+    for thumbnail_context in thumbnail_contexts:
+        thumbnail_id, entity_type, entity_id = thumbnail_context
+        thumbnail_item = get_thumbnail(
+            project_name, thumbnail_id, entity_type, entity_id
+        )
+        if thumbnail_item:
+            thumbnail_items.append(thumbnail_item)
+    return thumbnail_items
+
+
+def get_thumbnail_id_from_source(project_name, src_type, src_id):
+    """Receive thumbnail id from source entity.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        src_type (str): Type of source entity ('asset', 'version').
+        src_id (Union[str, ObjectId]): Id of source entity.
+
+    Returns:
+        ObjectId: Thumbnail id assigned to entity.
+        None: If source entity does not have any thumbnail id assigned.
+    """
+
+    if not src_type or not src_id:
+        return None
+
+    if src_type == "version":
+        version = get_version_by_id(
+            project_name, src_id, fields=["data.thumbnail_id"]
+        ) or {}
+        return version.get("data", {}).get("thumbnail_id")
+
+    if src_type == "asset":
+        asset = get_asset_by_id(
+            project_name, src_id, fields=["data.thumbnail_id"]
+        ) or {}
+        return asset.get("data", {}).get("thumbnail_id")
+
+    return None
+
+
+def get_workfile_info(
+    project_name, asset_id, task_name, filename, fields=None
+):
+    if not asset_id or not task_name or not filename:
+        return None
+
+    con = get_server_api_connection()
+    task = con.get_task_by_name(
+        project_name, asset_id, task_name, fields=["id", "name", "folderId"]
+    )
+    if not task:
+        return None
+
+    fields = workfile_info_fields_v3_to_v4(fields)
+
+    for workfile_info in con.get_workfiles_info(
+        project_name, task_ids=[task["id"]], fields=fields
+    ):
+        if workfile_info["name"] == filename:
+            return convert_v4_workfile_info_to_v3(workfile_info, task)
+    return None
diff --git a/openpype/client/server/entity_links.py b/openpype/client/server/entity_links.py
new file mode 100644
index 0000000000..d8395aabe7
--- /dev/null
+++ b/openpype/client/server/entity_links.py
@@ -0,0 +1,156 @@
+import ayon_api
+from ayon_api import get_folder_links, get_versions_links
+
+from .entities import get_assets, get_representation_by_id
+
+
+def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
+    """Extract linked asset ids from asset document.
+
+    One of asset document or asset id must be passed.
+
+    Note:
+        Asset links currently work only from asset to assets.
+
+    Args:
+        project_name (str): Project where to look for asset.
+        asset_doc (dict): Asset document from DB.
+        asset_id (str): Asset id to find its document.
+
+    Returns:
+        List[Union[ObjectId, str]]: Asset ids of input links.
+    """
+
+    output = []
+    if not asset_doc and not asset_id:
+        return output
+
+    if not asset_id:
+        asset_id = asset_doc["_id"]
+
+    links = get_folder_links(project_name, asset_id, link_direction="in")
+    return [
+        link["entityId"]
+        for link in links
+        if link["entityType"] == "folder"
+    ]
+
+
+def get_linked_assets(
+    project_name, asset_doc=None, asset_id=None, fields=None
+):
+    """Return linked assets based on passed asset document.
+
+    One of asset document or asset id must be passed.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        asset_doc (Dict[str, Any]): Asset document from database.
+        asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
+            asset document.
+        fields (Iterable[str]): Fields that should be returned.
+            All fields are returned if 'None' is passed.
+
+    Returns:
+        List[Dict[str, Any]]: Asset documents of input links for passed
+            asset doc.
+    """
+
+    link_ids = get_linked_asset_ids(project_name, asset_doc, asset_id)
+    if not link_ids:
+        return []
+    return list(get_assets(project_name, asset_ids=link_ids, fields=fields))
+
+
+def get_linked_representation_id(
+    project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None
+):
+    """Returns list of linked ids of particular type (if provided).
+
+    One of representation document or representation id must be passed.
+
+    Note:
+        Representation links currently work only from a representation,
+        through its version, back to representations.
+
+    Todos:
+        Missing depth query. It is unclear how the previous implementation
+        found more representations in depth, probably through links
+        to versions?
+
+    Args:
+        project_name (str): Name of project where to look for links.
+        repre_doc (Dict[str, Any]): Representation document.
+        repre_id (Union[ObjectId, str]): Representation id.
+        link_type (str): Type of link (e.g. 'reference', ...).
+        max_depth (int): Limit recursion level. Default: 1
+
+    Returns:
+        List[ObjectId]: Linked representation ids.
+    """
+
+    if repre_doc:
+        repre_id = repre_doc["_id"]
+
+    if not repre_id and not repre_doc:
+        return []
+
+    version_id = None
+    if repre_doc:
+        version_id = repre_doc.get("parent")
+
+    if not version_id:
+        repre_doc = get_representation_by_id(
+            project_name, repre_id, fields=["parent"]
+        )
+        if repre_doc:
+            version_id = repre_doc["parent"]
+
+    if not version_id:
+        return []
+
+    if max_depth is None or max_depth == 0:
+        max_depth = 1
+
+    link_types = None
+    if link_type:
+        link_types = [link_type]
+
+    # Store already found version ids to avoid recursion, and also to store
+    #   output -> Don't forget to remove 'version_id' at the end!!!
+ linked_version_ids = {version_id} + # Each loop of depth will reset this variable + versions_to_check = {version_id} + for _ in range(max_depth): + if not versions_to_check: + break + + links = get_versions_links( + project_name, + versions_to_check, + link_types=link_types, + link_direction="out") + + versions_to_check = set() + for link in links: + # Care only about version links + if link["entityType"] != "version": + continue + entity_id = link["entityId"] + # Skip already found linked version ids + if entity_id in linked_version_ids: + continue + linked_version_ids.add(entity_id) + versions_to_check.add(entity_id) + + linked_version_ids.remove(version_id) + if not linked_version_ids: + return [] + + representations = ayon_api.get_representations( + project_name, + version_ids=linked_version_ids, + fields=["id"]) + return [ + repre["id"] + for repre in representations + ] diff --git a/openpype/client/server/openpype_comp.py b/openpype/client/server/openpype_comp.py new file mode 100644 index 0000000000..a123fe3167 --- /dev/null +++ b/openpype/client/server/openpype_comp.py @@ -0,0 +1,156 @@ +import collections +from ayon_api.graphql import GraphQlQuery, FIELD_VALUE, fields_to_dict + +from .constants import DEFAULT_FOLDER_FIELDS + + +def folders_tasks_graphql_query(fields): + query = GraphQlQuery("FoldersQuery") + project_name_var = query.add_variable("projectName", "String!") + folder_ids_var = query.add_variable("folderIds", "[String!]") + parent_folder_ids_var = query.add_variable("parentFolderIds", "[String!]") + folder_paths_var = query.add_variable("folderPaths", "[String!]") + folder_names_var = query.add_variable("folderNames", "[String!]") + has_products_var = query.add_variable("folderHasProducts", "Boolean!") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + folders_field = project_field.add_field_with_edges("folders") + folders_field.set_filter("ids", folder_ids_var) + folders_field.set_filter("parentIds", parent_folder_ids_var) + folders_field.set_filter("names", folder_names_var) + folders_field.set_filter("paths", folder_paths_var) + folders_field.set_filter("hasProducts", has_products_var) + + fields = set(fields) + fields.discard("tasks") + tasks_field = folders_field.add_field_with_edges("tasks") + tasks_field.add_field("name") + tasks_field.add_field("taskType") + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, folders_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def get_folders_with_tasks( + con, + project_name, + folder_ids=None, + folder_paths=None, + folder_names=None, + parent_ids=None, + active=True, + fields=None +): + """Query folders with tasks from server. + + This is for v4 compatibility where tasks were stored on assets. This is + an inefficient way how folders and tasks are queried so it was added only + as compatibility function. + + Todos: + Folder name won't be unique identifier, so we should add folder path + filtering. + + Notes: + Filter 'active' don't have direct filter in GraphQl. + + Args: + con (ServerAPI): Connection to server. + project_name (str): Name of project where folders are. + folder_ids (Iterable[str]): Folder ids to filter. 
+ folder_paths (Iterable[str]): Folder paths used for filtering. + folder_names (Iterable[str]): Folder names used for filtering. + parent_ids (Iterable[str]): Ids of folder parents. Use 'None' + if folder is direct child of project. + active (Union[bool, None]): Filter active/inactive folders. Both + are returned if is set to None. + fields (Union[Iterable(str), None]): Fields to be queried + for folder. All possible folder fields are returned if 'None' + is passed. + + Returns: + List[Dict[str, Any]]: Queried folder entities. + """ + + if not project_name: + return [] + + filters = { + "projectName": project_name + } + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return [] + filters["folderIds"] = list(folder_ids) + + if folder_paths is not None: + folder_paths = set(folder_paths) + if not folder_paths: + return [] + filters["folderPaths"] = list(folder_paths) + + if folder_names is not None: + folder_names = set(folder_names) + if not folder_names: + return [] + filters["folderNames"] = list(folder_names) + + if parent_ids is not None: + parent_ids = set(parent_ids) + if not parent_ids: + return [] + if None in parent_ids: + # Replace 'None' with '"root"' which is used during GraphQl + # query for parent ids filter for folders without folder + # parent + parent_ids.remove(None) + parent_ids.add("root") + + if project_name in parent_ids: + # Replace project name with '"root"' which is used during + # GraphQl query for parent ids filter for folders without + # folder parent + parent_ids.remove(project_name) + parent_ids.add("root") + + filters["parentFolderIds"] = list(parent_ids) + + if fields: + fields = set(fields) + else: + fields = con.get_default_fields_for_type("folder") + fields |= DEFAULT_FOLDER_FIELDS + + if active is not None: + fields.add("active") + + query = folders_tasks_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + parsed_data = query.query(con) + folders = parsed_data["project"]["folders"] + if active is None: + return folders + return [ + folder + for folder in folders + if folder["active"] is active + ] diff --git a/openpype/client/server/operations.py b/openpype/client/server/operations.py new file mode 100644 index 0000000000..5b38405c34 --- /dev/null +++ b/openpype/client/server/operations.py @@ -0,0 +1,881 @@ +import copy +import json +import collections +import uuid +import datetime + +from bson.objectid import ObjectId +from ayon_api import get_server_api_connection + +from openpype.client.operations_base import ( + REMOVED_VALUE, + CreateOperation, + UpdateOperation, + DeleteOperation, + BaseOperationsSession +) + +from openpype.client.mongo.operations import ( + CURRENT_THUMBNAIL_SCHEMA, + CURRENT_REPRESENTATION_SCHEMA, + CURRENT_HERO_VERSION_SCHEMA, + CURRENT_VERSION_SCHEMA, + CURRENT_SUBSET_SCHEMA, + CURRENT_ASSET_DOC_SCHEMA, + CURRENT_PROJECT_SCHEMA, +) + +from .conversion_utils import ( + convert_create_asset_to_v4, + convert_create_task_to_v4, + convert_create_subset_to_v4, + convert_create_version_to_v4, + convert_create_hero_version_to_v4, + convert_create_representation_to_v4, + convert_create_workfile_info_to_v4, + + convert_update_folder_to_v4, + convert_update_subset_to_v4, + convert_update_version_to_v4, + convert_update_hero_version_to_v4, + convert_update_representation_to_v4, + convert_update_workfile_info_to_v4, +) +from .utils import create_entity_id + + +def _create_or_convert_to_id(entity_id=None): + if entity_id is None: + return 
create_entity_id()
+
+    if isinstance(entity_id, ObjectId):
+        raise TypeError("Type of 'ObjectId' is not supported anymore.")
+
+    # Validate that the value can be converted to uuid
+    uuid.UUID(entity_id)
+    return entity_id
+
+
+def new_project_document(
+    project_name, project_code, config, data=None, entity_id=None
+):
+    """Create skeleton data of project document.
+
+    Args:
+        project_name (str): Name of project. Used as identifier of a project.
+        project_code (str): Shorter version of the project name, without
+            spaces and special characters (in most cases). Should also be
+            considered a unique name across projects.
+        config (Dict[str, Any]): Project config consists of roots, templates,
+            applications and other project Anatomy related data.
+        data (Dict[str, Any]): Project data with information about its
+            attributes (e.g. 'fps' etc.) or integration specific keys.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of project document.
+    """
+
+    if data is None:
+        data = {}
+
+    data["code"] = project_code
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "name": project_name,
+        "type": CURRENT_PROJECT_SCHEMA,
+        "entity_data": data,
+        "config": config
+    }
+
+
+def new_asset_document(
+    name, project_id, parent_id, parents, data=None, entity_id=None
+):
+    """Create skeleton data of asset document.
+
+    Args:
+        name (str): Is considered a unique identifier of the asset
+            in the project.
+        project_id (Union[str, ObjectId]): Id of project document.
+        parent_id (Union[str, ObjectId]): Id of parent asset.
+        parents (List[str]): List of parent assets names.
+        data (Dict[str, Any]): Asset document data. Empty dictionary is used
+            if not passed. Value of 'parent_id' is used to fill 'visualParent'.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of asset document.
+    """
+
+    if data is None:
+        data = {}
+    if parent_id is not None:
+        parent_id = _create_or_convert_to_id(parent_id)
+    data["visualParent"] = parent_id
+    data["parents"] = parents
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "type": "asset",
+        "name": name,
+        # This will be ignored
+        "parent": project_id,
+        "data": data,
+        "schema": CURRENT_ASSET_DOC_SCHEMA
+    }
+
+
+def new_subset_document(name, family, asset_id, data=None, entity_id=None):
+    """Create skeleton data of subset document.
+
+    Args:
+        name (str): Is considered a unique identifier of the subset
+            under the asset.
+        family (str): Subset's family.
+        asset_id (Union[str, ObjectId]): Id of parent asset.
+        data (Dict[str, Any]): Subset document data. Empty dictionary is used
+            if not passed. Value of 'family' is used to fill 'family'.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of subset document.
+    """
+
+    if data is None:
+        data = {}
+    data["family"] = family
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "schema": CURRENT_SUBSET_SCHEMA,
+        "type": "subset",
+        "name": name,
+        "data": data,
+        "parent": _create_or_convert_to_id(asset_id)
+    }
+
+
+def new_version_doc(version, subset_id, data=None, entity_id=None):
+    """Create skeleton data of version document.
+
+    Args:
+        version (int): Is considered a unique identifier of the version
+            under the subset.
+        subset_id (Union[str, ObjectId]): Id of parent subset.
+        data (Dict[str, Any]): Version document data.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of version document.
+    """
+
+    if data is None:
+        data = {}
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "schema": CURRENT_VERSION_SCHEMA,
+        "type": "version",
+        "name": int(version),
+        "parent": _create_or_convert_to_id(subset_id),
+        "data": data
+    }
+
+
+def new_hero_version_doc(subset_id, data, version=None, entity_id=None):
+    """Create skeleton data of hero version document.
+
+    Args:
+        subset_id (Union[str, ObjectId]): Id of parent subset.
+        data (Dict[str, Any]): Version document data.
+        version (int): Version of source version.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of hero version document.
+    """
+
+    if version is None:
+        version = -1
+    elif version > 0:
+        version = -version
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "schema": CURRENT_HERO_VERSION_SCHEMA,
+        "type": "hero_version",
+        "version": version,
+        "parent": _create_or_convert_to_id(subset_id),
+        "data": data
+    }
+
+
+def new_representation_doc(
+    name, version_id, context, data=None, entity_id=None
+):
+    """Create skeleton data of representation document.
+
+    Args:
+        name (str): Representation name considered as unique identifier
+            of representation under version.
+        version_id (Union[str, ObjectId]): Id of parent version.
+        context (Dict[str, Any]): Representation context used to fill
+            templates or to query the representation.
+        data (Dict[str, Any]): Representation document data.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of representation document.
+    """
+
+    if data is None:
+        data = {}
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "schema": CURRENT_REPRESENTATION_SCHEMA,
+        "type": "representation",
+        "parent": _create_or_convert_to_id(version_id),
+        "name": name,
+        "data": data,
+
+        # Imprint shortcut to context for performance reasons.
+        "context": context
+    }
+
+
+def new_thumbnail_doc(data=None, entity_id=None):
+    """Create skeleton data of thumbnail document.
+
+    Args:
+        data (Dict[str, Any]): Thumbnail document data.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of thumbnail document.
+    """
+
+    if data is None:
+        data = {}
+
+    return {
+        "_id": _create_or_convert_to_id(entity_id),
+        "type": "thumbnail",
+        "schema": CURRENT_THUMBNAIL_SCHEMA,
+        "data": data
+    }
+
+
+def new_workfile_info_doc(
+    filename, asset_id, task_name, files, data=None, entity_id=None
+):
+    """Create skeleton data of workfile info document.
+
+    Workfile document is at this moment used primarily for artist notes.
+
+    Args:
+        filename (str): Filename of workfile.
+        asset_id (Union[str, ObjectId]): Id of asset under which workfile
+            lives.
+        task_name (str): Task under which was workfile created.
+        files (List[str]): List of rootless filepaths related to workfile.
+        data (Dict[str, Any]): Additional metadata.
+        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of workfile info document.
+ """ + + if not data: + data = {} + + return { + "_id": _create_or_convert_to_id(entity_id), + "type": "workfile", + "parent": _create_or_convert_to_id(asset_id), + "task_name": task_name, + "filename": filename, + "data": data, + "files": files + } + + +def _prepare_update_data(old_doc, new_doc, replace): + changes = {} + for key, value in new_doc.items(): + if key not in old_doc or value != old_doc[key]: + changes[key] = value + + if replace: + for key in old_doc.keys(): + if key not in new_doc: + changes[key] = REMOVED_VALUE + return changes + + +def prepare_subset_update_data(old_doc, new_doc, replace=True): + """Compare two subset documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_version_update_data(old_doc, new_doc, replace=True): + """Compare two version documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_hero_version_update_data(old_doc, new_doc, replace=True): + """Compare two hero version documents and prepare update data. + + Based on compared values will create update data for 'UpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + changes = _prepare_update_data(old_doc, new_doc, replace) + changes.pop("version_id", None) + return changes + + +def prepare_representation_update_data(old_doc, new_doc, replace=True): + """Compare two representation documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + changes = _prepare_update_data(old_doc, new_doc, replace) + context = changes.get("data", {}).get("context") + # Make sure that both 'family' and 'subset' are in changes if + # one of them changed (they'll both become 'product'). + if ( + context + and ("family" in context or "subset" in context) + ): + context["family"] = new_doc["data"]["context"]["family"] + context["subset"] = new_doc["data"]["context"]["subset"] + + return changes + + +def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): + """Compare two workfile info documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +class FailedOperations(Exception): + pass + + +def entity_data_json_default(value): + if isinstance(value, datetime.datetime): + return int(value.timestamp()) + + raise TypeError( + "Object of type {} is not JSON serializable".format(str(type(value))) + ) + + +def failed_json_default(value): + return "< Failed value {} > {}".format(type(value), str(value)) + + +class ServerCreateOperation(CreateOperation): + """Operation to create an entity. + + Args: + project_name (str): On which project operation will happen. 
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        data (Dict[str, Any]): Data of entity that will be created.
+    """
+
+    def __init__(self, project_name, entity_type, data, session):
+        self._session = session
+
+        if not data:
+            data = {}
+        data = copy.deepcopy(data)
+        if entity_type == "project":
+            raise ValueError("Project cannot be created using operations")
+
+        tasks = None
+        if entity_type == "asset":
+            # TODO handle tasks
+            entity_type = "folder"
+            if "data" in data:
+                tasks = data["data"].get("tasks")
+
+            project = self._session.get_project(project_name)
+            new_data = convert_create_asset_to_v4(data, project, self.con)
+
+        elif entity_type == "task":
+            project = self._session.get_project(project_name)
+            new_data = convert_create_task_to_v4(data, project, self.con)
+
+        elif entity_type == "subset":
+            new_data = convert_create_subset_to_v4(data, self.con)
+            entity_type = "product"
+
+        elif entity_type == "version":
+            new_data = convert_create_version_to_v4(data, self.con)
+
+        elif entity_type == "hero_version":
+            new_data = convert_create_hero_version_to_v4(
+                data, project_name, self.con
+            )
+            entity_type = "version"
+
+        elif entity_type in ("representation", "archived_representation"):
+            new_data = convert_create_representation_to_v4(data, self.con)
+            entity_type = "representation"
+
+        elif entity_type == "workfile":
+            new_data = convert_create_workfile_info_to_v4(
+                data, project_name, self.con
+            )
+
+        else:
+            raise ValueError(
+                "Unhandled entity type \"{}\"".format(entity_type)
+            )
+
+        # Simple check if data can be dumped into json
+        # - should raise error on 'ObjectId' object
+        try:
+            new_data = json.loads(
+                json.dumps(new_data, default=entity_data_json_default)
+            )
+
+        except Exception:
+            raise ValueError("Couldn't json parse body: {}".format(
+                json.dumps(new_data, default=failed_json_default)
+            ))
+
+        super(ServerCreateOperation, self).__init__(
+            project_name, entity_type, new_data
+        )
+
+        if "id" not in self._data:
+            self._data["id"] = create_entity_id()
+
+        if tasks:
+            copied_tasks = copy.deepcopy(tasks)
+            for task_name, task in copied_tasks.items():
+                task["name"] = task_name
+                task["folderId"] = self._data["id"]
+                self.session.create_entity(
+                    project_name, "task", task, nested_id=self.id
+                )
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    @property
+    def entity_id(self):
+        return self._data["id"]
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": "create",
+            "entityType": self.entity_type,
+            "entityId": self.entity_id,
+            "data": self._data
+        }
+
+
+class ServerUpdateOperation(UpdateOperation):
+    """Operation to update an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        entity_id (Union[str, ObjectId]): Identifier of an entity.
+        update_data (Dict[str, Any]): Key -> value changes that will be set in
+            database. If value is set to 'REMOVED_VALUE' the key will be
+            removed. Only first level of dictionary is checked (on purpose).
+ """ + + def __init__( + self, project_name, entity_type, entity_id, update_data, session + ): + self._session = session + + update_data = copy.deepcopy(update_data) + if entity_type == "project": + raise ValueError("Project cannot be created using operations") + + if entity_type in ("asset", "archived_asset"): + new_update_data = convert_update_folder_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "folder" + + elif entity_type == "subset": + new_update_data = convert_update_subset_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "product" + + elif entity_type == "version": + new_update_data = convert_update_version_to_v4( + project_name, entity_id, update_data, self.con + ) + + elif entity_type == "hero_version": + new_update_data = convert_update_hero_version_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "version" + + elif entity_type in ("representation", "archived_representation"): + new_update_data = convert_update_representation_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "representation" + + elif entity_type == "workfile": + new_update_data = convert_update_workfile_info_to_v4( + project_name, entity_id, update_data, self.con + ) + + else: + raise ValueError( + "Unhandled entity type \"{}\"".format(entity_type) + ) + + try: + new_update_data = json.loads( + json.dumps(new_update_data, default=entity_data_json_default) + ) + + except: + raise ValueError("Couldn't json parse body: {}".format( + json.dumps(new_update_data, default=failed_json_default) + )) + + super(ServerUpdateOperation, self).__init__( + project_name, entity_type, entity_id, new_update_data + ) + + @property + def con(self): + return self.session.con + + @property + def session(self): + return self._session + + def to_server_operation(self): + if not self._update_data: + return None + + update_data = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + value = None + update_data[key] = value + + return { + "id": self.id, + "type": "update", + "entityType": self.entity_type, + "entityId": self.entity_id, + "data": update_data + } + + +class ServerDeleteOperation(DeleteOperation): + """Operation to delete an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Entity id that will be removed. 
+    """
+
+    def __init__(self, project_name, entity_type, entity_id, session):
+        self._session = session
+
+        if entity_type == "asset":
+            entity_type = "folder"
+
+        elif entity_type == "hero_version":
+            entity_type = "version"
+
+        elif entity_type == "subset":
+            entity_type = "product"
+
+        super(ServerDeleteOperation, self).__init__(
+            project_name, entity_type, entity_id
+        )
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": self.operation_name,
+            "entityId": self.entity_id,
+            "entityType": self.entity_type,
+        }
+
+
+class OperationsSession(BaseOperationsSession):
+    def __init__(self, con=None, *args, **kwargs):
+        super(OperationsSession, self).__init__(*args, **kwargs)
+        if con is None:
+            con = get_server_api_connection()
+        self._con = con
+        self._project_cache = {}
+        self._nested_operations = collections.defaultdict(list)
+
+    @property
+    def con(self):
+        return self._con
+
+    def get_project(self, project_name):
+        if project_name not in self._project_cache:
+            self._project_cache[project_name] = self.con.get_project(
+                project_name)
+        return copy.deepcopy(self._project_cache[project_name])
+
+    def commit(self):
+        """Commit session operations."""
+
+        operations, self._operations = self._operations, []
+        if not operations:
+            return
+
+        operations_by_project = collections.defaultdict(list)
+        for operation in operations:
+            operations_by_project[operation.project_name].append(operation)
+
+        body_by_id = {}
+        results = []
+        for project_name, operations in operations_by_project.items():
+            operations_body = []
+            for operation in operations:
+                body = operation.to_server_operation()
+                if body is not None:
+                    try:
+                        json.dumps(body)
+                    except Exception:
+                        raise ValueError("Couldn't json parse body: {}".format(
+                            json.dumps(
+                                body, indent=4, default=failed_json_default
+                            )
+                        ))
+
+                    body_by_id[operation.id] = body
+                    operations_body.append(body)
+
+            if operations_body:
+                result = self._con.post(
+                    "projects/{}/operations".format(project_name),
+                    operations=operations_body,
+                    canFail=False
+                )
+                results.append(result.data)
+
+        for result in results:
+            if result.get("success"):
+                continue
+
+            if "operations" not in result:
+                raise FailedOperations(
+                    "Operation failed. Content: {}".format(str(result))
+                )
+
+            for op_result in result["operations"]:
+                if not op_result["success"]:
+                    operation_id = op_result["id"]
+                    raise FailedOperations((
+                        "Operation \"{}\" failed with data:\n{}\nError: {}."
+                    ).format(
+                        operation_id,
+                        json.dumps(body_by_id[operation_id], indent=4),
+                        op_result.get("error", "unknown"),
+                    ))
+
+    def create_entity(self, project_name, entity_type, data, nested_id=None):
+        """Fast access to 'ServerCreateOperation'.
+
+        Args:
+            project_name (str): On which project the creation happens.
+            entity_type (str): Which entity type will be created.
+            data (Dict[str, Any]): Entity data.
+            nested_id (str): Id of the operation that triggered this
+                operation. Operations can trigger suboperations, but they
+                must be added to the operations list after their parent
+                is added.
+
+        Returns:
+            ServerCreateOperation: Object of create operation.
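+
+        Example:
+            A minimal illustrative sketch - assumes a configured server
+            connection; the project name and entity data are hypothetical.
+
+            >>> session = OperationsSession()
+            >>> session.create_entity(
+            ...     "my_project", "asset", {"name": "hero_character"}
+            ... )  # doctest: +SKIP
+            >>> session.commit()  # doctest: +SKIP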
+ """ + + operation = ServerCreateOperation( + project_name, entity_type, data, self + ) + + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + + return operation + + def update_entity( + self, project_name, entity_type, entity_id, update_data, nested_id=None + ): + """Fast access to 'ServerUpdateOperation'. + + Returns: + ServerUpdateOperation: Object of update operation. + """ + + operation = ServerUpdateOperation( + project_name, entity_type, entity_id, update_data, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation + + def delete_entity( + self, project_name, entity_type, entity_id, nested_id=None + ): + """Fast access to 'ServerDeleteOperation'. + + Returns: + ServerDeleteOperation: Object of delete operation. + """ + + operation = ServerDeleteOperation( + project_name, entity_type, entity_id, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation + + +def create_project( + project_name, + project_code, + library_project=False, + preset_name=None, + con=None +): + """Create project using OpenPype settings. + + This project creation function is not validating project document on + creation. It is because project document is created blindly with only + minimum required information about project which is it's name, code, type + and schema. + + Entered project name must be unique and project must not exist yet. + + Note: + This function is here to be OP v4 ready but in v3 has more logic + to do. That's why inner imports are in the body. + + Args: + project_name (str): New project name. Should be unique. + project_code (str): Project's code should be unique too. + library_project (bool): Project is library project. + preset_name (str): Name of anatomy preset. Default is used if not + passed. + con (ServerAPI): Connection to server with logged user. + + Raises: + ValueError: When project name already exists in MongoDB. + + Returns: + dict: Created project document. + """ + + if con is None: + con = get_server_api_connection() + + return con.create_project( + project_name, + project_code, + library_project, + preset_name + ) + + +def delete_project(project_name, con=None): + if con is None: + con = get_server_api_connection() + + return con.delete_project(project_name) + + +def create_thumbnail(project_name, src_filepath, thumbnail_id=None, con=None): + if con is None: + con = get_server_api_connection() + return con.create_thumbnail(project_name, src_filepath, thumbnail_id) diff --git a/openpype/client/server/thumbnails.py b/openpype/client/server/thumbnails.py new file mode 100644 index 0000000000..dc649b9651 --- /dev/null +++ b/openpype/client/server/thumbnails.py @@ -0,0 +1,229 @@ +"""Cache of thumbnails downloaded from AYON server. + +Thumbnails are cached to appdirs to predefined directory. + +This should be moved to thumbnails logic in pipeline but because it would +overflow OpenPype logic it's here for now. 
+"""
+
+import os
+import time
+import collections
+
+import appdirs
+
+FileInfo = collections.namedtuple(
+    "FileInfo",
+    ("path", "size", "modification_time")
+)
+
+
+class AYONThumbnailCache:
+    """Cache of thumbnails on local storage.
+
+    Thumbnails are cached to a predefined directory under appdirs. Each
+    project has its own subfolder with thumbnails, because each project has
+    its own thumbnail id validation; file names are thumbnail ids with a
+    matching extension. Extensions are predefined (.png and .jpeg).
+
+    The cache has a cleanup mechanism which is triggered on initialization
+    by default.
+
+    The cleanup has 2 levels:
+    1. soft cleanup which removes all files that are older than 'days_alive'
+    2. max size cleanup which removes files until the thumbnails folder
+        contains less than 'max_filesize'
+        - this is time consuming so it's not triggered automatically
+
+    Args:
+        cleanup (bool): Trigger soft cleanup (cleanup expired thumbnails).
+    """
+
+    # Lifetime of thumbnails (in days)
+    # - default 3 days
+    days_alive = 3
+    # Max size of thumbnail directory (in bytes)
+    # - default 2 GiB
+    max_filesize = 2 * 1024 * 1024 * 1024
+
+    def __init__(self, cleanup=True):
+        self._thumbnails_dir = None
+        self._days_alive_secs = self.days_alive * 24 * 60 * 60
+        if cleanup:
+            self.cleanup()
+
+    def get_thumbnails_dir(self):
+        """Root directory where thumbnails are stored.
+
+        Returns:
+            str: Path to thumbnails root.
+        """
+
+        if self._thumbnails_dir is None:
+            # TODO use generic function
+            directory = appdirs.user_data_dir("AYON", "Ynput")
+            self._thumbnails_dir = os.path.join(directory, "thumbnails")
+        return self._thumbnails_dir
+
+    thumbnails_dir = property(get_thumbnails_dir)
+
+    def get_thumbnails_dir_file_info(self):
+        """Get information about all files in thumbnails directory.
+
+        Returns:
+            List[FileInfo]: List of file information about all files.
+        """
+
+        thumbnails_dir = self.thumbnails_dir
+        files_info = []
+        if not os.path.exists(thumbnails_dir):
+            return files_info
+
+        for root, _, filenames in os.walk(thumbnails_dir):
+            for filename in filenames:
+                path = os.path.join(root, filename)
+                files_info.append(FileInfo(
+                    path, os.path.getsize(path), os.path.getmtime(path)
+                ))
+        return files_info
+
+    def get_thumbnails_dir_size(self, files_info=None):
+        """Get full size of thumbnail directory.
+
+        Args:
+            files_info (List[FileInfo]): Prepared file information about
+                files in thumbnail directory.
+
+        Returns:
+            int: File size of all files in thumbnail directory.
+        """
+
+        if files_info is None:
+            files_info = self.get_thumbnails_dir_file_info()
+
+        if not files_info:
+            return 0
+
+        return sum(
+            file_info.size
+            for file_info in files_info
+        )
+
+    def cleanup(self, check_max_size=False):
+        """Cleanup thumbnails directory.
+
+        Args:
+            check_max_size (bool): Also cleanup files to match max size of
+                thumbnails directory.
+        """
+
+        thumbnails_dir = self.get_thumbnails_dir()
+        # Skip if thumbnails dir does not exist yet
+        if not os.path.exists(thumbnails_dir):
+            return
+
+        self._soft_cleanup(thumbnails_dir)
+        if check_max_size:
+            self._max_size_cleanup(thumbnails_dir)
+
+    def _soft_cleanup(self, thumbnails_dir):
+        current_time = time.time()
+        for root, _, filenames in os.walk(thumbnails_dir):
+            for filename in filenames:
+                path = os.path.join(root, filename)
+                modification_time = os.path.getmtime(path)
+                if current_time - modification_time > self._days_alive_secs:
+                    os.remove(path)
+
+    def _max_size_cleanup(self, thumbnails_dir):
+        files_info = self.get_thumbnails_dir_file_info()
+        size = self.get_thumbnails_dir_size(files_info)
+        if size < self.max_filesize:
+            return
+
+        sorted_file_info = collections.deque(
+            sorted(files_info, key=lambda item: item.modification_time)
+        )
+        diff = size - self.max_filesize
+        while diff > 0:
+            if not sorted_file_info:
+                break
+
+            file_info = sorted_file_info.popleft()
+            diff -= file_info.size
+            os.remove(file_info.path)
+
+    def get_thumbnail_filepath(self, project_name, thumbnail_id):
+        """Get thumbnail by thumbnail id.
+
+        Args:
+            project_name (str): Name of project.
+            thumbnail_id (str): Thumbnail id.
+
+        Returns:
+            Union[str, None]: Path to thumbnail image or None if thumbnail
+                is not cached yet.
+        """
+
+        if not thumbnail_id:
+            return None
+
+        for ext in (
+            ".png",
+            ".jpeg",
+        ):
+            filepath = os.path.join(
+                self.thumbnails_dir, project_name, thumbnail_id + ext
+            )
+            if os.path.exists(filepath):
+                return filepath
+        return None
+
+    def get_project_dir(self, project_name):
+        """Path to root directory for specific project.
+
+        Args:
+            project_name (str): Name of project for which root directory path
+                should be returned.
+
+        Returns:
+            str: Path to root of project's thumbnails.
+        """
+
+        return os.path.join(self.thumbnails_dir, project_name)
+
+    def make_sure_project_dir_exists(self, project_name):
+        project_dir = self.get_project_dir(project_name)
+        if not os.path.exists(project_dir):
+            os.makedirs(project_dir)
+        return project_dir
+
+    def store_thumbnail(self, project_name, thumbnail_id, content, mime_type):
+        """Store thumbnail to cache folder.
+
+        Args:
+            project_name (str): Project where the thumbnail belongs to.
+            thumbnail_id (str): Id of thumbnail.
+            content (bytes): Byte content of thumbnail file.
+            mime_type (str): Mime type of the content ('image/png'
+                or 'image/jpeg').
+
+        Returns:
+            str: Path to cached thumbnail image file.
+        """
+
+        if mime_type == "image/png":
+            ext = ".png"
+        elif mime_type == "image/jpeg":
+            ext = ".jpeg"
+        else:
+            raise ValueError(
+                "Unknown mime type for thumbnail \"{}\"".format(mime_type))
+
+        project_dir = self.make_sure_project_dir_exists(project_name)
+        thumbnail_path = os.path.join(project_dir, thumbnail_id + ext)
+        with open(thumbnail_path, "wb") as stream:
+            stream.write(content)
+
+        current_time = time.time()
+        os.utime(thumbnail_path, (current_time, current_time))
+
+        return thumbnail_path
diff --git a/openpype/client/server/utils.py b/openpype/client/server/utils.py
new file mode 100644
index 0000000000..ed128cfad9
--- /dev/null
+++ b/openpype/client/server/utils.py
@@ -0,0 +1,109 @@
+import uuid
+
+from openpype.client.operations_base import REMOVED_VALUE
+
+
+def create_entity_id():
+    return uuid.uuid1().hex
+
+
+def prepare_attribute_changes(old_entity, new_entity, replace=False):
+    """Prepare changes of attributes on entities.
+
+    Compare 'attrib' of old and new entity data to prepare only changed
+    values that should be sent to server for update.
+ + Example: + >>> # Limited entity data to 'attrib' + >>> old_entity = { + ... "attrib": {"attr_1": 1, "attr_2": "MyString", "attr_3": True} + ... } + >>> new_entity = { + ... "attrib": {"attr_1": 2, "attr_3": True, "attr_4": 3} + ... } + >>> # Changes if replacement should not happen + >>> expected_changes = { + ... "attr_1": 2, + ... "attr_4": 3 + ... } + >>> changes = prepare_attribute_changes(old_entity, new_entity) + >>> changes == expected_changes + True + + >>> # Changes if replacement should happen + >>> expected_changes_replace = { + ... "attr_1": 2, + ... "attr_2": REMOVED_VALUE, + ... "attr_4": 3 + ... } + >>> changes_replace = prepare_attribute_changes( + ... old_entity, new_entity, True) + >>> changes_replace == expected_changes_replace + True + + Args: + old_entity (dict[str, Any]): Data of entity queried from server. + new_entity (dict[str, Any]): Entity data with applied changes. + replace (bool): New entity should fully replace all old entity values. + + Returns: + Dict[str, Any]: Values from new entity only if value has changed. + """ + + attrib_changes = {} + new_attrib = new_entity.get("attrib") + old_attrib = old_entity.get("attrib") + if new_attrib is None: + if not replace: + return attrib_changes + new_attrib = {} + + if old_attrib is None: + return new_attrib + + for attr, new_attr_value in new_attrib.items(): + old_attr_value = old_attrib.get(attr) + if old_attr_value != new_attr_value: + attrib_changes[attr] = new_attr_value + + if replace: + for attr in old_attrib: + if attr not in new_attrib: + attrib_changes[attr] = REMOVED_VALUE + + return attrib_changes + + +def prepare_entity_changes(old_entity, new_entity, replace=False): + """Prepare changes of AYON entities. + + Compare old and new entity to filter values from new data that changed. + + Args: + old_entity (dict[str, Any]): Data of entity queried from server. + new_entity (dict[str, Any]): Entity data with applied changes. + replace (bool): All attributes should be replaced by new values. So + all attribute values that are not on new entity will be removed. + + Returns: + Dict[str, Any]: Only values from new entity that changed. 
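+
+    Example:
+        >>> # Minimal illustrative sketch of the expected output
+        >>> old_entity = {"name": "main", "attrib": {"fps": 24}}
+        >>> new_entity = {"name": "hero", "attrib": {"fps": 25}}
+        >>> changes = prepare_entity_changes(old_entity, new_entity)
+        >>> changes == {"name": "hero", "attrib": {"fps": 25}}
+        True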
+ """ + + changes = {} + for key, new_value in new_entity.items(): + if key == "attrib": + continue + + old_value = old_entity.get(key) + if old_value != new_value: + changes[key] = new_value + + if replace: + for key in old_entity: + if key not in new_entity: + changes[key] = REMOVED_VALUE + + attr_changes = prepare_attribute_changes(old_entity, new_entity, replace) + if attr_changes: + changes["attrib"] = attr_changes + return changes diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index c54acbc203..1418bc210b 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddLastWorkfileToLaunchArgs(PreLaunchHook): @@ -13,8 +13,8 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): # Execute after workfile template copy order = 10 - app_groups = [ - "3dsmax", + app_groups = { + "3dsmax", "adsk_3dsmax", "maya", "nuke", "nukex", @@ -26,8 +26,9 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): "photoshop", "tvpaint", "substancepainter", - "aftereffects" - ] + "aftereffects", + } + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hooks/pre_copy_template_workfile.py b/openpype/hooks/pre_copy_template_workfile.py index 70c549919f..2203ff4396 100644 --- a/openpype/hooks/pre_copy_template_workfile.py +++ b/openpype/hooks/pre_copy_template_workfile.py @@ -1,7 +1,7 @@ import os import shutil -from openpype.lib import PreLaunchHook from openpype.settings import get_project_settings +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import ( get_custom_workfile_template, get_custom_workfile_template_by_string_context @@ -19,7 +19,8 @@ class CopyTemplateWorkfile(PreLaunchHook): # Before `AddLastWorkfileToLaunchArgs` order = 0 - app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"] + app_groups = {"blender", "photoshop", "tvpaint", "aftereffects"} + launch_types = {LaunchTypes.local} def execute(self): """Check if can copy template for context and do it if possible. 
diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py index 8856281120..4c9d08b375 100644 --- a/openpype/hooks/pre_create_extra_workdir_folders.py +++ b/openpype/hooks/pre_create_extra_workdir_folders.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import create_workdir_extra_folders @@ -14,6 +14,7 @@ class CreateWorkdirExtraFolders(PreLaunchHook): # Execute after workfile template copy order = 15 + launch_types = {LaunchTypes.local} def execute(self): if not self.application.is_host: diff --git a/openpype/hooks/pre_global_host_data.py b/openpype/hooks/pre_global_host_data.py index 8a178915fb..813df24af0 100644 --- a/openpype/hooks/pre_global_host_data.py +++ b/openpype/hooks/pre_global_host_data.py @@ -1,15 +1,16 @@ from openpype.client import get_project, get_asset_by_name -from openpype.lib import ( +from openpype.lib.applications import ( PreLaunchHook, EnvironmentPrepData, prepare_app_environments, prepare_context_environments ) -from openpype.pipeline import AvalonMongoDB, Anatomy +from openpype.pipeline import Anatomy class GlobalHostDataHook(PreLaunchHook): order = -100 + launch_types = set() def execute(self): """Prepare global objects to `data` that will be used for sure.""" @@ -26,7 +27,6 @@ class GlobalHostDataHook(PreLaunchHook): "app": app, - "dbcon": self.data["dbcon"], "project_doc": self.data["project_doc"], "asset_doc": self.data["asset_doc"], @@ -62,13 +62,6 @@ class GlobalHostDataHook(PreLaunchHook): # Anatomy self.data["anatomy"] = Anatomy(project_name) - # Mongo connection - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name - dbcon.install() - - self.data["dbcon"] = dbcon - # Project document project_doc = get_project(project_name) self.data["project_doc"] = project_doc diff --git a/openpype/hooks/pre_mac_launch.py b/openpype/hooks/pre_mac_launch.py index f85557a4f0..402e9a5517 100644 --- a/openpype/hooks/pre_mac_launch.py +++ b/openpype/hooks/pre_mac_launch.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class LaunchWithTerminal(PreLaunchHook): @@ -12,7 +12,8 @@ class LaunchWithTerminal(PreLaunchHook): """ order = 1000 - platforms = ["darwin"] + platforms = {"darwin"} + launch_types = {LaunchTypes.local} def execute(self): executable = str(self.launch_context.executable) diff --git a/openpype/hooks/pre_foundry_apps.py b/openpype/hooks/pre_new_console_apps.py similarity index 71% rename from openpype/hooks/pre_foundry_apps.py rename to openpype/hooks/pre_new_console_apps.py index 21ec8e7881..9727b4fb78 100644 --- a/openpype/hooks/pre_foundry_apps.py +++ b/openpype/hooks/pre_new_console_apps.py @@ -1,8 +1,8 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes -class LaunchFoundryAppsWindows(PreLaunchHook): +class LaunchNewConsoleApps(PreLaunchHook): """Foundry applications have specific way how to launch them. 
Nuke is executed "like" python process so it is required to pass @@ -13,12 +13,15 @@ class LaunchFoundryAppsWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["nuke", "nukeassist", "nukex", "hiero", "nukestudio"] - platforms = ["windows"] + app_groups = { + "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "mayapy" + } + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE - # - on Windows nuke will create new window using its console + # - on Windows some apps will create new window using its console # Set `stdout` and `stderr` to None so new created console does not # have redirected output to DEVNULL in build self.launch_context.kwargs.update({ diff --git a/openpype/hooks/pre_non_python_host_launch.py b/openpype/hooks/pre_non_python_host_launch.py index 043cb3c7f6..d9e912c826 100644 --- a/openpype/hooks/pre_non_python_host_launch.py +++ b/openpype/hooks/pre_non_python_host_launch.py @@ -1,10 +1,11 @@ import os -from openpype.lib import ( +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import ( + get_non_python_host_kwargs, PreLaunchHook, - get_openpype_execute_args + LaunchTypes, ) -from openpype.lib.applications import get_non_python_host_kwargs from openpype import PACKAGE_DIR as OPENPYPE_DIR @@ -16,9 +17,10 @@ class NonPythonHostHook(PreLaunchHook): python script which launch the host. For these cases it is necessary to prepend python (or openpype) executable and script path before application's. """ - app_groups = ["harmony", "photoshop", "aftereffects"] + app_groups = {"harmony", "photoshop", "aftereffects"} order = 20 + launch_types = {LaunchTypes.local} def execute(self): # Pop executable @@ -54,4 +56,3 @@ class NonPythonHostHook(PreLaunchHook): self.launch_context.kwargs = \ get_non_python_host_kwargs(self.launch_context.kwargs) - diff --git a/openpype/hooks/pre_ocio_hook.py b/openpype/hooks/pre_ocio_hook.py index 8f462665bc..e695cf3fe8 100644 --- a/openpype/hooks/pre_ocio_hook.py +++ b/openpype/hooks/pre_ocio_hook.py @@ -1,8 +1,6 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook -from openpype.pipeline.colorspace import ( - get_imageio_config -) +from openpype.pipeline.colorspace import get_imageio_config from openpype.pipeline.template_data import get_template_data_with_names @@ -10,18 +8,19 @@ class OCIOEnvHook(PreLaunchHook): """Set OCIO environment variable for hosts that use OpenColorIO.""" order = 0 - hosts = [ + hosts = { "substancepainter", "fusion", "blender", "aftereffects", - "max", + "3dsmax", "houdini", "maya", "nuke", "hiero", - "resolve" - ] + "resolve", + } + launch_types = set() def execute(self): """Hook entry method.""" @@ -39,12 +38,16 @@ class OCIOEnvHook(PreLaunchHook): host_name=self.host_name, project_settings=self.data["project_settings"], anatomy_data=template_data, - anatomy=self.data["anatomy"] + anatomy=self.data["anatomy"], + env=self.launch_context.env, ) if config_data: ocio_path = config_data["path"] + if self.host_name in ["nuke", "hiero"]: + ocio_path = ocio_path.replace("\\", "/") + self.log.info( f"Setting OCIO environment to config path: {ocio_path}") diff --git a/openpype/host/dirmap.py b/openpype/host/dirmap.py index 42bf80ecec..96a98e808e 100644 --- a/openpype/host/dirmap.py +++ b/openpype/host/dirmap.py @@ -32,19 +32,26 @@ class HostDirmap(object): """ def __init__( - self, host_name, project_name, 
project_settings=None, sync_module=None + self, + host_name, + project_name, + project_settings=None, + sync_module=None ): self.host_name = host_name self.project_name = project_name self._project_settings = project_settings - self._sync_module = sync_module # to limit reinit of Modules + self._sync_module = sync_module + # to limit reinit of Modules + self._sync_module_discovered = sync_module is not None self._log = None @property def sync_module(self): - if self._sync_module is None: + if not self._sync_module_discovered: + self._sync_module_discovered = True manager = ModulesManager() - self._sync_module = manager["sync_server"] + self._sync_module = manager.get("sync_server") return self._sync_module @property @@ -149,23 +156,27 @@ class HostDirmap(object): Returns: dict : { "source-path": [XXX], "destination-path": [YYYY]} """ - project_name = os.getenv("AVALON_PROJECT") + project_name = self.project_name + sync_module = self.sync_module mapping = {} - if (not self.sync_module.enabled or - project_name not in self.sync_module.get_enabled_projects()): + if ( + sync_module is None + or not sync_module.enabled + or project_name not in sync_module.get_enabled_projects() + ): return mapping - active_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_active_site(project_name)) - remote_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_remote_site(project_name)) + active_site = sync_module.get_local_normalized_site( + sync_module.get_active_site(project_name)) + remote_site = sync_module.get_local_normalized_site( + sync_module.get_remote_site(project_name)) self.log.debug( "active {} - remote {}".format(active_site, remote_site) ) if active_site == "local" and active_site != remote_site: - sync_settings = self.sync_module.get_sync_project_setting( + sync_settings = sync_module.get_sync_project_setting( project_name, exclude_locals=False, cached=False) @@ -179,7 +190,7 @@ class HostDirmap(object): self.log.debug("remote overrides {}".format(remote_overrides)) current_platform = platform.system().lower() - remote_provider = self.sync_module.get_provider_for_site( + remote_provider = sync_module.get_provider_for_site( project_name, remote_site ) # dirmap has sense only with regular disk provider, in the workfile diff --git a/openpype/hosts/aftereffects/api/extension.zxp b/openpype/hosts/aftereffects/api/extension.zxp index 358e9740d3..933dc7dc6c 100644 Binary files a/openpype/hosts/aftereffects/api/extension.zxp and b/openpype/hosts/aftereffects/api/extension.zxp differ diff --git a/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml b/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml index 0057758320..7329a9e723 100644 --- a/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml +++ b/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml @@ -1,5 +1,5 @@ - @@ -10,22 +10,22 @@ - + - + - - + + - + - - + + - + @@ -63,7 +63,7 @@ 550 400 --> - + ./icons/iconNormal.png @@ -71,9 +71,9 @@ ./icons/iconDisabled.png ./icons/iconDarkNormal.png ./icons/iconDarkRollover.png - + - \ No newline at end of file + diff --git a/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx b/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx index bc443930df..c00844e637 100644 --- a/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx +++ b/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx @@ -215,6 +215,8 @@ function _getItem(item, comps, folders, footages){ * Refactor */ var item_type = ''; + var path = ''; 
+ var containing_comps = []; if (item instanceof FolderItem){ item_type = 'folder'; if (!folders){ @@ -222,10 +224,18 @@ function _getItem(item, comps, folders, footages){ } } if (item instanceof FootageItem){ - item_type = 'footage'; if (!footages){ return "{}"; } + item_type = 'footage'; + if (item.file){ + path = item.file.fsName; + } + if (item.usedIn){ + for (j = 0; j < item.usedIn.length; ++j){ + containing_comps.push(item.usedIn[j].id); + } + } } if (item instanceof CompItem){ item_type = 'comp'; @@ -236,7 +246,9 @@ function _getItem(item, comps, folders, footages){ var item = {"name": item.name, "id": item.id, - "type": item_type}; + "type": item_type, + "path": path, + "containing_comps": containing_comps}; return JSON.stringify(item); } diff --git a/openpype/hosts/aftereffects/api/launch_logic.py b/openpype/hosts/aftereffects/api/launch_logic.py index ea71122042..e90c3dc5b8 100644 --- a/openpype/hosts/aftereffects/api/launch_logic.py +++ b/openpype/hosts/aftereffects/api/launch_logic.py @@ -13,13 +13,13 @@ from wsrpc_aiohttp import ( WebSocketAsync ) -from qtpy import QtCore, QtWidgets +from qtpy import QtCore from openpype.lib import Logger -from openpype.tools.utils import host_tools from openpype.tests.lib import is_in_tests from openpype.pipeline import install_host, legacy_io from openpype.modules import ModulesManager +from openpype.tools.utils import host_tools, get_openpype_qt_app from openpype.tools.adobe_webserver.app import WebServerTool from .ws_stub import get_stub @@ -43,7 +43,7 @@ def main(*subprocess_args): install_host(host) os.environ["OPENPYPE_LOG_NO_COLORS"] = "False" - app = QtWidgets.QApplication([]) + app = get_openpype_qt_app() app.setQuitOnLastWindowClosed(False) launcher = ProcessLauncher(subprocess_args) diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index 5566ca9e5b..8fc7a70dd8 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -23,6 +23,7 @@ from openpype.host import ( ILoadHost, IPublishHost ) +from openpype.tools.utils import get_openpype_qt_app from .launch_logic import get_stub from .ws_stub import ConnectionNotEstablishedYet @@ -236,10 +237,7 @@ def check_inventory(): return # Warn about outdated containers. 
- _app = QtWidgets.QApplication.instance() - if not _app: - print("Starting new QApplication..") - _app = QtWidgets.QApplication([]) + _app = get_openpype_qt_app() message_box = QtWidgets.QMessageBox() message_box.setIcon(QtWidgets.QMessageBox.Warning) diff --git a/openpype/hosts/aftereffects/api/ws_stub.py b/openpype/hosts/aftereffects/api/ws_stub.py index f5b96fa63a..18f530e272 100644 --- a/openpype/hosts/aftereffects/api/ws_stub.py +++ b/openpype/hosts/aftereffects/api/ws_stub.py @@ -37,6 +37,9 @@ class AEItem(object): height = attr.ib(default=None) is_placeholder = attr.ib(default=False) uuid = attr.ib(default=False) + path = attr.ib(default=False) # path to FootageItem to validate + # list of composition Footage is in + containing_comps = attr.ib(factory=list) class AfterEffectsServerStub(): @@ -704,7 +707,10 @@ class AfterEffectsServerStub(): d.get("instance_id"), d.get("width"), d.get("height"), - d.get("is_placeholder")) + d.get("is_placeholder"), + d.get("uuid"), + d.get("path"), + d.get("containing_comps"),) ret.append(item) return ret diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py index fa79fac78f..fbe600ae68 100644 --- a/openpype/hosts/aftereffects/plugins/create/create_render.py +++ b/openpype/hosts/aftereffects/plugins/create/create_render.py @@ -28,7 +28,6 @@ class RenderCreator(Creator): create_allow_context_change = True # Settings - default_variants = [] mark_for_review = True def create(self, subset_name_from_ui, data, pre_create_data): @@ -165,12 +164,16 @@ class RenderCreator(Creator): api.get_stub().rename_item(comp_id, new_comp_name) - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["aftereffects"]["create"]["RenderCreator"] ) self.mark_for_review = plugin_settings["mark_for_review"] + self.default_variants = plugin_settings.get( + "default_variants", + plugin_settings.get("defaults") or [] + ) def get_detail_description(self): return """Creator for Render instances diff --git a/openpype/hosts/aftereffects/plugins/load/load_background.py b/openpype/hosts/aftereffects/plugins/load/load_background.py index e7c29fee5a..16f45074aa 100644 --- a/openpype/hosts/aftereffects/plugins/load/load_background.py +++ b/openpype/hosts/aftereffects/plugins/load/load_background.py @@ -33,9 +33,10 @@ class BackgroundLoader(api.AfterEffectsLoader): existing_items, "{}_{}".format(context["asset"]["name"], name)) - layers = get_background_layers(self.fname) + path = self.filepath_from_context(context) + layers = get_background_layers(path) if not layers: - raise ValueError("No layers found in {}".format(self.fname)) + raise ValueError("No layers found in {}".format(path)) comp = stub.import_background(None, stub.LOADED_ICON + comp_name, layers) diff --git a/openpype/hosts/aftereffects/plugins/load/load_file.py b/openpype/hosts/aftereffects/plugins/load/load_file.py index 33a86aa505..8d52aac546 100644 --- a/openpype/hosts/aftereffects/plugins/load/load_file.py +++ b/openpype/hosts/aftereffects/plugins/load/load_file.py @@ -29,32 +29,27 @@ class FileLoader(api.AfterEffectsLoader): import_options = {} - file = self.fname + path = self.filepath_from_context(context) - repr_cont = context["representation"]["context"] - if "#" not in file: - frame = repr_cont.get("frame") - if frame: - padding = len(frame) - file = file.replace(frame, "#" * padding) - import_options['sequence'] = True + if 
len(context["representation"]["files"]) > 1: + import_options['sequence'] = True - if not file: + if not path: repr_id = context["representation"]["_id"] self.log.warning( "Representation id `{}` is failing to load".format(repr_id)) return - file = file.replace("\\", "/") - if '.psd' in file: + path = path.replace("\\", "/") + if '.psd' in path: import_options['ImportAsType'] = 'ImportAsType.COMP' - comp = stub.import_file(self.fname, stub.LOADED_ICON + comp_name, + comp = stub.import_file(path, stub.LOADED_ICON + comp_name, import_options) if not comp: self.log.warning( - "Representation id `{}` is failing to load".format(file)) + "Representation `{}` is failing to load".format(path)) self.log.warning("Check host app for alert error.") return diff --git a/openpype/hosts/aftereffects/plugins/publish/closeAE.py b/openpype/hosts/aftereffects/plugins/publish/closeAE.py index eff2573e8f..0be20d9f05 100644 --- a/openpype/hosts/aftereffects/plugins/publish/closeAE.py +++ b/openpype/hosts/aftereffects/plugins/publish/closeAE.py @@ -15,7 +15,7 @@ class CloseAE(pyblish.api.ContextPlugin): active = True hosts = ["aftereffects"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): self.log.info("CloseAE") diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py index aa46461915..49874d6cff 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py @@ -138,7 +138,6 @@ class CollectAERender(publish.AbstractCollectRender): fam = "render.farm" if fam not in instance.families: instance.families.append(fam) - instance.toBeRenderedOn = "deadline" instance.renderer = "aerender" instance.farm = True # to skip integrate if "review" in instance.families: diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py index c21c3623c3..dc557f67fc 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py @@ -1,7 +1,6 @@ import os import pyblish.api -from openpype.pipeline import legacy_io from openpype.pipeline.create import get_subset_name @@ -44,7 +43,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin): instance.data["publish"] = instance.data["active"] # for DL def _get_new_instance(self, context, scene_file): - task = legacy_io.Session["AVALON_TASK"] + task = context.data["task"] version = context.data["version"] asset_entity = context.data["assetEntity"] project_entity = context.data["projectEntity"] diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index c70aa41dbe..bdb48e11f8 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -1,11 +1,5 @@ import os -import sys -import six -from openpype.lib import ( - get_ffmpeg_tool_path, - run_subprocess, -) from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub diff --git a/openpype/hosts/aftereffects/plugins/publish/help/validate_footage_items.xml b/openpype/hosts/aftereffects/plugins/publish/help/validate_footage_items.xml new file mode 100644 index 0000000000..01c8966015 --- /dev/null +++ b/openpype/hosts/aftereffects/plugins/publish/help/validate_footage_items.xml 
@@ -0,0 +1,14 @@
+
+
+Footage item missing
+
+## Footage item missing
+
+ FootageItem `{name}` contains missing file `{path}`. The render will not produce any frames and AE will stop reacting to any integration.
+### How to repair?
+
+Remove `{name}` or provide the missing file.
+
+
+
diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_footage_items.py b/openpype/hosts/aftereffects/plugins/publish/validate_footage_items.py
new file mode 100644
index 0000000000..40a08a2c3f
--- /dev/null
+++ b/openpype/hosts/aftereffects/plugins/publish/validate_footage_items.py
@@ -0,0 +1,49 @@
+# -*- coding: utf-8 -*-
+"""Validate presence of footage items in composition.
+Requires:
+"""
+import os
+
+import pyblish.api
+
+from openpype.pipeline import (
+    PublishXmlValidationError
+)
+from openpype.hosts.aftereffects.api import get_stub
+
+
+class ValidateFootageItems(pyblish.api.InstancePlugin):
+    """
+    Validate that the files of FootageItems used in the composition exist.
+
+    AE fails silently and doesn't render anything if a footage item's file
+    is missing. This leaves the AE UI unresponsive, because AE waits for a
+    reaction from the user but never shows a dialog.
+    This validator checks the existence of the files.
+    It cannot protect against a missing frame in frame sequences though
+    (the AE API doesn't expose this information and there is no easy way to
+    tell how many frames should be there). A missing frame is replaced by a
+    placeholder.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    label = "Validate Footage Items"
+    families = ["render.farm", "render.local", "render"]
+    hosts = ["aftereffects"]
+    optional = True
+
+    def process(self, instance):
+        """Plugin entry point."""
+
+        comp_id = instance.data["comp_id"]
+        for footage_item in get_stub().get_items(comps=False, folders=False,
+                                                 footages=True):
+            self.log.info(footage_item)
+            if comp_id not in footage_item.containing_comps:
+                continue
+
+            path = footage_item.path
+            if path and not os.path.exists(path):
+                msg = f"File {path} not found."
+                formatting = {"name": footage_item.name, "path": path}
+                raise PublishXmlValidationError(self, msg,
+                                                formatting_data=formatting)
diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py
index 6c36136b20..36f6035d23 100644
--- a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py
+++ b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py
@@ -1,6 +1,6 @@
 import pyblish.api
 
-from openpype.pipeline import legacy_io
+from openpype.pipeline import get_current_asset_name
 from openpype.pipeline.publish import (
     ValidateContentsOrder,
     PublishXmlValidationError,
@@ -30,7 +30,7 @@ class ValidateInstanceAssetRepair(pyblish.api.Action):
 
         for instance in instances:
             data = stub.read(instance[0])
-            data["asset"] = legacy_io.Session["AVALON_ASSET"]
+            data["asset"] = get_current_asset_name()
             stub.imprint(instance[0].instance_id, data)
 
 
@@ -54,7 +54,7 @@ class ValidateInstanceAsset(pyblish.api.InstancePlugin):
 
     def process(self, instance):
        instance_asset = instance.data["asset"]
-        current_asset = legacy_io.Session["AVALON_ASSET"]
+        current_asset = get_current_asset_name()
        msg = (
            f"Instance asset {instance_asset} is not the same "
            f"as current context {current_asset}."
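For context on the two new files above: the validator raises `PublishXmlValidationError` with a `formatting_data` dict, and the `{name}` and `{path}` placeholders in the help XML are filled from that dict before the message is shown to the artist. Below is a minimal sketch of that substitution, assuming plain `str.format`-style replacement; the `help_text` literal and the sample values are illustrative, not the parsed XML or real data.

```python
# Sketch of how `formatting_data` could fill the `{name}`/`{path}`
# placeholders from the help XML. The `help_text` literal and the
# str.format() call are assumptions for illustration; the actual
# substitution happens inside PublishXmlValidationError's handling
# of the help file.
help_text = (
    "FootageItem `{name}` contains missing file `{path}`. "
    "The render will not produce any frames."
)

formatting_data = {
    "name": "bg_plate",                  # hypothetical footage item name
    "path": "/proj/shots/sh010/bg.mov",  # hypothetical missing file path
}

print(help_text.format(**formatting_data))
# FootageItem `bg_plate` contains missing file `/proj/shots/sh010/bg.mov`. ...
```

This is why the validator builds `formatting = {"name": footage_item.name, "path": path}` with exactly the keys the help file expects.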
diff --git a/openpype/hosts/blender/api/__init__.py b/openpype/hosts/blender/api/__init__.py index 75a11affde..e15f1193a5 100644 --- a/openpype/hosts/blender/api/__init__.py +++ b/openpype/hosts/blender/api/__init__.py @@ -38,6 +38,8 @@ from .lib import ( from .capture import capture +from .render_lib import prepare_rendering + __all__ = [ "install", @@ -66,4 +68,5 @@ __all__ = [ "get_selection", "capture", # "unique_name", + "prepare_rendering", ] diff --git a/openpype/hosts/blender/api/colorspace.py b/openpype/hosts/blender/api/colorspace.py new file mode 100644 index 0000000000..4521612b7d --- /dev/null +++ b/openpype/hosts/blender/api/colorspace.py @@ -0,0 +1,51 @@ +import attr + +import bpy + + +@attr.s +class LayerMetadata(object): + """Data class for Render Layer metadata.""" + frameStart = attr.ib() + frameEnd = attr.ib() + + +@attr.s +class RenderProduct(object): + """ + Getting Colorspace as Specific Render Product Parameter for submitting + publish job. + """ + colorspace = attr.ib() # colorspace + view = attr.ib() # OCIO view transform + productName = attr.ib(default=None) + + +class ARenderProduct(object): + def __init__(self): + """Constructor.""" + # Initialize + self.layer_data = self._get_layer_data() + self.layer_data.products = self.get_render_products() + + def _get_layer_data(self): + scene = bpy.context.scene + + return LayerMetadata( + frameStart=int(scene.frame_start), + frameEnd=int(scene.frame_end), + ) + + def get_render_products(self): + """To be implemented by renderer class. + This should return a list of RenderProducts. + Returns: + list: List of RenderProduct + """ + return [ + RenderProduct( + colorspace="sRGB", + view="ACES 1.0", + productName="" + ) + ] diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 91cbfe524f..0eb90eeff9 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -16,10 +16,12 @@ import bpy import bpy.utils.previews from openpype import style -from openpype.pipeline import legacy_io +from openpype import AYON_SERVER_ENABLED +from openpype.pipeline import get_current_asset_name, get_current_task_name from openpype.tools.utils import host_tools from .workio import OpenFileCacher +from . 
import pipeline PREVIEW_COLLECTIONS: Dict = dict() @@ -283,7 +285,7 @@ class LaunchLoader(LaunchQtApp): def before_window_show(self): self._window.set_context( - {"asset": legacy_io.Session["AVALON_ASSET"]}, + {"asset": get_current_asset_name()}, refresh=True ) @@ -330,10 +332,11 @@ class LaunchWorkFiles(LaunchQtApp): def execute(self, context): result = super().execute(context) - self._window.set_context({ - "asset": legacy_io.Session["AVALON_ASSET"], - "task": legacy_io.Session["AVALON_TASK"] - }) + if not AYON_SERVER_ENABLED: + self._window.set_context({ + "asset": get_current_asset_name(), + "task": get_current_task_name() + }) return result def before_window_show(self): @@ -344,6 +347,26 @@ class LaunchWorkFiles(LaunchQtApp): self._window.refresh() +class SetFrameRange(bpy.types.Operator): + bl_idname = "wm.ayon_set_frame_range" + bl_label = "Set Frame Range" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_frame_range(data) + return {"FINISHED"} + + +class SetResolution(bpy.types.Operator): + bl_idname = "wm.ayon_set_resolution" + bl_label = "Set Resolution" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_resolution(data) + return {"FINISHED"} + + class TOPBAR_MT_avalon(bpy.types.Menu): """Avalon menu.""" @@ -362,8 +385,8 @@ class TOPBAR_MT_avalon(bpy.types.Menu): else: pyblish_menu_icon_id = 0 - asset = legacy_io.Session['AVALON_ASSET'] - task = legacy_io.Session['AVALON_TASK'] + asset = get_current_asset_name() + task = get_current_task_name() context_label = f"{asset}, {task}" context_label_item = layout.row() context_label_item.operator( @@ -381,9 +404,11 @@ class TOPBAR_MT_avalon(bpy.types.Menu): layout.operator(LaunchManager.bl_idname, text="Manage...") layout.operator(LaunchLibrary.bl_idname, text="Library...") layout.separator() + layout.operator(SetFrameRange.bl_idname, text="Set Frame Range") + layout.operator(SetResolution.bl_idname, text="Set Resolution") + layout.separator() layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...") - # TODO (jasper): maybe add 'Reload Pipeline', 'Set Frame Range' and - # 'Set Resolution'? 
+ # TODO (jasper): maybe add 'Reload Pipeline' def draw_avalon_menu(self, context): @@ -399,6 +424,8 @@ classes = [ LaunchManager, LaunchLibrary, LaunchWorkFiles, + SetFrameRange, + SetResolution, TOPBAR_MT_avalon, ] @@ -411,6 +438,7 @@ def register(): pcoll.load("pyblish_menu_icon", str(pyblish_icon_file.absolute()), 'IMAGE') PREVIEW_COLLECTIONS["avalon"] = pcoll + BlenderApplication.get_app() for cls in classes: bpy.utils.register_class(cls) bpy.types.TOPBAR_MT_editor_menus.append(draw_avalon_menu) diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index 0f756d8cb6..84af0904f0 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -14,6 +14,8 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( schema, legacy_io, + get_current_project_name, + get_current_asset_name, register_loader_plugin_path, register_creator_plugin_path, deregister_loader_plugin_path, @@ -111,22 +113,21 @@ def message_window(title, message): _process_app_events() -def set_start_end_frames(): - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] +def get_asset_data(): + project_name = get_current_project_name() + asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name) + return asset_doc.get("data") + + +def set_frame_range(data): scene = bpy.context.scene # Default scene settings frameStart = scene.frame_start frameEnd = scene.frame_end fps = scene.render.fps / scene.render.fps_base - resolution_x = scene.render.resolution_x - resolution_y = scene.render.resolution_y - - # Check if settings are set - data = asset_doc.get("data") if not data: return @@ -137,26 +138,47 @@ def set_start_end_frames(): frameEnd = data.get("frameEnd") if data.get("fps"): fps = data.get("fps") - if data.get("resolutionWidth"): - resolution_x = data.get("resolutionWidth") - if data.get("resolutionHeight"): - resolution_y = data.get("resolutionHeight") scene.frame_start = frameStart scene.frame_end = frameEnd scene.render.fps = round(fps) scene.render.fps_base = round(fps) / fps + + +def set_resolution(data): + scene = bpy.context.scene + + # Default scene settings + resolution_x = scene.render.resolution_x + resolution_y = scene.render.resolution_y + + if not data: + return + + if data.get("resolutionWidth"): + resolution_x = data.get("resolutionWidth") + if data.get("resolutionHeight"): + resolution_y = data.get("resolutionHeight") + scene.render.resolution_x = resolution_x scene.render.resolution_y = resolution_y def on_new(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) + + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") if unit_scale_enabled: unit_scale = unit_scale_settings.get("base_file_unit_scale") @@ -164,12 +186,20 @@ def on_new(): def on_open(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - 
unit_scale_settings = settings.get("blender").get("unit_scale_settings")
+    set_resolution_startup = settings.get("set_resolution_startup")
+    set_frames_startup = settings.get("set_frames_startup")
+
+    data = get_asset_data()
+
+    if set_resolution_startup:
+        set_resolution(data)
+    if set_frames_startup:
+        set_frame_range(data)
+
+    unit_scale_settings = settings.get("unit_scale_settings")
     unit_scale_enabled = unit_scale_settings.get("enabled")
     apply_on_opening = unit_scale_settings.get("apply_on_opening")
     if unit_scale_enabled and apply_on_opening:
@@ -430,36 +460,6 @@ def ls() -> Iterator:
     yield parse_container(container)
 
 
-def update_hierarchy(containers):
-    """Hierarchical container support
-
-    This is the function to support Scene Inventory to draw hierarchical
-    view for containers.
-
-    We need both parent and children to visualize the graph.
-
-    """
-
-    all_containers = set(ls())  # lookup set
-
-    for container in containers:
-        # Find parent
-        # FIXME (jasperge): re-evaluate this. How would it be possible
-        # to 'nest' assets? Collections can have several parents, for
-        # now assume it has only 1 parent
-        parent = [
-            coll for coll in bpy.data.collections if container in coll.children
-        ]
-        for node in parent:
-            if node in all_containers:
-                container["parent"] = node
-                break
-
-        log.debug("Container: %s", container)
-
-        yield container
-
-
 def publish():
     """Shorthand to publish from within host."""
diff --git a/openpype/hosts/blender/api/plugin.py b/openpype/hosts/blender/api/plugin.py
index 1274795c6b..fb87d08cce 100644
--- a/openpype/hosts/blender/api/plugin.py
+++ b/openpype/hosts/blender/api/plugin.py
@@ -243,7 +243,8 @@ class AssetLoader(LoaderPlugin):
         """
         # TODO (jasper): make it possible to add the asset several times by
         # just re-using the collection
-        assert Path(self.fname).exists(), f"{self.fname} doesn't exist."
+        filepath = self.filepath_from_context(context)
+        assert Path(filepath).exists(), f"{filepath} doesn't exist."
 
         asset = context["asset"]["name"]
         subset = context["subset"]["name"]
diff --git a/openpype/hosts/blender/api/render_lib.py b/openpype/hosts/blender/api/render_lib.py
new file mode 100644
index 0000000000..d564b5ebcb
--- /dev/null
+++ b/openpype/hosts/blender/api/render_lib.py
@@ -0,0 +1,255 @@
+import os
+
+import bpy
+
+from openpype.settings import get_project_settings
+from openpype.pipeline import get_current_project_name
+
+
+def get_default_render_folder(settings):
+    """Get default render folder from blender settings."""
+
+    return (settings["blender"]
+                    ["RenderSettings"]
+                    ["default_render_image_folder"])
+
+
+def get_aov_separator(settings):
+    """Get AOV separator from blender settings."""
+
+    aov_sep = (settings["blender"]
+                       ["RenderSettings"]
+                       ["aov_separator"])
+
+    if aov_sep == "dash":
+        return "-"
+    elif aov_sep == "underscore":
+        return "_"
+    elif aov_sep == "dot":
+        return "."
+    else:
+        raise ValueError(f"Invalid aov separator: {aov_sep}")
+
+
+def get_image_format(settings):
+    """Get image format from blender settings."""
+
+    return (settings["blender"]
+                    ["RenderSettings"]
+                    ["image_format"])
+
+
+def get_multilayer(settings):
+    """Get multilayer from blender settings."""
+
+    return (settings["blender"]
+                    ["RenderSettings"]
+                    ["multilayer_exr"])
+
+
+def get_render_product(output_path, name, aov_sep):
+    """
+    Generate the path to the render product. Blender interprets the `#`
+    as the frame number when it renders.
+
+    Args:
+        output_path (str): The output directory for the render.
+        name (str): The name of the render instance.
+        aov_sep (str): The AOV separator set in settings.
+    """
+    filepath = os.path.join(output_path, name)
+    render_product = f"{filepath}{aov_sep}beauty.####"
+    render_product = render_product.replace("\\", "/")
+
+    return render_product
+
+
+def set_render_format(ext, multilayer):
+    # Set Blender to save the file with the right extension
+    bpy.context.scene.render.use_file_extension = True
+
+    image_settings = bpy.context.scene.render.image_settings
+
+    if ext == "exr":
+        image_settings.file_format = (
+            "OPEN_EXR_MULTILAYER" if multilayer else "OPEN_EXR")
+    elif ext == "bmp":
+        image_settings.file_format = "BMP"
+    elif ext == "rgb":
+        image_settings.file_format = "IRIS"
+    elif ext == "png":
+        image_settings.file_format = "PNG"
+    elif ext == "jpeg":
+        image_settings.file_format = "JPEG"
+    elif ext == "jp2":
+        image_settings.file_format = "JPEG2000"
+    elif ext == "tga":
+        image_settings.file_format = "TARGA"
+    elif ext == "tif":
+        image_settings.file_format = "TIFF"
+
+
+def set_render_passes(settings):
+    aov_list = (settings["blender"]
+                        ["RenderSettings"]
+                        ["aov_list"])
+
+    custom_passes = (settings["blender"]
+                             ["RenderSettings"]
+                             ["custom_passes"])
+
+    vl = bpy.context.view_layer
+
+    vl.use_pass_combined = "combined" in aov_list
+    vl.use_pass_z = "z" in aov_list
+    vl.use_pass_mist = "mist" in aov_list
+    vl.use_pass_normal = "normal" in aov_list
+    vl.use_pass_diffuse_direct = "diffuse_light" in aov_list
+    vl.use_pass_diffuse_color = "diffuse_color" in aov_list
+    vl.use_pass_glossy_direct = "specular_light" in aov_list
+    vl.use_pass_glossy_color = "specular_color" in aov_list
+    vl.eevee.use_pass_volume_direct = "volume_light" in aov_list
+    vl.use_pass_emit = "emission" in aov_list
+    vl.use_pass_environment = "environment" in aov_list
+    vl.use_pass_shadow = "shadow" in aov_list
+    vl.use_pass_ambient_occlusion = "ao" in aov_list
+
+    cycles = vl.cycles
+
+    cycles.denoising_store_passes = "denoising" in aov_list
+    cycles.use_pass_volume_direct = "volume_direct" in aov_list
+    cycles.use_pass_volume_indirect = "volume_indirect" in aov_list
+
+    aovs_names = [aov.name for aov in vl.aovs]
+    for cp in custom_passes:
+        cp_name = cp[0]
+        if cp_name not in aovs_names:
+            aov = vl.aovs.add()
+            aov.name = cp_name
+        else:
+            aov = vl.aovs[cp_name]
+        aov.type = cp[1].get("type", "VALUE")
+
+    return aov_list, custom_passes
+
+
+def set_node_tree(output_path, name, aov_sep, ext, multilayer):
+    # Set the scene to use the compositor node tree to render
+    bpy.context.scene.use_nodes = True
+
+    tree = bpy.context.scene.node_tree
+
+    # Get the Render Layers node
+    rl_node = None
+    for node in tree.nodes:
+        if node.bl_idname == "CompositorNodeRLayers":
+            rl_node = node
+            break
+
+    # If there's no Render Layers node, we create it
+    if not rl_node:
+        rl_node = tree.nodes.new("CompositorNodeRLayers")
+
+    # Get the enabled output sockets; these are the active passes for the
+    # render.
+    # We also exclude some default sockets.
+ exclude_sockets = ["Image", "Alpha", "Noisy Image"] + passes = [ + socket + for socket in rl_node.outputs + if socket.enabled and socket.name not in exclude_sockets + ] + + # Remove all output nodes + for node in tree.nodes: + if node.bl_idname == "CompositorNodeOutputFile": + tree.nodes.remove(node) + + # Create a new output node + output = tree.nodes.new("CompositorNodeOutputFile") + + image_settings = bpy.context.scene.render.image_settings + output.format.file_format = image_settings.file_format + + # In case of a multilayer exr, we don't need to use the output node, + # because the blender render already outputs a multilayer exr. + if ext == "exr" and multilayer: + output.layer_slots.clear() + return [] + + output.file_slots.clear() + output.base_path = output_path + + aov_file_products = [] + + # For each active render pass, we add a new socket to the output node + # and link it + for render_pass in passes: + filepath = f"{name}{aov_sep}{render_pass.name}.####" + + output.file_slots.new(filepath) + + aov_file_products.append( + (render_pass.name, os.path.join(output_path, filepath))) + + node_input = output.inputs[-1] + + tree.links.new(render_pass, node_input) + + return aov_file_products + + +def imprint_render_settings(node, data): + RENDER_DATA = "render_data" + if not node.get(RENDER_DATA): + node[RENDER_DATA] = {} + for key, value in data.items(): + if value is None: + continue + node[RENDER_DATA][key] = value + + +def prepare_rendering(asset_group): + name = asset_group.name + + filepath = bpy.data.filepath + assert filepath, "Workfile not saved. Please save the file first." + + file_path = os.path.dirname(filepath) + file_name = os.path.basename(filepath) + file_name, _ = os.path.splitext(file_name) + + project = get_current_project_name() + settings = get_project_settings(project) + + render_folder = get_default_render_folder(settings) + aov_sep = get_aov_separator(settings) + ext = get_image_format(settings) + multilayer = get_multilayer(settings) + + set_render_format(ext, multilayer) + aov_list, custom_passes = set_render_passes(settings) + + output_path = os.path.join(file_path, render_folder, file_name) + + render_product = get_render_product(output_path, name, aov_sep) + aov_file_product = set_node_tree( + output_path, name, aov_sep, ext, multilayer) + + bpy.context.scene.render.filepath = render_product + + render_settings = { + "render_folder": render_folder, + "aov_separator": aov_sep, + "image_format": ext, + "multilayer_exr": multilayer, + "aov_list": aov_list, + "custom_passes": custom_passes, + "render_product": render_product, + "aov_file_product": aov_file_product, + "review": True, + } + + imprint_render_settings(asset_group, render_settings) diff --git a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py index 559e9ae0ce..68c9bfdd57 100644 --- a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py +++ b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py @@ -1,6 +1,6 @@ from pathlib import Path -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddPythonScriptToLaunchArgs(PreLaunchHook): @@ -8,9 +8,8 @@ class AddPythonScriptToLaunchArgs(PreLaunchHook): # Append after file argument order = 15 - app_groups = [ - "blender", - ] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): if not self.launch_context.data.get("python_scripts"): diff --git 
a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index e5f66d2a26..2aa3a5e49a 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -2,7 +2,7 @@ import os import re import subprocess from platform import system -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InstallPySideToBlender(PreLaunchHook): @@ -16,7 +16,8 @@ class InstallPySideToBlender(PreLaunchHook): blender's python packages. """ - app_groups = ["blender"] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): # Prelaunch hook is not crucial @@ -30,7 +31,7 @@ class InstallPySideToBlender(PreLaunchHook): def inner_execute(self): # Get blender's python directory - version_regex = re.compile(r"^[2-3]\.[0-9]+$") + version_regex = re.compile(r"^[2-4]\.[0-9]+$") platform = system().lower() executable = self.launch_context.executable.executable_path diff --git a/openpype/hosts/blender/hooks/pre_windows_console.py b/openpype/hosts/blender/hooks/pre_windows_console.py index d6be45b225..2161b7a2f5 100644 --- a/openpype/hosts/blender/hooks/pre_windows_console.py +++ b/openpype/hosts/blender/hooks/pre_windows_console.py @@ -1,5 +1,5 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class BlenderConsoleWindows(PreLaunchHook): @@ -13,8 +13,9 @@ class BlenderConsoleWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["blender"] - platforms = ["windows"] + app_groups = {"blender"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE diff --git a/openpype/hosts/blender/plugins/create/create_action.py b/openpype/hosts/blender/plugins/create/create_action.py index 54b3a501a7..0203ba74c0 100644 --- a/openpype/hosts/blender/plugins/create/create_action.py +++ b/openpype/hosts/blender/plugins/create/create_action.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name import openpype.hosts.blender.api.plugin from openpype.hosts.blender.api import lib @@ -22,7 +22,7 @@ class CreateAction(openpype.hosts.blender.api.plugin.Creator): name = openpype.hosts.blender.api.plugin.asset_name(asset, subset) collection = bpy.data.collections.new(name=name) bpy.context.scene.collection.children.link(collection) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(collection, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_animation.py b/openpype/hosts/blender/plugins/create/create_animation.py index a0e9e5e399..bc2840952b 100644 --- a/openpype/hosts/blender/plugins/create/create_animation.py +++ b/openpype/hosts/blender/plugins/create/create_animation.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -37,7 +37,7 @@ class CreateAnimation(plugin.Creator): # asset_group.empty_display_type = 'SINGLE_ARROW' asset_group = bpy.data.collections.new(name=name) instances.children.link(asset_group) - self.data['task'] = 
legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_blendScene.py b/openpype/hosts/blender/plugins/create/create_blendScene.py new file mode 100644 index 0000000000..63bcf212ff --- /dev/null +++ b/openpype/hosts/blender/plugins/create/create_blendScene.py @@ -0,0 +1,51 @@ +"""Create a Blender scene asset.""" + +import bpy + +from openpype.pipeline import get_current_task_name +from openpype.hosts.blender.api import plugin, lib, ops +from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES + + +class CreateBlendScene(plugin.Creator): + """Generic group of assets""" + + name = "blendScene" + label = "Blender Scene" + family = "blendScene" + icon = "cubes" + + def process(self): + """ Run the creator on Blender main thread""" + mti = ops.MainThreadItem(self._process) + ops.execute_in_main_thread(mti) + + def _process(self): + # Get Instance Container or create it if it does not exist + instances = bpy.data.collections.get(AVALON_INSTANCES) + if not instances: + instances = bpy.data.collections.new(name=AVALON_INSTANCES) + bpy.context.scene.collection.children.link(instances) + + # Create instance object + asset = self.data["asset"] + subset = self.data["subset"] + name = plugin.asset_name(asset, subset) + asset_group = bpy.data.objects.new(name=name, object_data=None) + asset_group.empty_display_type = 'SINGLE_ARROW' + instances.objects.link(asset_group) + self.data['task'] = get_current_task_name() + lib.imprint(asset_group, self.data) + + # Add selected objects to instance + if (self.options or {}).get("useSelection"): + bpy.context.view_layer.objects.active = asset_group + selected = lib.get_selection() + for obj in selected: + if obj.parent in selected: + obj.select_set(False) + continue + selected.append(asset_group) + bpy.ops.object.parent_set(keep_transform=True) + + return asset_group diff --git a/openpype/hosts/blender/plugins/create/create_camera.py b/openpype/hosts/blender/plugins/create/create_camera.py index ada512d7ac..7a770a3e77 100644 --- a/openpype/hosts/blender/plugins/create/create_camera.py +++ b/openpype/hosts/blender/plugins/create/create_camera.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -35,7 +35,7 @@ class CreateCamera(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() print(f"self.data: {self.data}") lib.imprint(asset_group, self.data) @@ -43,7 +43,9 @@ class CreateCamera(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) else: diff --git a/openpype/hosts/blender/plugins/create/create_layout.py b/openpype/hosts/blender/plugins/create/create_layout.py index 5949a4b86e..73ed683256 100644 --- a/openpype/hosts/blender/plugins/create/create_layout.py +++ b/openpype/hosts/blender/plugins/create/create_layout.py @@ -2,7 +2,7 @@ import bpy -from 
openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -34,7 +34,7 @@ class CreateLayout(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) # Add selected objects to instance @@ -42,7 +42,9 @@ class CreateLayout(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git a/openpype/hosts/blender/plugins/create/create_model.py b/openpype/hosts/blender/plugins/create/create_model.py index fedc708943..51fc6683f6 100644 --- a/openpype/hosts/blender/plugins/create/create_model.py +++ b/openpype/hosts/blender/plugins/create/create_model.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -34,7 +34,7 @@ class CreateModel(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) # Add selected objects to instance @@ -42,7 +42,9 @@ class CreateModel(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git a/openpype/hosts/blender/plugins/create/create_pointcache.py b/openpype/hosts/blender/plugins/create/create_pointcache.py index 38707fd3b1..6220f68dc5 100644 --- a/openpype/hosts/blender/plugins/create/create_pointcache.py +++ b/openpype/hosts/blender/plugins/create/create_pointcache.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name import openpype.hosts.blender.api.plugin from openpype.hosts.blender.api import lib @@ -22,7 +22,7 @@ class CreatePointcache(openpype.hosts.blender.api.plugin.Creator): name = openpype.hosts.blender.api.plugin.asset_name(asset, subset) collection = bpy.data.collections.new(name=name) bpy.context.scene.collection.children.link(collection) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(collection, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_render.py b/openpype/hosts/blender/plugins/create/create_render.py new file mode 100644 index 0000000000..f938a21808 --- /dev/null +++ b/openpype/hosts/blender/plugins/create/create_render.py @@ -0,0 +1,53 @@ +"""Create render.""" +import bpy + +from openpype.pipeline import get_current_task_name +from openpype.hosts.blender.api import plugin, lib +from openpype.hosts.blender.api.render_lib import prepare_rendering +from 
openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
+
+
+class CreateRenderlayer(plugin.Creator):
+    """Render layer instance"""
+
+    name = "renderingMain"
+    label = "Render"
+    family = "render"
+    icon = "eye"
+
+    def process(self):
+        # Get Instance Container or create it if it does not exist
+        instances = bpy.data.collections.get(AVALON_INSTANCES)
+        if not instances:
+            instances = bpy.data.collections.new(name=AVALON_INSTANCES)
+            bpy.context.scene.collection.children.link(instances)
+
+        # Create instance object
+        asset = self.data["asset"]
+        subset = self.data["subset"]
+        name = plugin.asset_name(asset, subset)
+        asset_group = bpy.data.collections.new(name=name)
+
+        try:
+            instances.children.link(asset_group)
+            self.data['task'] = get_current_task_name()
+            lib.imprint(asset_group, self.data)
+
+            prepare_rendering(asset_group)
+        except Exception:
+            # Remove the instance if there was an error
+            bpy.data.collections.remove(asset_group)
+            raise
+
+        # TODO: this is undesirable, but it's the only way to be sure that
+        # the file is saved before the render starts.
+        # Blender, by design, doesn't set the file as dirty if modifications
+        # happen by script. So, when creating the instance and setting the
+        # render settings, the file is not marked as dirty. This means that
+        # there is the risk of sending to Deadline a file without the right
+        # settings. Even the validator to check that the file is saved will
+        # detect the file as saved, even if it isn't. The only solution for
+        # now is to force the file to be saved.
+        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)
+
+        return asset_group
diff --git a/openpype/hosts/blender/plugins/create/create_review.py b/openpype/hosts/blender/plugins/create/create_review.py
index bf4ea6a7cd..914f249891 100644
--- a/openpype/hosts/blender/plugins/create/create_review.py
+++ b/openpype/hosts/blender/plugins/create/create_review.py
@@ -2,7 +2,7 @@
 
 import bpy
 
-from openpype.pipeline import legacy_io
+from openpype.pipeline import get_current_task_name
 from openpype.hosts.blender.api import plugin, lib, ops
 from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
 
@@ -33,7 +33,7 @@ class CreateReview(plugin.Creator):
         name = plugin.asset_name(asset, subset)
         asset_group = bpy.data.collections.new(name=name)
         instances.children.link(asset_group)
-        self.data['task'] = legacy_io.Session.get('AVALON_TASK')
+        self.data['task'] = get_current_task_name()
         lib.imprint(asset_group, self.data)
 
         if (self.options or {}).get("useSelection"):
diff --git a/openpype/hosts/blender/plugins/create/create_rig.py b/openpype/hosts/blender/plugins/create/create_rig.py
index 0abd306c6b..08cc46ee3e 100644
--- a/openpype/hosts/blender/plugins/create/create_rig.py
+++ b/openpype/hosts/blender/plugins/create/create_rig.py
@@ -2,7 +2,7 @@
 
 import bpy
 
-from openpype.pipeline import legacy_io
+from openpype.pipeline import get_current_task_name
 from openpype.hosts.blender.api import plugin, lib, ops
 from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
 
@@ -34,7 +34,7 @@ class CreateRig(plugin.Creator):
         asset_group = bpy.data.objects.new(name=name, object_data=None)
         asset_group.empty_display_type = 'SINGLE_ARROW'
         instances.objects.link(asset_group)
-        self.data['task'] = legacy_io.Session.get('AVALON_TASK')
+        self.data['task'] = get_current_task_name()
         lib.imprint(asset_group, self.data)
 
         # Add selected objects to instance
@@ -42,7 +42,9 @@
         bpy.context.view_layer.objects.active = asset_group
         selected = lib.get_selection()
         for obj in 
selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git a/openpype/hosts/blender/plugins/load/import_workfile.py b/openpype/hosts/blender/plugins/load/import_workfile.py index bbdf1c7ea0..4f5016d422 100644 --- a/openpype/hosts/blender/plugins/load/import_workfile.py +++ b/openpype/hosts/blender/plugins/load/import_workfile.py @@ -52,7 +52,8 @@ class AppendBlendLoader(plugin.AssetLoader): color = "#775555" def load(self, context, name=None, namespace=None, data=None): - append_workfile(context, self.fname, False) + path = self.filepath_from_context(context) + append_workfile(context, path, False) # We do not containerize imported content, it remains unmanaged return @@ -76,7 +77,8 @@ class ImportBlendLoader(plugin.AssetLoader): color = "#775555" def load(self, context, name=None, namespace=None, data=None): - append_workfile(context, self.fname, True) + path = self.filepath_from_context(context) + append_workfile(context, path, True) # We do not containerize imported content, it remains unmanaged return diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py index c1d73eff02..9b3d940536 100644 --- a/openpype/hosts/blender/plugins/load/load_abc.py +++ b/openpype/hosts/blender/plugins/load/load_abc.py @@ -26,8 +26,7 @@ class CacheModelLoader(plugin.AssetLoader): Note: At least for now it only supports Alembic files. """ - - families = ["model", "pointcache"] + families = ["model", "pointcache", "animation"] representations = ["abc"] label = "Load Alembic" @@ -53,16 +52,12 @@ class CacheModelLoader(plugin.AssetLoader): def _process(self, libpath, asset_group, group_name): plugin.deselect_all() - collection = bpy.context.view_layer.active_layer_collection.collection - relative = bpy.context.preferences.filepaths.use_relative_paths bpy.ops.wm.alembic_import( filepath=libpath, relative_path=relative ) - parent = bpy.context.scene.collection - imported = lib.get_selection() # Children must be linked before parents, @@ -79,6 +74,10 @@ class CacheModelLoader(plugin.AssetLoader): objects.reverse() for obj in objects: + # Unlink the object from all collections + collections = obj.users_collection + for collection in collections: + collection.objects.unlink(obj) name = obj.name obj.name = f"{group_name}:{name}" if obj.type != 'EMPTY': @@ -90,7 +89,7 @@ class CacheModelLoader(plugin.AssetLoader): material_slot.material.name = f"{group_name}:{name_mat}" if not obj.get(AVALON_PROPERTY): - obj[AVALON_PROPERTY] = dict() + obj[AVALON_PROPERTY] = {} avalon_info = obj[AVALON_PROPERTY] avalon_info.update({"container_name": group_name}) @@ -99,6 +98,18 @@ class CacheModelLoader(plugin.AssetLoader): return objects + def _link_objects(self, objects, collection, containers, asset_group): + # Link the imported objects to any collection where the asset group is + # linked to, except the AVALON_CONTAINERS collection + group_collections = [ + collection + for collection in asset_group.users_collection + if collection != containers] + + for obj in objects: + for collection in group_collections: + collection.objects.link(obj) + def process_asset( self, context: dict, name: str, namespace: Optional[str] = None, options: Optional[Dict] = None @@ -111,7 +122,7 @@ class CacheModelLoader(plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = 
context["asset"]["name"] subset = context["subset"]["name"] @@ -120,18 +131,21 @@ class CacheModelLoader(plugin.AssetLoader): group_name = plugin.asset_name(asset, subset, unique_number) namespace = namespace or f"{asset}_{unique_number}" - avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_containers: - avalon_containers = bpy.data.collections.new( - name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_containers) + containers = bpy.data.collections.get(AVALON_CONTAINERS) + if not containers: + containers = bpy.data.collections.new(name=AVALON_CONTAINERS) + bpy.context.scene.collection.children.link(containers) asset_group = bpy.data.objects.new(group_name, object_data=None) - avalon_containers.objects.link(asset_group) + containers.objects.link(asset_group) objects = self._process(libpath, asset_group, group_name) - bpy.context.scene.collection.objects.link(asset_group) + # Link the asset group to the active collection + collection = bpy.context.view_layer.active_layer_collection.collection + collection.objects.link(asset_group) + + self._link_objects(objects, asset_group, containers, asset_group) asset_group[AVALON_PROPERTY] = { "schema": "openpype:container-2.0", @@ -207,7 +221,11 @@ class CacheModelLoader(plugin.AssetLoader): mat = asset_group.matrix_basis.copy() self._remove(asset_group) - self._process(str(libpath), asset_group, object_name) + objects = self._process(str(libpath), asset_group, object_name) + + containers = bpy.data.collections.get(AVALON_CONTAINERS) + self._link_objects(objects, asset_group, containers, asset_group) + asset_group.matrix_basis = mat metadata["libpath"] = str(libpath) diff --git a/openpype/hosts/blender/plugins/load/load_action.py b/openpype/hosts/blender/plugins/load/load_action.py index 3c8fe988f0..3447e67ebf 100644 --- a/openpype/hosts/blender/plugins/load/load_action.py +++ b/openpype/hosts/blender/plugins/load/load_action.py @@ -43,7 +43,7 @@ class BlendActionLoader(openpype.hosts.blender.api.plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] lib_container = openpype.hosts.blender.api.plugin.asset_name(asset, subset) diff --git a/openpype/hosts/blender/plugins/load/load_animation.py b/openpype/hosts/blender/plugins/load/load_animation.py index 6b8d4abd04..3e7f808903 100644 --- a/openpype/hosts/blender/plugins/load/load_animation.py +++ b/openpype/hosts/blender/plugins/load/load_animation.py @@ -34,7 +34,7 @@ class BlendAnimationLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) with bpy.data.libraries.load( libpath, link=True, relative=False diff --git a/openpype/hosts/blender/plugins/load/load_audio.py b/openpype/hosts/blender/plugins/load/load_audio.py index 3f4fcc17de..ac8f363316 100644 --- a/openpype/hosts/blender/plugins/load/load_audio.py +++ b/openpype/hosts/blender/plugins/load/load_audio.py @@ -38,7 +38,7 @@ class AudioLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_blend.py b/openpype/hosts/blender/plugins/load/load_blend.py new 
file mode 100644 index 0000000000..25d6568889 --- /dev/null +++ b/openpype/hosts/blender/plugins/load/load_blend.py @@ -0,0 +1,257 @@ +from typing import Dict, List, Optional +from pathlib import Path + +import bpy + +from openpype.pipeline import ( + legacy_create, + get_representation_path, + AVALON_CONTAINER_ID, +) +from openpype.pipeline.create import get_legacy_creator_by_name +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.lib import imprint +from openpype.hosts.blender.api.pipeline import ( + AVALON_CONTAINERS, + AVALON_PROPERTY, +) + + +class BlendLoader(plugin.AssetLoader): + """Load assets from a .blend file.""" + + families = ["model", "rig", "layout", "camera", "blendScene"] + representations = ["blend"] + + label = "Append Blend" + icon = "code-fork" + color = "orange" + + @staticmethod + def _get_asset_container(objects): + empties = [obj for obj in objects if obj.type == 'EMPTY'] + + for empty in empties: + if empty.get(AVALON_PROPERTY): + return empty + + return None + + @staticmethod + def get_all_container_parents(asset_group): + parent_containers = [] + parent = asset_group.parent + while parent: + if parent.get(AVALON_PROPERTY): + parent_containers.append(parent) + parent = parent.parent + + return parent_containers + + def _post_process_layout(self, container, asset, representation): + rigs = [ + obj for obj in container.children_recursive + if ( + obj.type == 'EMPTY' and + obj.get(AVALON_PROPERTY) and + obj.get(AVALON_PROPERTY).get('family') == 'rig' + ) + ] + + for rig in rigs: + creator_plugin = get_legacy_creator_by_name("CreateAnimation") + legacy_create( + creator_plugin, + name=rig.name.split(':')[-1] + "_animation", + asset=asset, + options={ + "useSelection": False, + "asset_group": rig + }, + data={ + "dependencies": representation + } + ) + + def _process_data(self, libpath, group_name): + # Append all the data from the .blend file + with bpy.data.libraries.load( + libpath, link=False, relative=False + ) as (data_from, data_to): + for attr in dir(data_to): + setattr(data_to, attr, getattr(data_from, attr)) + + members = [] + + # Rename the appended data blocks to prefix them with the group name + for attr in dir(data_to): + for data in getattr(data_to, attr): + data.name = f"{group_name}:{data.name}" + members.append(data) + + container = self._get_asset_container(data_to.objects) + assert container, "No asset group found" + + container.name = group_name + container.empty_display_type = 'SINGLE_ARROW' + + # Link the container to the scene collection + bpy.context.scene.collection.objects.link(container) + + # Link all the container children to the scene collection + for obj in container.children_recursive: + bpy.context.scene.collection.objects.link(obj) + + # Remove the library from the blend file + library = bpy.data.libraries.get(bpy.path.basename(libpath)) + bpy.data.libraries.remove(library) + + return container, members + + def process_asset( + self, context: dict, name: str, namespace: Optional[str] = None, + options: Optional[Dict] = None + ) -> Optional[List]: + """ + Arguments: + name: Use pre-defined name + namespace: Use pre-defined namespace + context: Full parenthood of representation to load + options: Additional settings dictionary + """ + libpath = self.filepath_from_context(context) + asset = context["asset"]["name"] + subset = context["subset"]["name"] + + try: + family = context["representation"]["context"]["family"] + except KeyError: + family = "model" + + representation = str(context["representation"]["_id"]) + + asset_name = 
plugin.asset_name(asset, subset) + unique_number = plugin.get_unique_number(asset, subset) + group_name = plugin.asset_name(asset, subset, unique_number) + namespace = namespace or f"{asset}_{unique_number}" + + avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) + if not avalon_container: + avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) + bpy.context.scene.collection.children.link(avalon_container) + + container, members = self._process_data(libpath, group_name) + + if family == "layout": + self._post_process_layout(container, asset, representation) + + avalon_container.objects.link(container) + + data = { + "schema": "openpype:container-2.0", + "id": AVALON_CONTAINER_ID, + "name": name, + "namespace": namespace or '', + "loader": str(self.__class__.__name__), + "representation": str(context["representation"]["_id"]), + "libpath": libpath, + "asset_name": asset_name, + "parent": str(context["representation"]["parent"]), + "family": context["representation"]["context"]["family"], + "objectName": group_name, + "members": members, + } + + container[AVALON_PROPERTY] = data + + objects = [ + obj for obj in bpy.data.objects + if obj.name.startswith(f"{group_name}:") + ] + + self[:] = objects + return objects + + def exec_update(self, container: Dict, representation: Dict): + """ + Update the loaded asset. + """ + group_name = container["objectName"] + asset_group = bpy.data.objects.get(group_name) + libpath = Path(get_representation_path(representation)).as_posix() + + assert asset_group, ( + f"The asset is not loaded: {container['objectName']}" + ) + + transform = asset_group.matrix_basis.copy() + old_data = dict(asset_group.get(AVALON_PROPERTY)) + parent = asset_group.parent + + self.exec_remove(container) + + asset_group, members = self._process_data(libpath, group_name) + + avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) + avalon_container.objects.link(asset_group) + + asset_group.matrix_basis = transform + asset_group.parent = parent + + # Restore the old data, but reset members, as they don't exist anymore. + # This avoids a crash, because the memory addresses of those members + # are not valid anymore. + old_data["members"] = [] + asset_group[AVALON_PROPERTY] = old_data + + new_data = { + "libpath": libpath, + "representation": str(representation["_id"]), + "parent": str(representation["parent"]), + "members": members, + } + + imprint(asset_group, new_data) + + # We need to update all the parent container members + parent_containers = self.get_all_container_parents(asset_group) + + for parent_container in parent_containers: + parent_members = parent_container[AVALON_PROPERTY]["members"] + parent_container[AVALON_PROPERTY]["members"] = ( + parent_members + members) + + def exec_remove(self, container: Dict) -> bool: + """ + Remove an existing container from a Blender scene. 
+ """ + group_name = container["objectName"] + asset_group = bpy.data.objects.get(group_name) + + attrs = [ + attr for attr in dir(bpy.data) + if isinstance( + getattr(bpy.data, attr), + bpy.types.bpy_prop_collection + ) + ] + + members = asset_group.get(AVALON_PROPERTY).get("members", []) + + # We need to update all the parent container members + parent_containers = self.get_all_container_parents(asset_group) + + for parent in parent_containers: + parent.get(AVALON_PROPERTY)["members"] = list(filter( + lambda i: i not in members, + parent.get(AVALON_PROPERTY).get("members", []))) + + for attr in attrs: + for data in getattr(bpy.data, attr): + if data in members: + # Skip the asset group + if data == asset_group: + continue + getattr(bpy.data, attr).remove(data) + + bpy.data.objects.remove(asset_group) diff --git a/openpype/hosts/blender/plugins/load/load_camera_abc.py b/openpype/hosts/blender/plugins/load/load_camera_abc.py index 21b48f409f..05d3fb764d 100644 --- a/openpype/hosts/blender/plugins/load/load_camera_abc.py +++ b/openpype/hosts/blender/plugins/load/load_camera_abc.py @@ -81,7 +81,9 @@ class AbcCameraLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + + libpath = self.filepath_from_context(context) + asset = context["asset"]["name"] subset = context["subset"]["name"] @@ -98,7 +100,7 @@ class AbcCameraLoader(plugin.AssetLoader): asset_group = bpy.data.objects.new(group_name, object_data=None) avalon_container.objects.link(asset_group) - objects = self._process(libpath, asset_group, group_name) + self._process(libpath, asset_group, group_name) objects = [] nodes = list(asset_group.children) diff --git a/openpype/hosts/blender/plugins/load/load_camera_blend.py b/openpype/hosts/blender/plugins/load/load_camera_blend.py deleted file mode 100644 index f00027f0b4..0000000000 --- a/openpype/hosts/blender/plugins/load/load_camera_blend.py +++ /dev/null @@ -1,256 +0,0 @@ -"""Load a camera asset in Blender.""" - -import logging -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.hosts.blender.api import plugin -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - -logger = logging.getLogger("openpype").getChild( - "blender").getChild("load_camera") - - -class BlendCameraLoader(plugin.AssetLoader): - """Load a camera from a .blend file. - - Warning: - Loading the same asset more then once is not properly supported at the - moment. 
- """ - - families = ["camera"] - representations = ["blend"] - - label = "Link Camera (Blend)" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'CAMERA': - bpy.data.cameras.remove(obj.data) - - def _process(self, libpath, asset_group, group_name): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - for obj in nodes: - obj.parent = asset_group - - for obj in nodes: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - - if local_obj.type != 'EMPTY': - plugin.prepare_data(local_obj.data, group_name) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - bpy.data.orphans_purge(do_local_ids=False) - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - objects = self._process(libpath, asset_group, group_name) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all children of the asset group, load the new ones - and add them as children of the group. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - if not asset_group: - return False - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_camera_fbx.py b/openpype/hosts/blender/plugins/load/load_camera_fbx.py index 97f844e610..3cca6e7fd3 100644 --- a/openpype/hosts/blender/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_camera_fbx.py @@ -86,7 +86,7 @@ class FbxCameraLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] @@ -103,7 +103,7 @@ class FbxCameraLoader(plugin.AssetLoader): asset_group = bpy.data.objects.new(group_name, object_data=None) avalon_container.objects.link(asset_group) - objects = self._process(libpath, asset_group, group_name) + self._process(libpath, asset_group, group_name) objects = [] nodes = list(asset_group.children) diff --git a/openpype/hosts/blender/plugins/load/load_fbx.py b/openpype/hosts/blender/plugins/load/load_fbx.py index ee2e7d175c..e129ea6754 100644 --- a/openpype/hosts/blender/plugins/load/load_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_fbx.py @@ -130,7 +130,7 @@ class FbxModelLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_layout_blend.py b/openpype/hosts/blender/plugins/load/load_layout_blend.py deleted file mode 100644 index 7d2fd23444..0000000000 --- a/openpype/hosts/blender/plugins/load/load_layout_blend.py +++ /dev/null @@ -1,469 +0,0 @@ -"""Load a layout in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - legacy_create, - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.pipeline.create import get_legacy_creator_by_name -from openpype.hosts.blender.api import plugin -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendLayoutLoader(plugin.AssetLoader): - """Load layout from a .blend file.""" - - families = ["layout"] - representations = ["blend"] - - label = "Link Layout" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - if material_slot.material: - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'ARMATURE': - objects.extend(obj.children) - bpy.data.armatures.remove(obj.data) - elif obj.type == 'CURVE': - bpy.data.curves.remove(obj.data) - elif obj.type == 'EMPTY': - 
objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _remove_asset_and_library(self, asset_group): - if not asset_group.get(AVALON_PROPERTY): - return - - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - if not libpath: - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).all_objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - if library: - bpy.data.libraries.remove(library) - - def _process( - self, libpath, asset_group, group_name, asset, representation, - actions, anim_instances - ): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if (empty.get(AVALON_PROPERTY) and - empty.get(AVALON_PROPERTY).get('family') == 'layout'): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - allowed_types = ['ARMATURE', 'MESH', 'EMPTY'] - - for obj in nodes: - if obj.type in allowed_types: - obj.parent = asset_group - - for obj in nodes: - if obj.type in allowed_types: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - constraints = [] - - armatures = [obj for obj in objects if obj.type == 'ARMATURE'] - - for armature in armatures: - for bone in armature.pose.bones: - for constraint in bone.constraints: - if hasattr(constraint, 'target'): - constraints.append(constraint) - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj) - - action = None - - if actions: - action = actions.get(local_obj.name, None) - - if local_obj.type == 'MESH': - plugin.prepare_data(local_obj.data) - - if obj != local_obj: - for constraint in constraints: - if constraint.target == obj: - constraint.target = local_obj - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material) - elif local_obj.type == 'ARMATURE': - plugin.prepare_data(local_obj.data) - - if action: - if local_obj.animation_data is None: - local_obj.animation_data_create() - local_obj.animation_data.action = action - elif (local_obj.animation_data and - local_obj.animation_data.action): - plugin.prepare_data( - local_obj.animation_data.action) - - # Set link the drivers to the local object - if local_obj.data.animation_data: - for d in local_obj.data.animation_data.drivers: - for v in d.driver.variables: - for t in v.targets: - t.id = local_obj - - elif local_obj.type == 'EMPTY': - if (not anim_instances or - (anim_instances and - local_obj.name not in anim_instances.keys())): - avalon = local_obj.get(AVALON_PROPERTY) - if avalon and avalon.get('family') == 'rig': - creator_plugin = get_legacy_creator_by_name( - "CreateAnimation") - if not creator_plugin: - raise ValueError( - "Creator plugin \"CreateAnimation\" was " - "not found.") - - legacy_create( - creator_plugin, - name=local_obj.name.split(':')[-1] + "_animation", - asset=asset, - options={"useSelection": False, - 
"asset_group": local_obj}, - data={"dependencies": representation} - ) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - armatures = [ - obj for obj in bpy.data.objects - if obj.type == 'ARMATURE' and obj.library is None] - arm_act = {} - - # The armatures with an animation need to be at the center of the - # scene to be hooked correctly by the curves modifiers. - for armature in armatures: - if armature.animation_data and armature.animation_data.action: - arm_act[armature] = armature.animation_data.action - armature.animation_data.action = None - armature.location = (0.0, 0.0, 0.0) - for bone in armature.pose.bones: - bone.location = (0.0, 0.0, 0.0) - bone.rotation_euler = (0.0, 0.0, 0.0) - - curves = [obj for obj in data_to.objects if obj.type == 'CURVE'] - - for curve in curves: - curve_name = curve.name.split(':')[0] - curve_obj = bpy.data.objects.get(curve_name) - - local_obj = plugin.prepare_data(curve) - plugin.prepare_data(local_obj.data) - - # Curves need to reset the hook, but to do that they need to be - # in the view layer. - parent.objects.link(local_obj) - plugin.deselect_all() - local_obj.select_set(True) - bpy.context.view_layer.objects.active = local_obj - if local_obj.library is None: - bpy.ops.object.mode_set(mode='EDIT') - bpy.ops.object.hook_reset() - bpy.ops.object.mode_set(mode='OBJECT') - parent.objects.unlink(local_obj) - - local_obj.use_fake_user = True - - for mod in local_obj.modifiers: - mod.object = bpy.data.objects.get(f"{mod.object.name}") - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - local_obj.parent = curve_obj - objects.append(local_obj) - - for armature in armatures: - if arm_act.get(armature): - armature.animation_data.action = arm_act[armature] - - while bpy.data.orphans_purge(do_local_ids=False): - pass - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - representation = str(context["representation"]["_id"]) - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - objects = self._process( - libpath, asset_group, group_name, asset, representation, - None, None) - - for child in asset_group.children: - if child.get(AVALON_PROPERTY): - avalon_container.objects.link(child) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": 
"openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all objects of the current collection, load the new - ones and add them to the collection. - If the objects of the collection are used in another collection they - will not be removed, only unlinked. Normally this should not be the - case though. - - Warning: - No nested collections are supported at the moment! - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - actions = {} - anim_instances = {} - - for obj in asset_group.children: - obj_meta = obj.get(AVALON_PROPERTY) - if obj_meta.get('family') == 'rig': - # Get animation instance - collections = list(obj.users_collection) - for c in collections: - avalon = c.get(AVALON_PROPERTY) - if avalon and avalon.get('family') == 'animation': - anim_instances[obj.name] = c.name - break - - # Get armature's action - rig = None - for child in obj.children: - if child.type == 'ARMATURE': - rig = child - break - if not rig: - raise Exception("No armature in the rig asset group.") - if rig.animation_data and rig.animation_data.action: - instance_name = obj_meta.get('instance_name') - actions[instance_name] = rig.animation_data.action - - mat = asset_group.matrix_basis.copy() - - # Remove the children of the asset_group first - for child in list(asset_group.children): - self._remove_asset_and_library(child) - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - asset = container.get("asset_name").split("_")[0] - - self._process( - str(libpath), asset_group, object_name, asset, - str(representation.get("_id")), actions, anim_instances - ) 
- - # Link the new objects to the animation collection - for inst in anim_instances.keys(): - try: - obj = bpy.data.objects[inst] - bpy.data.collections[anim_instances[inst]].objects.link(obj) - except KeyError: - self.log.info(f"Object {inst} does not exist anymore.") - coll = bpy.data.collections.get(anim_instances[inst]) - if (coll): - bpy.data.collections.remove(coll) - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - for child in asset_group.children: - if child.get(AVALON_PROPERTY): - avalon_container.objects.link(child) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. - - Warning: - No nested collections are supported at the moment! - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - - if not asset_group: - return False - - # Remove the children of the asset_group first - for child in list(asset_group.children): - self._remove_asset_and_library(child) - - self._remove_asset_and_library(asset_group) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_layout_json.py b/openpype/hosts/blender/plugins/load/load_layout_json.py index eca098627e..81683b8de8 100644 --- a/openpype/hosts/blender/plugins/load/load_layout_json.py +++ b/openpype/hosts/blender/plugins/load/load_layout_json.py @@ -144,7 +144,7 @@ class JsonLayoutLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_look.py b/openpype/hosts/blender/plugins/load/load_look.py index 70d1b95f02..c121f55633 100644 --- a/openpype/hosts/blender/plugins/load/load_look.py +++ b/openpype/hosts/blender/plugins/load/load_look.py @@ -92,7 +92,7 @@ class BlendLookLoader(plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_model.py b/openpype/hosts/blender/plugins/load/load_model.py deleted file mode 100644 index 0a5d98ffa0..0000000000 --- a/openpype/hosts/blender/plugins/load/load_model.py +++ /dev/null @@ -1,296 +0,0 @@ -"""Load a model asset in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.hosts.blender.api import plugin -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendModelLoader(plugin.AssetLoader): - """Load models from a .blend file. - - Because they come from a .blend file we can simply link the collection that - contains the model. There is no further need to 'containerise' it. 
- """ - - families = ["model"] - representations = ["blend"] - - label = "Link Model" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'EMPTY': - objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _process(self, libpath, asset_group, group_name): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - for obj in nodes: - obj.parent = asset_group - - for obj in nodes: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - if local_obj.type != 'EMPTY': - plugin.prepare_data(local_obj.data, group_name) - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material, group_name) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - bpy.data.orphans_purge(do_local_ids=False) - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - plugin.deselect_all() - - if options is not None: - parent = options.get('parent') - transform = options.get('transform') - - if parent and transform: - location = transform.get('translation') - rotation = transform.get('rotation') - scale = transform.get('scale') - - asset_group.location = ( - location.get('x'), - location.get('y'), - location.get('z') - ) - asset_group.rotation_euler = ( - rotation.get('x'), - rotation.get('y'), - rotation.get('z') - ) - asset_group.scale = ( - scale.get('x'), - scale.get('y'), - scale.get('z') - ) - - bpy.context.view_layer.objects.active = parent - asset_group.select_set(True) - - 
bpy.ops.object.parent_set(keep_transform=True) - - plugin.deselect_all() - - objects = self._process(libpath, asset_group, group_name) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all objects of the current collection, load the new - ones and add them to the collection. - If the objects of the collection are used in another collection they - will not be removed, only unlinked. Normally this should not be the - case though. - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - if not asset_group: - return False - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_rig.py b/openpype/hosts/blender/plugins/load/load_rig.py deleted file mode 100644 index 1d23a70061..0000000000 --- a/openpype/hosts/blender/plugins/load/load_rig.py +++ /dev/null @@ -1,417 +0,0 @@ -"""Load a rig asset in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - legacy_create, - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.pipeline.create import get_legacy_creator_by_name -from openpype.hosts.blender.api import ( - plugin, - get_selection, -) -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendRigLoader(plugin.AssetLoader): - """Load rigs from a .blend file.""" - - families = ["rig"] - representations = ["blend"] - - label = "Link Rig" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - if material_slot.material: - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'ARMATURE': - objects.extend(obj.children) - bpy.data.armatures.remove(obj.data) - elif obj.type == 'CURVE': - bpy.data.curves.remove(obj.data) - elif obj.type == 'EMPTY': - objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _process(self, libpath, asset_group, group_name, action): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - allowed_types = ['ARMATURE', 'MESH'] - - for obj in nodes: - if obj.type in allowed_types: - obj.parent = asset_group - - for obj in nodes: - if obj.type in allowed_types: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - constraints = [] - - armatures = [obj for obj in objects if obj.type == 'ARMATURE'] - - for armature in armatures: - for bone in armature.pose.bones: - for constraint in bone.constraints: - if hasattr(constraint, 'target'): - constraints.append(constraint) - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - - if local_obj.type == 'MESH': - plugin.prepare_data(local_obj.data, group_name) - - if obj != local_obj: - for constraint in constraints: - if constraint.target == obj: - 
constraint.target = local_obj - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material, group_name) - elif local_obj.type == 'ARMATURE': - plugin.prepare_data(local_obj.data, group_name) - - if action is not None: - if local_obj.animation_data is None: - local_obj.animation_data_create() - local_obj.animation_data.action = action - elif (local_obj.animation_data and - local_obj.animation_data.action is not None): - plugin.prepare_data( - local_obj.animation_data.action, group_name) - - # Set link the drivers to the local object - if local_obj.data.animation_data: - for d in local_obj.data.animation_data.drivers: - for v in d.driver.variables: - for t in v.targets: - t.id = local_obj - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - curves = [obj for obj in data_to.objects if obj.type == 'CURVE'] - - for curve in curves: - local_obj = plugin.prepare_data(curve, group_name) - plugin.prepare_data(local_obj.data, group_name) - - local_obj.use_fake_user = True - - for mod in local_obj.modifiers: - mod_target_name = mod.object.name - mod.object = bpy.data.objects.get( - f"{group_name}:{mod_target_name}") - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - local_obj.parent = asset_group - objects.append(local_obj) - - while bpy.data.orphans_purge(do_local_ids=False): - pass - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - action = None - - plugin.deselect_all() - - create_animation = False - anim_file = None - - if options is not None: - parent = options.get('parent') - transform = options.get('transform') - action = options.get('action') - create_animation = options.get('create_animation') - anim_file = options.get('animation_file') - - if parent and transform: - location = transform.get('translation') - rotation = transform.get('rotation') - scale = transform.get('scale') - - asset_group.location = ( - location.get('x'), - location.get('y'), - location.get('z') - ) - asset_group.rotation_euler = ( - rotation.get('x'), - rotation.get('y'), - rotation.get('z') - ) - asset_group.scale = ( - scale.get('x'), - scale.get('y'), - scale.get('z') - ) - - bpy.context.view_layer.objects.active = parent - asset_group.select_set(True) 
- - bpy.ops.object.parent_set(keep_transform=True) - - plugin.deselect_all() - - objects = self._process(libpath, asset_group, group_name, action) - - if create_animation: - creator_plugin = get_legacy_creator_by_name("CreateAnimation") - if not creator_plugin: - raise ValueError("Creator plugin \"CreateAnimation\" was " - "not found.") - - asset_group.select_set(True) - - animation_asset = options.get('animation_asset') - - legacy_create( - creator_plugin, - name=namespace + "_animation", - # name=f"{unique_number}_{subset}_animation", - asset=animation_asset, - options={"useSelection": False, "asset_group": asset_group}, - data={"dependencies": str(context["representation"]["_id"])} - ) - - plugin.deselect_all() - - if anim_file: - bpy.ops.import_scene.fbx(filepath=anim_file, anim_offset=0.0) - - imported = get_selection() - - armature = [ - o for o in asset_group.children if o.type == 'ARMATURE'][0] - - imported_group = [ - o for o in imported if o.type == 'EMPTY'][0] - - for obj in imported: - if obj.type == 'ARMATURE': - if not armature.animation_data: - armature.animation_data_create() - armature.animation_data.action = obj.animation_data.action - - self._remove(imported_group) - bpy.data.objects.remove(imported_group) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all children of the asset group, load the new ones - and add them as children of the group. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - # Get the armature of the rig - objects = asset_group.children - armature = [obj for obj in objects if obj.type == 'ARMATURE'][0] - - action = None - if armature.animation_data and armature.animation_data.action: - action = armature.animation_data.action - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name, action) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing asset group from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the asset group was deleted. 
-        """
-        object_name = container["objectName"]
-        asset_group = bpy.data.objects.get(object_name)
-        libpath = asset_group.get(AVALON_PROPERTY).get('libpath')
-
-        # Check how many assets use the same library
-        count = 0
-        for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
-            if obj.get(AVALON_PROPERTY).get('libpath') == libpath:
-                count += 1
-
-        if not asset_group:
-            return False
-
-        self._remove(asset_group)
-
-        bpy.data.objects.remove(asset_group)
-
-        # If it is the last object to use that library, remove it
-        if count == 1:
-            library = bpy.data.libraries.get(bpy.path.basename(libpath))
-            bpy.data.libraries.remove(library)
-
-        return True
diff --git a/openpype/hosts/blender/plugins/publish/collect_current_file.py b/openpype/hosts/blender/plugins/publish/collect_current_file.py
index c3097a0694..c2d8a96a18 100644
--- a/openpype/hosts/blender/plugins/publish/collect_current_file.py
+++ b/openpype/hosts/blender/plugins/publish/collect_current_file.py
@@ -2,7 +2,7 @@ import os
 
 import bpy
 import pyblish.api
-from openpype.pipeline import legacy_io
+from openpype.pipeline import get_current_task_name, get_current_asset_name
 from openpype.hosts.blender.api import workio
 
 
@@ -37,7 +37,7 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
         folder, file = os.path.split(current_file)
         filename, ext = os.path.splitext(file)
 
-        task = legacy_io.Session["AVALON_TASK"]
+        task = get_current_task_name()
 
         data = {}
 
@@ -47,7 +47,7 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
 
         data.update({
             "subset": subset,
-            "asset": os.getenv("AVALON_ASSET", None),
+            "asset": get_current_asset_name(),
             "label": subset,
             "publish": True,
             "family": "workfile",
diff --git a/openpype/hosts/blender/plugins/publish/collect_render.py b/openpype/hosts/blender/plugins/publish/collect_render.py
new file mode 100644
index 0000000000..92e2473a95
--- /dev/null
+++ b/openpype/hosts/blender/plugins/publish/collect_render.py
@@ -0,0 +1,123 @@
+# -*- coding: utf-8 -*-
+"""Collect render data."""
+
+import os
+import re
+
+import bpy
+
+from openpype.hosts.blender.api import colorspace
+import pyblish.api
+
+
+class CollectBlenderRender(pyblish.api.InstancePlugin):
+    """Gather all publishable render layers."""
+
+    order = pyblish.api.CollectorOrder + 0.01
+    hosts = ["blender"]
+    families = ["render"]
+    label = "Collect Render Layers"
+    sync_workfile_version = False
+
+    @staticmethod
+    def generate_expected_beauty(
+        render_product, frame_start, frame_end, frame_step, ext
+    ):
+        """
+        Generate the expected files for the beauty render product. This
+        returns a dict with the list of files that should be rendered. It
+        replaces each sequence of `#` in the product path with the
+        zero-padded frame number.
+        """
+        path = os.path.dirname(render_product)
+        file = os.path.basename(render_product)
+
+        expected_files = []
+
+        for frame in range(frame_start, frame_end + 1, frame_step):
+            frame_str = str(frame).rjust(4, "0")
+            filename = re.sub("#+", frame_str, file)
+            expected_file = f"{os.path.join(path, filename)}.{ext}"
+            expected_files.append(expected_file.replace("\\", "/"))
+
+        return {
+            "beauty": expected_files
+        }
+
+    @staticmethod
+    def generate_expected_aovs(
+        aov_file_product, frame_start, frame_end, frame_step, ext
+    ):
+        """
+        Generate the expected files for each AOV render product. This
+        returns a dict mapping each AOV name to the list of files that
+        should be rendered. It replaces each sequence of `#` in the
+        product path with the zero-padded frame number.
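+
+        A minimal illustration of the substitution (file name, frame and
+        extension are made-up example values, not taken from the pipeline):
+
+            >>> import re
+            >>> frame_str = str(25).rjust(4, "0")
+            >>> re.sub("#+", frame_str, "shot010_diffuse.####") + ".exr"
+            'shot010_diffuse.0025.exr'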
+ """ + expected_files = {} + + for aov_name, aov_file in aov_file_product: + path = os.path.dirname(aov_file) + file = os.path.basename(aov_file) + + aov_files = [] + + for frame in range(frame_start, frame_end + 1, frame_step): + frame_str = str(frame).rjust(4, "0") + filename = re.sub("#+", frame_str, file) + expected_file = f"{os.path.join(path, filename)}.{ext}" + aov_files.append(expected_file.replace("\\", "/")) + + expected_files[aov_name] = aov_files + + return expected_files + + def process(self, instance): + context = instance.context + + render_data = bpy.data.collections[str(instance)].get("render_data") + + assert render_data, "No render data found." + + self.log.info(f"render_data: {dict(render_data)}") + + render_product = render_data.get("render_product") + aov_file_product = render_data.get("aov_file_product") + ext = render_data.get("image_format") + multilayer = render_data.get("multilayer_exr") + + frame_start = context.data["frameStart"] + frame_end = context.data["frameEnd"] + frame_handle_start = context.data["frameStartHandle"] + frame_handle_end = context.data["frameEndHandle"] + + expected_beauty = self.generate_expected_beauty( + render_product, int(frame_start), int(frame_end), + int(bpy.context.scene.frame_step), ext) + + expected_aovs = self.generate_expected_aovs( + aov_file_product, int(frame_start), int(frame_end), + int(bpy.context.scene.frame_step), ext) + + expected_files = expected_beauty | expected_aovs + + instance.data.update({ + "family": "render.farm", + "frameStart": frame_start, + "frameEnd": frame_end, + "frameStartHandle": frame_handle_start, + "frameEndHandle": frame_handle_end, + "fps": context.data["fps"], + "byFrameStep": bpy.context.scene.frame_step, + "review": render_data.get("review", False), + "multipartExr": ext == "exr" and multilayer, + "farm": True, + "expectedFiles": [expected_files], + # OCIO not currently implemented in Blender, but the following + # settings are required by the schema, so it is hardcoded. + # TODO: Implement OCIO in Blender + "colorspaceConfig": "", + "colorspaceDisplay": "sRGB", + "colorspaceView": "ACES 1.0 SDR-video", + "renderProducts": colorspace.ARenderProduct(), + }) + + self.log.info(f"data: {instance.data}") diff --git a/openpype/hosts/blender/plugins/publish/collect_review.py b/openpype/hosts/blender/plugins/publish/collect_review.py index d6abd9d967..3bf2e39e24 100644 --- a/openpype/hosts/blender/plugins/publish/collect_review.py +++ b/openpype/hosts/blender/plugins/publish/collect_review.py @@ -1,7 +1,6 @@ import bpy import pyblish.api -from openpype.pipeline import legacy_io class CollectReview(pyblish.api.InstancePlugin): @@ -30,6 +29,8 @@ class CollectReview(pyblish.api.InstancePlugin): camera = cameras[0].name self.log.debug(f"camera: {camera}") + focal_length = cameras[0].data.lens + # get isolate objects list from meshes instance members . 
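+        # (These members are presumably what the review extractor isolates
+        # in the viewport while capturing the playblast.)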
isolate_objects = [ obj @@ -38,11 +39,11 @@ class CollectReview(pyblish.api.InstancePlugin): ] if not instance.data.get("remove"): - - task = legacy_io.Session.get("AVALON_TASK") + # Store focal length in `burninDataMembers` + burninData = instance.data.setdefault("burninDataMembers", {}) + burninData["focalLength"] = focal_length instance.data.update({ - "subset": f"{task}Review", "review_camera": camera, "frameStart": instance.context.data["frameStart"], "frameEnd": instance.context.data["frameEnd"], diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index 1cab9d225b..87159e53f0 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -21,34 +21,31 @@ class ExtractABC(publish.Extractor): filename = f"{instance.name}.abc" filepath = os.path.join(stagingdir, filename) - context = bpy.context - scene = context.scene - view_layer = context.view_layer - # Perform extraction self.log.info("Performing extraction..") plugin.deselect_all() selected = [] - asset_group = None + active = None for obj in instance: obj.select_set(True) selected.append(obj) + # Set as active the asset group if obj.get(AVALON_PROPERTY): - asset_group = obj + active = obj context = plugin.create_blender_context( - active=asset_group, selected=selected) + active=active, selected=selected) - # We export the abc - bpy.ops.wm.alembic_export( - context, - filepath=filepath, - selected=True, - flatten=False - ) + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=False + ) plugin.deselect_all() diff --git a/openpype/hosts/blender/plugins/publish/extract_abc_animation.py b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py index e141ccaa44..44b2ba3761 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc_animation.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py @@ -20,8 +20,6 @@ class ExtractAnimationABC(publish.Extractor): filename = f"{instance.name}.abc" filepath = os.path.join(stagingdir, filename) - context = bpy.context - # Perform extraction self.log.info("Performing extraction..") diff --git a/openpype/hosts/blender/plugins/publish/extract_blend.py b/openpype/hosts/blender/plugins/publish/extract_blend.py index 6a001b6f65..d4f26b4f3c 100644 --- a/openpype/hosts/blender/plugins/publish/extract_blend.py +++ b/openpype/hosts/blender/plugins/publish/extract_blend.py @@ -10,7 +10,7 @@ class ExtractBlend(publish.Extractor): label = "Extract Blend" hosts = ["blender"] - families = ["model", "camera", "rig", "action", "layout"] + families = ["model", "camera", "rig", "action", "layout", "blendScene"] optional = True def process(self, instance): diff --git a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py new file mode 100644 index 0000000000..036be7bf3c --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py @@ -0,0 +1,68 @@ +import os + +import bpy + +from openpype.pipeline import publish +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY + + +class ExtractCameraABC(publish.Extractor): + """Extract camera as ABC.""" + + label = "Extract Camera (ABC)" + hosts = ["blender"] + families = ["camera"] + optional = True + + def process(self, instance): + # Define extract output file path + 
stagingdir = self.staging_dir(instance)
+        filename = f"{instance.name}.abc"
+        filepath = os.path.join(stagingdir, filename)
+
+        # Perform extraction
+        self.log.info("Performing extraction..")
+
+        plugin.deselect_all()
+
+        asset_group = None
+        for obj in instance:
+            if obj.get(AVALON_PROPERTY):
+                asset_group = obj
+                break
+        assert asset_group, "No asset group found"
+
+        # Need to cast to list because children is a tuple
+        selected = list(asset_group.children)
+        active = selected[0]
+
+        for obj in selected:
+            obj.select_set(True)
+
+        context = plugin.create_blender_context(
+            active=active, selected=selected)
+
+        with bpy.context.temp_override(**context):
+            # We export the abc
+            bpy.ops.wm.alembic_export(
+                filepath=filepath,
+                selected=True,
+                flatten=True
+            )
+
+        plugin.deselect_all()
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'abc',
+            'ext': 'abc',
+            'files': filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(representation)
+
+        self.log.info("Extracted instance '%s' to: %s",
+                      instance.name, representation)
diff --git a/openpype/hosts/blender/plugins/publish/extract_camera.py b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py
similarity index 98%
rename from openpype/hosts/blender/plugins/publish/extract_camera.py
rename to openpype/hosts/blender/plugins/publish/extract_camera_fbx.py
index 9fd181825c..315994140e 100644
--- a/openpype/hosts/blender/plugins/publish/extract_camera.py
+++ b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py
@@ -9,7 +9,7 @@ from openpype.hosts.blender.api import plugin
 class ExtractCamera(publish.Extractor):
     """Extract the camera as FBX."""
 
-    label = "Extract Camera"
+    label = "Extract Camera (FBX)"
 
     hosts = ["blender"]
     families = ["camera"]
     optional = True
diff --git a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
index 963ca1398f..3d176f9c30 100644
--- a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
+++ b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
@@ -9,7 +9,8 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
     label = "Increment Workfile Version"
     optional = True
     hosts = ["blender"]
-    families = ["animation", "model", "rig", "action", "layout"]
+    families = ["animation", "model", "rig", "action", "layout", "blendScene",
+                "render"]
 
     def process(self, context):
diff --git a/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py b/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py
new file mode 100644
index 0000000000..14220b5c9c
--- /dev/null
+++ b/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py
@@ -0,0 +1,47 @@
+import os
+
+import bpy
+
+import pyblish.api
+from openpype.pipeline.publish import (
+    RepairAction,
+    ValidateContentsOrder,
+    PublishValidationError,
+    OptionalPyblishPluginMixin
+)
+from openpype.hosts.blender.api.render_lib import prepare_rendering
+
+
+class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
+                              OptionalPyblishPluginMixin):
+    """Validate that the render output directory is not
+    the same in every submission.
+    """
+
+    order = ValidateContentsOrder
+    families = ["render.farm"]
+    hosts = ["blender"]
+    label = "Validate Render Output for Deadline"
+    optional = True
+    actions = [RepairAction]
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+        filepath = bpy.data.filepath
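+        # Reading of the check below: the scene file's base name must
+        # appear in the configured render output path, so renders from
+        # different workfiles don't collide in one output folder on the
+        # farm.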
+        file = os.path.basename(filepath)
+        filename, ext = os.path.splitext(file)
+        if filename not in bpy.context.scene.render.filepath:
+            raise PublishValidationError(
+                "Render output folder doesn't match "
+                "the Blender scene name! "
+                "Use the Repair action to "
+                "fix the output file path."
+            )
+
+    @classmethod
+    def repair(cls, instance):
+        container = bpy.data.collections[str(instance)]
+        prepare_rendering(container)
+        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)
+        cls.log.debug("Reset the render output folder...")
diff --git a/openpype/hosts/blender/plugins/publish/validate_file_saved.py b/openpype/hosts/blender/plugins/publish/validate_file_saved.py
new file mode 100644
index 0000000000..e191585c55
--- /dev/null
+++ b/openpype/hosts/blender/plugins/publish/validate_file_saved.py
@@ -0,0 +1,20 @@
+import bpy
+
+import pyblish.api
+
+
+class ValidateFileSaved(pyblish.api.InstancePlugin):
+    """Validate that the workfile has been saved."""
+
+    order = pyblish.api.ValidatorOrder - 0.01
+    hosts = ["blender"]
+    label = "Validate File Saved"
+    optional = False
+    exclude_families = []
+
+    def process(self, instance):
+        if any(instance.data["family"] in ef
+               for ef in self.exclude_families):
+            return
+        if bpy.data.is_dirty:
+            raise RuntimeError("Workfile is not saved.")
diff --git a/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py b/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py
new file mode 100644
index 0000000000..ba3a796f35
--- /dev/null
+++ b/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py
@@ -0,0 +1,17 @@
+import bpy
+
+import pyblish.api
+
+
+class ValidateRenderCameraIsSet(pyblish.api.InstancePlugin):
+    """Validate that there is a camera set as active for rendering."""
+
+    order = pyblish.api.ValidatorOrder
+    hosts = ["blender"]
+    families = ["render"]
+    label = "Validate Render Camera Is Set"
+    optional = False
+
+    def process(self, instance):
+        if not bpy.context.scene.camera:
+            raise RuntimeError("No camera is active for rendering.")
diff --git a/openpype/hosts/celaction/hooks/pre_celaction_setup.py b/openpype/hosts/celaction/hooks/pre_celaction_setup.py
index 96e784875c..83aeab7c58 100644
--- a/openpype/hosts/celaction/hooks/pre_celaction_setup.py
+++ b/openpype/hosts/celaction/hooks/pre_celaction_setup.py
@@ -2,20 +2,18 @@ import os
 import shutil
 import winreg
 import subprocess
-from openpype.lib import PreLaunchHook, get_openpype_execute_args
-from openpype.hosts.celaction import scripts
-
-CELACTION_SCRIPTS_DIR = os.path.dirname(
-    os.path.abspath(scripts.__file__)
-)
+from openpype.lib import get_openpype_execute_args
+from openpype.lib.applications import PreLaunchHook, LaunchTypes
+from openpype.hosts.celaction import CELACTION_ROOT_DIR
 
 
 class CelactionPrelaunchHook(PreLaunchHook):
     """ Bootstrap celaction with pype """
 
-    app_groups = ["celaction"]
-    platforms = ["windows"]
+    app_groups = {"celaction"}
+    platforms = {"windows"}
+    launch_types = {LaunchTypes.local}
 
     def execute(self):
        asset_doc = self.data["asset_doc"]
@@ -37,7 +35,9 @@ class CelactionPrelaunchHook(PreLaunchHook):
             winreg.KEY_ALL_ACCESS
         )
 
-        path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py")
+        path_to_cli = os.path.join(
+            CELACTION_ROOT_DIR, "scripts", "publish_cli.py"
+        )
         subprocess_args = get_openpype_execute_args("run", path_to_cli)
         openpype_executable = subprocess_args.pop(0)
         workfile_settings = self.get_workfile_settings()
@@ -122,9 +122,8 @@ class CelactionPrelaunchHook(PreLaunchHook):
        if not 
os.path.exists(workfile_path): # TODO add ability to set different template workfile path via # settings - openpype_celaction_dir = os.path.dirname(CELACTION_SCRIPTS_DIR) template_path = os.path.join( - openpype_celaction_dir, + CELACTION_ROOT_DIR, "resources", "celaction_template_scene.scn" ) diff --git a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py index 35ac7fc264..c815c1edd4 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py +++ b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py @@ -1,6 +1,5 @@ import os import pyblish.api -from openpype.pipeline import legacy_io class CollectCelactionInstances(pyblish.api.ContextPlugin): @@ -10,7 +9,7 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.1 def process(self, context): - task = legacy_io.Session["AVALON_TASK"] + task = context.data["task"] current_file = context.data["currentFile"] staging_dir = os.path.dirname(current_file) scene_file = os.path.basename(current_file) diff --git a/openpype/hosts/flame/api/menu.py b/openpype/hosts/flame/api/menu.py index 5f9dc57a61..e8bdf32ebd 100644 --- a/openpype/hosts/flame/api/menu.py +++ b/openpype/hosts/flame/api/menu.py @@ -1,7 +1,9 @@ -import os -from qtpy import QtWidgets from copy import deepcopy from pprint import pformat + +from qtpy import QtWidgets + +from openpype.pipeline import get_current_project_name from openpype.tools.utils.host_tools import HostToolsHelper menu_group_name = 'OpenPype' @@ -61,10 +63,10 @@ class _FlameMenuApp(object): self.framework.prefs_global, self.name) self.mbox = QtWidgets.QMessageBox() - + project_name = get_current_project_name() self.menu = { "actions": [{ - 'name': os.getenv("AVALON_PROJECT", "project"), + 'name': project_name or "project", 'isEnabled': False }], "name": self.menu_group_name diff --git a/openpype/hosts/flame/hooks/pre_flame_setup.py b/openpype/hosts/flame/hooks/pre_flame_setup.py index 83110bb6b5..850569cfdd 100644 --- a/openpype/hosts/flame/hooks/pre_flame_setup.py +++ b/openpype/hosts/flame/hooks/pre_flame_setup.py @@ -6,13 +6,10 @@ import socket from pprint import pformat from openpype.lib import ( - PreLaunchHook, get_openpype_username, run_subprocess, ) -from openpype.lib.applications import ( - ApplicationLaunchFailed -) +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts import flame as opflame @@ -22,11 +19,12 @@ class FlamePrelaunch(PreLaunchHook): Will make sure flame_script_dirs are copied to user's folder defined in environment var FLAME_SCRIPT_DIR. 
""" - app_groups = ["flame"] + app_groups = {"flame"} permissions = 0o777 wtc_script_path = os.path.join( opflame.HOST_DIR, "api", "scripts", "wiretap_com.py") + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/openpype/hosts/flame/plugins/load/load_clip.py b/openpype/hosts/flame/plugins/load/load_clip.py index dfb2d2b6f0..ca4eab0f63 100644 --- a/openpype/hosts/flame/plugins/load/load_clip.py +++ b/openpype/hosts/flame/plugins/load/load_clip.py @@ -48,7 +48,6 @@ class LoadClip(opfapi.ClipLoader): self.fpd = fproject.current_workspace.desktop # load clip to timeline and get main variables - namespace = namespace version = context['version'] version_data = version.get("data", {}) version_name = version.get("name", None) @@ -82,8 +81,9 @@ class LoadClip(opfapi.ClipLoader): os.makedirs(openclip_dir) # prepare clip data from context ad send it to openClipLoader + path = self.filepath_from_context(context) loading_context = { - "path": self.fname.replace("\\", "/"), + "path": path.replace("\\", "/"), "colorspace": colorspace, "version": "v{:0>3}".format(version_name), "layer_rename_template": self.layer_rename_template, diff --git a/openpype/hosts/flame/plugins/load/load_clip_batch.py b/openpype/hosts/flame/plugins/load/load_clip_batch.py index 5c5a77f0d0..1f3a017d72 100644 --- a/openpype/hosts/flame/plugins/load/load_clip_batch.py +++ b/openpype/hosts/flame/plugins/load/load_clip_batch.py @@ -45,7 +45,6 @@ class LoadClipBatch(opfapi.ClipLoader): self.batch = options.get("batch") or flame.batch # load clip to timeline and get main variables - namespace = namespace version = context['version'] version_data = version.get("data", {}) version_name = version.get("name", None) @@ -81,9 +80,10 @@ class LoadClipBatch(opfapi.ClipLoader): if not os.path.exists(openclip_dir): os.makedirs(openclip_dir) - # prepare clip data from context ad send it to openClipLoader + # prepare clip data from context and send it to openClipLoader + path = self.filepath_from_context(context) loading_context = { - "path": self.fname.replace("\\", "/"), + "path": path.replace("\\", "/"), "colorspace": colorspace, "version": "v{:0>3}".format(version_name), "layer_rename_template": self.layer_rename_template, diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py index 23fdf5e785..e14f960a2b 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py @@ -325,7 +325,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin): def _create_shot_instance(self, context, clip_name, **data): master_layer = data.get("heroTrack") hierarchy_data = data.get("hierarchyData") - asset = data.get("asset") if not master_layer: return diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py index 917041e053..f8cfa9e963 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py @@ -2,7 +2,6 @@ import pyblish.api import openpype.hosts.flame.api as opfapi from openpype.hosts.flame.otio import flame_export -from openpype.pipeline import legacy_io from openpype.pipeline.create import get_subset_name @@ -19,7 +18,7 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin): # main asset_doc = context.data["assetEntity"] - task_name = 
legacy_io.Session["AVALON_TASK"] + task_name = context.data["task"] project = opfapi.get_current_project() sequence = opfapi.get_current_sequence(opfapi.CTX.selection) diff --git a/openpype/hosts/fusion/addon.py b/openpype/hosts/fusion/addon.py index 45683cfbde..8343f3c79d 100644 --- a/openpype/hosts/fusion/addon.py +++ b/openpype/hosts/fusion/addon.py @@ -60,8 +60,9 @@ class FusionAddon(OpenPypeModule, IHostAddon): return [] return [os.path.join(FUSION_HOST_DIR, "hooks")] - def add_implementation_envs(self, env, _app): + def add_implementation_envs(self, env, app): # Set default values if are not already set via settings + defaults = {"OPENPYPE_LOG_NO_COLORS": "Yes"} for key, value in defaults.items(): if not env.get(key): diff --git a/openpype/hosts/fusion/api/__init__.py b/openpype/hosts/fusion/api/__init__.py index dba55a98d9..aabc624016 100644 --- a/openpype/hosts/fusion/api/__init__.py +++ b/openpype/hosts/fusion/api/__init__.py @@ -3,9 +3,7 @@ from .pipeline import ( ls, imprint_container, - parse_container, - list_instances, - remove_instance + parse_container ) from .lib import ( @@ -22,6 +20,7 @@ from .menu import launch_openpype_menu __all__ = [ # pipeline + "FusionHost", "ls", "imprint_container", @@ -32,6 +31,7 @@ __all__ = [ "update_frame_range", "set_asset_framerange", "get_current_comp", + "get_bmd_library", "comp_lock_and_undo_chunk", # menu diff --git a/openpype/hosts/fusion/api/action.py b/openpype/hosts/fusion/api/action.py index 347d552108..66b787c2f1 100644 --- a/openpype/hosts/fusion/api/action.py +++ b/openpype/hosts/fusion/api/action.py @@ -18,8 +18,10 @@ class SelectInvalidAction(pyblish.api.Action): icon = "search" # Icon from Awesome Icon def process(self, context, plugin): - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) + errored_instances = get_errored_instances_from_context( + context, + plugin=plugin, + ) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") @@ -51,6 +53,7 @@ class SelectInvalidAction(pyblish.api.Action): names = set() for tool in invalid: flow.Select(tool, True) + comp.SetActiveTool(tool) names.add(tool.Name) self.log.info( "Selecting invalid tools: %s" % ", ".join(sorted(names)) diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index cba8c38c2f..c4a1488606 100644 --- a/openpype/hosts/fusion/api/lib.py +++ b/openpype/hosts/fusion/api/lib.py @@ -14,7 +14,7 @@ from openpype.client import ( ) from openpype.pipeline import ( switch_container, - legacy_io, + get_current_project_name, ) from openpype.pipeline.context_tools import get_current_project_asset @@ -181,80 +181,6 @@ def validate_comp_prefs(comp=None, force_repair=False): dialog.setStyleSheet(load_stylesheet()) -def switch_item(container, - asset_name=None, - subset_name=None, - representation_name=None): - """Switch container asset, subset or representation of a container by name. - - It'll always switch to the latest version - of course a different - approach could be implemented. - - Args: - container (dict): data of the item to switch with - asset_name (str): name of the asset - subset_name (str): name of the subset - representation_name (str): name of the representation - - Returns: - dict - - """ - - if all(not x for x in [asset_name, subset_name, representation_name]): - raise ValueError("Must have at least one change provided to switch.") - - # Collect any of current asset, subset and representation if not provided - # so we can use the original name from those. 
- project_name = legacy_io.active_project() - if any(not x for x in [asset_name, subset_name, representation_name]): - repre_id = container["representation"] - representation = get_representation_by_id(project_name, repre_id) - repre_parent_docs = get_representation_parents( - project_name, representation) - if repre_parent_docs: - version, subset, asset, _ = repre_parent_docs - else: - version = subset = asset = None - - if asset_name is None: - asset_name = asset["name"] - - if subset_name is None: - subset_name = subset["name"] - - if representation_name is None: - representation_name = representation["name"] - - # Find the new one - asset = get_asset_by_name(project_name, asset_name, fields=["_id"]) - assert asset, ("Could not find asset in the database with the name " - "'%s'" % asset_name) - - subset = get_subset_by_name( - project_name, subset_name, asset["_id"], fields=["_id"] - ) - assert subset, ("Could not find subset in the database with the name " - "'%s'" % subset_name) - - version = get_last_version_by_subset_id( - project_name, subset["_id"], fields=["_id"] - ) - assert version, "Could not find a version for {}.{}".format( - asset_name, subset_name - ) - - representation = get_representation_by_name( - project_name, representation_name, version["_id"] - ) - assert representation, ("Could not find representation in the database " - "with the name '%s'" % representation_name) - - switch_container(container, representation) - - return representation - - @contextlib.contextmanager def maintained_selection(comp=None): """Reset comp selection from before the context after the context""" diff --git a/openpype/hosts/fusion/api/menu.py b/openpype/hosts/fusion/api/menu.py index 92f38a64c2..50250a6656 100644 --- a/openpype/hosts/fusion/api/menu.py +++ b/openpype/hosts/fusion/api/menu.py @@ -12,7 +12,7 @@ from openpype.hosts.fusion.api.lib import ( set_asset_framerange, set_asset_resolution, ) -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name from openpype.resources import get_openpype_icon_filepath from .pipeline import FusionEventHandler @@ -125,7 +125,7 @@ class OpenPypeMenu(QtWidgets.QWidget): def on_task_changed(self): # Update current context label - label = legacy_io.Session["AVALON_ASSET"] + label = get_current_asset_name() self.asset_label.setText(label) def register_callback(self, name, fn): diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py index a768a3f0f8..a886086758 100644 --- a/openpype/hosts/fusion/api/pipeline.py +++ b/openpype/hosts/fusion/api/pipeline.py @@ -287,49 +287,6 @@ def parse_container(tool): return container -# TODO: Function below is currently unused prototypes -def list_instances(creator_id=None): - """Return created instances in current workfile which will be published. - Returns: - (list) of dictionaries matching instances format - """ - - comp = get_current_comp() - tools = comp.GetToolList(False).values() - - instance_signature = { - "id": "pyblish.avalon.instance", - "identifier": creator_id - } - instances = [] - for tool in tools: - - data = tool.GetData('openpype') - if not isinstance(data, dict): - continue - - if data.get("id") != instance_signature["id"]: - continue - - if creator_id and data.get("identifier") != creator_id: - continue - - instances.append(tool) - - return instances - - -# TODO: Function below is currently unused prototypes -def remove_instance(instance): - """Remove instance from current workfile. 
- - Args: - instance (dict): instance representation from subsetmanager model - """ - # Assume instance is a Fusion tool directly - instance["tool"].Delete() - - class FusionEventThread(QtCore.QThread): """QThread which will periodically ping Fusion app for any events. The fusion.UIManager must be set up to be notified of events before they'll diff --git a/openpype/hosts/fusion/deploy/MenuScripts/openpype_menu.py b/openpype/hosts/fusion/deploy/MenuScripts/openpype_menu.py index 685e58d58f..1c58ee50e4 100644 --- a/openpype/hosts/fusion/deploy/MenuScripts/openpype_menu.py +++ b/openpype/hosts/fusion/deploy/MenuScripts/openpype_menu.py @@ -1,6 +1,19 @@ import os import sys +if sys.version_info < (3, 7): + # hack to handle discrepancy between distributed libraries and Python 3.6 + # mostly because wrong version of urllib3 + # TODO remove when not necessary + from openpype import PACKAGE_DIR + FUSION_HOST_DIR = os.path.join(PACKAGE_DIR, "hosts", "fusion") + + vendor_path = os.path.join(FUSION_HOST_DIR, "vendor") + if vendor_path not in sys.path: + sys.path.insert(0, vendor_path) + + print(f"Added vendorized libraries from {vendor_path}") + from openpype.lib import Logger from openpype.pipeline import ( install_host, diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py deleted file mode 100644 index 1a0a9911ea..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py +++ /dev/null @@ -1,16 +0,0 @@ -from openpype.hosts.fusion.api import ( - comp_lock_and_undo_chunk, - get_current_comp -) - - -def main(): - comp = get_current_comp() - """Set all selected backgrounds to 32 bit""" - with comp_lock_and_undo_chunk(comp, 'Selected Backgrounds to 32bit'): - tools = comp.GetToolList(True, "Background").values() - for tool in tools: - tool.Depth = 5 - - -main() diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py deleted file mode 100644 index c2eea505e5..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py +++ /dev/null @@ -1,16 +0,0 @@ -from openpype.hosts.fusion.api import ( - comp_lock_and_undo_chunk, - get_current_comp -) - - -def main(): - comp = get_current_comp() - """Set all backgrounds to 32 bit""" - with comp_lock_and_undo_chunk(comp, 'Backgrounds to 32bit'): - tools = comp.GetToolList(False, "Background").values() - for tool in tools: - tool.Depth = 5 - - -main() diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py deleted file mode 100644 index 2118767f4d..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py +++ /dev/null @@ -1,16 +0,0 @@ -from openpype.hosts.fusion.api import ( - comp_lock_and_undo_chunk, - get_current_comp -) - - -def main(): - comp = get_current_comp() - """Set all selected loaders to 32 bit""" - with comp_lock_and_undo_chunk(comp, 'Selected Loaders to 32bit'): - tools = comp.GetToolList(True, "Loader").values() - for tool in tools: - tool.Depth = 5 - - -main() diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py deleted file mode 
100644 index 7dd1f66a5e..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py +++ /dev/null @@ -1,16 +0,0 @@ -from openpype.hosts.fusion.api import ( - comp_lock_and_undo_chunk, - get_current_comp -) - - -def main(): - comp = get_current_comp() - """Set all loaders to 32 bit""" - with comp_lock_and_undo_chunk(comp, 'Loaders to 32bit'): - tools = comp.GetToolList(False, "Loader").values() - for tool in tools: - tool.Depth = 5 - - -main() diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py deleted file mode 100644 index f08dc0bf2c..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py +++ /dev/null @@ -1,200 +0,0 @@ -import os -import sys -import glob -import logging - -from qtpy import QtWidgets, QtCore - -import qtawesome as qta - -from openpype.client import get_assets -from openpype import style -from openpype.pipeline import ( - install_host, - legacy_io, -) -from openpype.hosts.fusion import api -from openpype.pipeline.context_tools import get_workdir_from_session - -log = logging.getLogger("Fusion Switch Shot") - - -class App(QtWidgets.QWidget): - - def __init__(self, parent=None): - - ################################################ - # |---------------------| |------------------| # - # |Comp | |Asset | # - # |[..][ v]| |[ v]| # - # |---------------------| |------------------| # - # | Update existing comp [ ] | # - # |------------------------------------------| # - # | Switch | # - # |------------------------------------------| # - ################################################ - - QtWidgets.QWidget.__init__(self, parent) - - layout = QtWidgets.QVBoxLayout() - - # Comp related input - comp_hlayout = QtWidgets.QHBoxLayout() - comp_label = QtWidgets.QLabel("Comp file") - comp_label.setFixedWidth(50) - comp_box = QtWidgets.QComboBox() - - button_icon = qta.icon("fa.folder", color="white") - open_from_dir = QtWidgets.QPushButton() - open_from_dir.setIcon(button_icon) - - comp_box.setFixedHeight(25) - open_from_dir.setFixedWidth(25) - open_from_dir.setFixedHeight(25) - - comp_hlayout.addWidget(comp_label) - comp_hlayout.addWidget(comp_box) - comp_hlayout.addWidget(open_from_dir) - - # Asset related input - asset_hlayout = QtWidgets.QHBoxLayout() - asset_label = QtWidgets.QLabel("Shot") - asset_label.setFixedWidth(50) - - asset_box = QtWidgets.QComboBox() - asset_box.setLineEdit(QtWidgets.QLineEdit()) - asset_box.setFixedHeight(25) - - refresh_icon = qta.icon("fa.refresh", color="white") - refresh_btn = QtWidgets.QPushButton() - refresh_btn.setIcon(refresh_icon) - - asset_box.setFixedHeight(25) - refresh_btn.setFixedWidth(25) - refresh_btn.setFixedHeight(25) - - asset_hlayout.addWidget(asset_label) - asset_hlayout.addWidget(asset_box) - asset_hlayout.addWidget(refresh_btn) - - # Options - options = QtWidgets.QHBoxLayout() - options.setAlignment(QtCore.Qt.AlignLeft) - - current_comp_check = QtWidgets.QCheckBox() - current_comp_check.setChecked(True) - current_comp_label = QtWidgets.QLabel("Use current comp") - - options.addWidget(current_comp_label) - options.addWidget(current_comp_check) - - accept_btn = QtWidgets.QPushButton("Switch") - - layout.addLayout(options) - layout.addLayout(comp_hlayout) - layout.addLayout(asset_hlayout) - layout.addWidget(accept_btn) - - self._open_from_dir = open_from_dir - self._comps = comp_box - self._assets = asset_box - self._use_current = current_comp_check - self._accept_btn = accept_btn - 
self._refresh_btn = refresh_btn - - self.setWindowTitle("Fusion Switch Shot") - self.setLayout(layout) - - self.resize(260, 140) - self.setMinimumWidth(260) - self.setFixedHeight(140) - - self.connections() - - # Update ui to correct state - self._on_use_current_comp() - self._refresh() - - def connections(self): - self._use_current.clicked.connect(self._on_use_current_comp) - self._open_from_dir.clicked.connect(self._on_open_from_dir) - self._refresh_btn.clicked.connect(self._refresh) - self._accept_btn.clicked.connect(self._on_switch) - - def _on_use_current_comp(self): - state = self._use_current.isChecked() - self._open_from_dir.setEnabled(not state) - self._comps.setEnabled(not state) - - def _on_open_from_dir(self): - - start_dir = get_workdir_from_session() - comp_file, _ = QtWidgets.QFileDialog.getOpenFileName( - self, "Choose comp", start_dir) - - if not comp_file: - return - - # Create completer - self.populate_comp_box([comp_file]) - self._refresh() - - def _refresh(self): - # Clear any existing items - self._assets.clear() - - asset_names = self.collect_asset_names() - completer = QtWidgets.QCompleter(asset_names) - - self._assets.setCompleter(completer) - self._assets.addItems(asset_names) - - def _on_switch(self): - - if not self._use_current.isChecked(): - file_name = self._comps.itemData(self._comps.currentIndex()) - else: - comp = api.get_current_comp() - file_name = comp.GetAttrs("COMPS_FileName") - - asset = self._assets.currentText() - - import colorbleed.scripts.fusion_switch_shot as switch_shot - switch_shot.switch(asset_name=asset, filepath=file_name, new=True) - - def collect_slap_comps(self, directory): - items = glob.glob("{}/*.comp".format(directory)) - return items - - def collect_asset_names(self): - project_name = legacy_io.active_project() - asset_docs = get_assets(project_name, fields=["name"]) - asset_names = { - asset_doc["name"] - for asset_doc in asset_docs - } - return list(asset_names) - - def populate_comp_box(self, files): - """Ensure we display the filename only but the path is stored as well - - Args: - files (list): list of full file path [path/to/item/item.ext,] - - Returns: - None - """ - - for f in files: - filename = os.path.basename(f) - self._comps.addItem(filename, userData=f) - - -if __name__ == '__main__': - install_host(api) - - app = QtWidgets.QApplication(sys.argv) - window = App() - window.setStyleSheet(style.load_stylesheet()) - window.show() - sys.exit(app.exec_()) diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py deleted file mode 100644 index 3d2d1ecfa6..0000000000 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Forces Fusion to 'retrigger' the Loader to update. - -Warning: - This might change settings like 'Reverse', 'Loop', trims and other - settings of the Loader. So use this at your own risk. 
- -""" -from openpype.hosts.fusion.api.pipeline import ( - get_current_comp, - comp_lock_and_undo_chunk -) - - -def update_loader_ranges(): - comp = get_current_comp() - with comp_lock_and_undo_chunk(comp, "Reload clip time ranges"): - tools = comp.GetToolList(True, "Loader").values() - for tool in tools: - - # Get tool attributes - tool_a = tool.GetAttrs() - clipTable = tool_a['TOOLST_Clip_Name'] - altclipTable = tool_a['TOOLST_AltClip_Name'] - startTime = tool_a['TOOLNT_Clip_Start'] - old_global_in = tool.GlobalIn[comp.CurrentTime] - - # Reapply - for index, _ in clipTable.items(): - time = startTime[index] - tool.Clip[time] = tool.Clip[time] - - for index, _ in altclipTable.items(): - time = startTime[index] - tool.ProxyFilename[time] = tool.ProxyFilename[time] - - tool.GlobalIn[comp.CurrentTime] = old_global_in - - -if __name__ == '__main__': - update_loader_ranges() diff --git a/openpype/hosts/fusion/deploy/fusion_shared.prefs b/openpype/hosts/fusion/deploy/fusion_shared.prefs index b379ea7c66..93b08aa886 100644 --- a/openpype/hosts/fusion/deploy/fusion_shared.prefs +++ b/openpype/hosts/fusion/deploy/fusion_shared.prefs @@ -5,7 +5,7 @@ Global = { Map = { ["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy", ["Config:"] = "UserPaths:Config;OpenPype:Config", - ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts", + ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts", }, }, Script = { diff --git a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py index fd726ccda1..66b0f803aa 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py @@ -2,12 +2,16 @@ import os import shutil import platform from pathlib import Path -from openpype.lib import PreLaunchHook, ApplicationLaunchFailed from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, get_fusion_version, ) +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) class FusionCopyPrefsPrelaunch(PreLaunchHook): @@ -21,8 +25,9 @@ class FusionCopyPrefsPrelaunch(PreLaunchHook): Master.prefs is defined in openpype/hosts/fusion/deploy/fusion_shared.prefs """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 2 + launch_types = {LaunchTypes.local} def get_fusion_profile_name(self, profile_version) -> str: # Returns 'Default', unless FUSION16_PROFILE is set diff --git a/openpype/hosts/fusion/hooks/pre_fusion_setup.py b/openpype/hosts/fusion/hooks/pre_fusion_setup.py index f27cd1674b..576628e876 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_setup.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_setup.py @@ -1,5 +1,9 @@ import os -from openpype.lib import PreLaunchHook, ApplicationLaunchFailed +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, @@ -17,8 +21,9 @@ class FusionPrelaunch(PreLaunchHook): Fusion 18 : Python 3.6 - 3.10 """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 1 + launch_types = {LaunchTypes.local} def execute(self): # making sure python 3 is installed at provided path diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index 04898d0a45..4564880b50 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -30,10 +30,6 @@ class 
CreateSaver(NewCreator): instance_attributes = [ "reviewable" ] - default_variants = [ - "Main", - "Mask" - ] # TODO: This should be renamed together with Nuke so it is aligned temp_rendering_path_template = ( @@ -127,6 +123,9 @@ class CreateSaver(NewCreator): def _imprint(self, tool, data): # Save all data in a "openpype.{key}" = value data + # Instance id is the tool's name so we don't need to imprint as data + data.pop("instance_id", None) + active = data.pop("active", None) if active is not None: # Use active value to set the passthrough state @@ -166,7 +165,8 @@ class CreateSaver(NewCreator): filepath = self.temp_rendering_path_template.format( **formatting_data) - tool["Clip"] = os.path.normpath(filepath) + comp = get_current_comp() + tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath)) # Rename tool if tool.Name != subset: @@ -192,6 +192,10 @@ class CreateSaver(NewCreator): passthrough = attrs["TOOLB_PassThrough"] data["active"] = not passthrough + # Override publisher's UUID generation because tool names are + # already unique in Fusion in a comp + data["instance_id"] = tool.Name + return data def get_pre_create_attr_defs(self): @@ -250,11 +254,7 @@ class CreateSaver(NewCreator): label="Review", ) - def apply_settings( - self, - project_settings, - system_settings - ): + def apply_settings(self, project_settings): """Method called on initialization of plugin to apply settings.""" # plugin settings diff --git a/openpype/hosts/fusion/plugins/create/create_workfile.py b/openpype/hosts/fusion/plugins/create/create_workfile.py index 40721ea88a..8acaaa172f 100644 --- a/openpype/hosts/fusion/plugins/create/create_workfile.py +++ b/openpype/hosts/fusion/plugins/create/create_workfile.py @@ -5,7 +5,6 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, - legacy_io, ) @@ -64,10 +63,10 @@ class FusionWorkfileCreator(AutoCreator): existing_instance = instance break - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] + project_name = self.create_context.get_current_project_name() + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name if existing_instance is None: asset_doc = get_asset_by_name(project_name, asset_name) diff --git a/openpype/hosts/fusion/plugins/load/load_alembic.py b/openpype/hosts/fusion/plugins/load/load_alembic.py index 11bf59af12..9b6d1e12b4 100644 --- a/openpype/hosts/fusion/plugins/load/load_alembic.py +++ b/openpype/hosts/fusion/plugins/load/load_alembic.py @@ -32,7 +32,7 @@ class FusionLoadAlembicMesh(load.LoaderPlugin): comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create tool"): - path = self.fname + path = self.filepath_from_context(context) args = (-32768, -32768) tool = comp.AddTool(self.tool_type, *args) diff --git a/openpype/hosts/fusion/plugins/load/load_fbx.py b/openpype/hosts/fusion/plugins/load/load_fbx.py index c73ad78394..d15d2c33d7 100644 --- a/openpype/hosts/fusion/plugins/load/load_fbx.py +++ b/openpype/hosts/fusion/plugins/load/load_fbx.py @@ -45,7 +45,7 @@ class FusionLoadFBXMesh(load.LoaderPlugin): # Create the Loader with the filename path set comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create tool"): - path = self.fname + path = self.filepath_from_context(context) args = (-32768, -32768) tool = 
comp.AddTool(self.tool_type, *args) diff --git a/openpype/hosts/fusion/plugins/load/load_sequence.py b/openpype/hosts/fusion/plugins/load/load_sequence.py index 552e282587..4401af97eb 100644 --- a/openpype/hosts/fusion/plugins/load/load_sequence.py +++ b/openpype/hosts/fusion/plugins/load/load_sequence.py @@ -1,10 +1,7 @@ import contextlib import openpype.pipeline.load as load -from openpype.pipeline.load import ( - get_representation_context, - get_representation_path_from_context, -) +from openpype.pipeline.load import get_representation_context from openpype.hosts.fusion.api import ( imprint_container, get_current_comp, @@ -157,14 +154,14 @@ class FusionLoadSequence(load.LoaderPlugin): namespace = context["asset"]["name"] # Use the first file for now - path = get_representation_path_from_context(context) + path = self.filepath_from_context(context) # Create the Loader with the filename path set comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create Loader"): args = (-32768, -32768) tool = comp.AddTool("Loader", *args) - tool["Clip"] = path + tool["Clip"] = comp.ReverseMapPath(path) # Set global in point to start frame (if in version.data) start = self._get_start(context["version"], tool) @@ -228,7 +225,7 @@ class FusionLoadSequence(load.LoaderPlugin): comp = tool.Comp() context = get_representation_context(representation) - path = get_representation_path_from_context(context) + path = self.filepath_from_context(context) # Get start frame from version data start = self._get_start(context["version"], tool) @@ -247,7 +244,7 @@ class FusionLoadSequence(load.LoaderPlugin): "TimeCodeOffset", ), ): - tool["Clip"] = path + tool["Clip"] = comp.ReverseMapPath(path) # Set the global in to the start frame of the sequence global_in_changed = loader_shift(tool, start, relative=False) diff --git a/openpype/hosts/fusion/plugins/load/load_workfile.py b/openpype/hosts/fusion/plugins/load/load_workfile.py index b49d104a15..14e36ca8fd 100644 --- a/openpype/hosts/fusion/plugins/load/load_workfile.py +++ b/openpype/hosts/fusion/plugins/load/load_workfile.py @@ -27,6 +27,7 @@ class FusionLoadWorkfile(load.LoaderPlugin): # Get needed elements bmd = get_bmd_library() comp = get_current_comp() + path = self.filepath_from_context(context) # Paste the content of the file into the current comp - comp.Paste(bmd.readfile(self.fname)) + comp.Paste(bmd.readfile(path)) diff --git a/openpype/hosts/fusion/plugins/publish/collect_instances.py b/openpype/hosts/fusion/plugins/publish/collect_instances.py index 6016baa2a9..4d6da79b77 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_instances.py +++ b/openpype/hosts/fusion/plugins/publish/collect_instances.py @@ -85,5 +85,5 @@ class CollectInstanceData(pyblish.api.InstancePlugin): # Add review family if the instance is marked as 'review' # This could be done through a 'review' Creator attribute. 
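+        # Plugins filtered to the "review" family (e.g. the review
+        # extractors) will then pick this instance up downstream.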
if instance.data.get("review", False): - self.log.info("Adding review family..") + self.log.debug("Adding review family..") instance.data["families"].append("review") diff --git a/openpype/hosts/fusion/plugins/publish/collect_render.py b/openpype/hosts/fusion/plugins/publish/collect_render.py index a20a142701..a7daa0b64c 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_render.py +++ b/openpype/hosts/fusion/plugins/publish/collect_render.py @@ -108,7 +108,6 @@ class CollectFusionRender( fam = "render.farm" if fam not in instance.families: instance.families.append(fam) - instance.toBeRenderedOn = "deadline" instance.farm = True # to skip integrate if "review" in instance.families: # to skip ExtractReview locally @@ -146,9 +145,11 @@ class CollectFusionRender( start = render_instance.frameStart - render_instance.handleStart end = render_instance.frameEnd + render_instance.handleEnd - path = ( - render_instance.tool["Clip"] - [render_instance.workfileComp.TIME_UNDEFINED] + comp = render_instance.workfileComp + path = comp.MapPath( + render_instance.tool["Clip"][ + render_instance.workfileComp.TIME_UNDEFINED + ] ) output_dir = os.path.dirname(path) render_instance.outputDir = output_dir diff --git a/openpype/hosts/fusion/vendor/attr/__init__.py b/openpype/hosts/fusion/vendor/attr/__init__.py new file mode 100644 index 0000000000..b1ce7fe248 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/__init__.py @@ -0,0 +1,78 @@ +from __future__ import absolute_import, division, print_function + +import sys + +from functools import partial + +from . import converters, exceptions, filters, setters, validators +from ._cmp import cmp_using +from ._config import get_run_validators, set_run_validators +from ._funcs import asdict, assoc, astuple, evolve, has, resolve_types +from ._make import ( + NOTHING, + Attribute, + Factory, + attrib, + attrs, + fields, + fields_dict, + make_class, + validate, +) +from ._version_info import VersionInfo + + +__version__ = "21.2.0" +__version_info__ = VersionInfo._from_version_string(__version__) + +__title__ = "attrs" +__description__ = "Classes Without Boilerplate" +__url__ = "https://www.attrs.org/" +__uri__ = __url__ +__doc__ = __description__ + " <" + __uri__ + ">" + +__author__ = "Hynek Schlawack" +__email__ = "hs@ox.cx" + +__license__ = "MIT" +__copyright__ = "Copyright (c) 2015 Hynek Schlawack" + + +s = attributes = attrs +ib = attr = attrib +dataclass = partial(attrs, auto_attribs=True) # happy Easter ;) + +__all__ = [ + "Attribute", + "Factory", + "NOTHING", + "asdict", + "assoc", + "astuple", + "attr", + "attrib", + "attributes", + "attrs", + "cmp_using", + "converters", + "evolve", + "exceptions", + "fields", + "fields_dict", + "filters", + "get_run_validators", + "has", + "ib", + "make_class", + "resolve_types", + "s", + "set_run_validators", + "setters", + "validate", + "validators", +] + +if sys.version_info[:2] >= (3, 6): + from ._next_gen import define, field, frozen, mutable + + __all__.extend((define, field, frozen, mutable)) diff --git a/openpype/hosts/fusion/vendor/attr/__init__.pyi b/openpype/hosts/fusion/vendor/attr/__init__.pyi new file mode 100644 index 0000000000..3503b073b4 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/__init__.pyi @@ -0,0 +1,475 @@ +import sys + +from typing import ( + Any, + Callable, + Dict, + Generic, + List, + Mapping, + Optional, + Sequence, + Tuple, + Type, + TypeVar, + Union, + overload, +) + +# `import X as X` is required to make these public +from . import converters as converters +from . 
import exceptions as exceptions +from . import filters as filters +from . import setters as setters +from . import validators as validators +from ._version_info import VersionInfo + + +__version__: str +__version_info__: VersionInfo +__title__: str +__description__: str +__url__: str +__uri__: str +__author__: str +__email__: str +__license__: str +__copyright__: str + +_T = TypeVar("_T") +_C = TypeVar("_C", bound=type) + +_EqOrderType = Union[bool, Callable[[Any], Any]] +_ValidatorType = Callable[[Any, Attribute[_T], _T], Any] +_ConverterType = Callable[[Any], Any] +_FilterType = Callable[[Attribute[_T], _T], bool] +_ReprType = Callable[[Any], str] +_ReprArgType = Union[bool, _ReprType] +_OnSetAttrType = Callable[[Any, Attribute[Any], Any], Any] +_OnSetAttrArgType = Union[ + _OnSetAttrType, List[_OnSetAttrType], setters._NoOpType +] +_FieldTransformer = Callable[[type, List[Attribute[Any]]], List[Attribute[Any]]] +# FIXME: in reality, if multiple validators are passed they must be in a list +# or tuple, but those are invariant and so would prevent subtypes of +# _ValidatorType from working when passed in a list or tuple. +_ValidatorArgType = Union[_ValidatorType[_T], Sequence[_ValidatorType[_T]]] + +# _make -- + +NOTHING: object + +# NOTE: Factory lies about its return type to make this possible: +# `x: List[int] # = Factory(list)` +# Work around mypy issue #4554 in the common case by using an overload. +if sys.version_info >= (3, 8): + from typing import Literal + + @overload + def Factory(factory: Callable[[], _T]) -> _T: ... + @overload + def Factory( + factory: Callable[[Any], _T], + takes_self: Literal[True], + ) -> _T: ... + @overload + def Factory( + factory: Callable[[], _T], + takes_self: Literal[False], + ) -> _T: ... +else: + @overload + def Factory(factory: Callable[[], _T]) -> _T: ... + @overload + def Factory( + factory: Union[Callable[[Any], _T], Callable[[], _T]], + takes_self: bool = ..., + ) -> _T: ... + +# Static type inference support via __dataclass_transform__ implemented as per: +# https://github.com/microsoft/pyright/blob/1.1.135/specs/dataclass_transforms.md +# This annotation must be applied to all overloads of "define" and "attrs" +# +# NOTE: This is a typing construct and does not exist at runtime. Extensions +# wrapping attrs decorators should declare a separate __dataclass_transform__ +# signature in the extension module using the specification linked above to +# provide pyright support. +def __dataclass_transform__( + *, + eq_default: bool = True, + order_default: bool = False, + kw_only_default: bool = False, + field_descriptors: Tuple[Union[type, Callable[..., Any]], ...] = (()), +) -> Callable[[_T], _T]: ... + +class Attribute(Generic[_T]): + name: str + default: Optional[_T] + validator: Optional[_ValidatorType[_T]] + repr: _ReprArgType + cmp: _EqOrderType + eq: _EqOrderType + order: _EqOrderType + hash: Optional[bool] + init: bool + converter: Optional[_ConverterType] + metadata: Dict[Any, Any] + type: Optional[Type[_T]] + kw_only: bool + on_setattr: _OnSetAttrType + + def evolve(self, **changes: Any) -> "Attribute[Any]": ... + +# NOTE: We had several choices for the annotation to use for type arg: +# 1) Type[_T] +# - Pros: Handles simple cases correctly +# - Cons: Might produce less informative errors in the case of conflicting +# TypeVars e.g. `attr.ib(default='bad', type=int)` +# 2) Callable[..., _T] +# - Pros: Better error messages than #1 for conflicting TypeVars +# - Cons: Terrible error messages for validator checks. +# e.g. 
attr.ib(type=int, validator=validate_str) +# -> error: Cannot infer function type argument +# 3) type (and do all of the work in the mypy plugin) +# - Pros: Simple here, and we could customize the plugin with our own errors. +# - Cons: Would need to write mypy plugin code to handle all the cases. +# We chose option #1. + +# `attr` lies about its return type to make the following possible: +# attr() -> Any +# attr(8) -> int +# attr(validator=) -> Whatever the callable expects. +# This makes this type of assignments possible: +# x: int = attr(8) +# +# This form catches explicit None or no default but with no other arguments +# returns Any. +@overload +def attrib( + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: None = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... + +# This form catches an explicit None or no default and infers the type from the +# other arguments. +@overload +def attrib( + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def attrib( + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: Optional[Type[_T]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. forward references (str), Any +@overload +def attrib( + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + type: object = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +def field( + *, + default: None = ..., + validator: None = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: None = ..., + factory: None = ..., + kw_only: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... + +# This form catches an explicit None or no default and infers the type from the +# other arguments. 
+@overload +def field( + *, + default: None = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form catches an explicit default argument. +@overload +def field( + *, + default: _T, + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> _T: ... + +# This form covers type=non-Type: e.g. forward references (str), Any +@overload +def field( + *, + default: Optional[_T] = ..., + validator: Optional[_ValidatorArgType[_T]] = ..., + repr: _ReprArgType = ..., + hash: Optional[bool] = ..., + init: bool = ..., + metadata: Optional[Mapping[Any, Any]] = ..., + converter: Optional[_ConverterType] = ..., + factory: Optional[Callable[[], _T]] = ..., + kw_only: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., +) -> Any: ... +@overload +@__dataclass_transform__(order_default=True, field_descriptors=(attrib, field)) +def attrs( + maybe_cls: _C, + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> _C: ... +@overload +@__dataclass_transform__(order_default=True, field_descriptors=(attrib, field)) +def attrs( + maybe_cls: None = ..., + these: Optional[Dict[str, Any]] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + auto_detect: bool = ..., + collect_by_mro: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> Callable[[_C], _C]: ... 
+@overload +@__dataclass_transform__(field_descriptors=(attrib, field)) +def define( + maybe_cls: _C, + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> _C: ... +@overload +@__dataclass_transform__(field_descriptors=(attrib, field)) +def define( + maybe_cls: None = ..., + *, + these: Optional[Dict[str, Any]] = ..., + repr: bool = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[bool] = ..., + order: Optional[bool] = ..., + auto_detect: bool = ..., + getstate_setstate: Optional[bool] = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> Callable[[_C], _C]: ... + +mutable = define +frozen = define # they differ only in their defaults + +# TODO: add support for returning NamedTuple from the mypy plugin +class _Fields(Tuple[Attribute[Any], ...]): + def __getattr__(self, name: str) -> Attribute[Any]: ... + +def fields(cls: type) -> _Fields: ... +def fields_dict(cls: type) -> Dict[str, Attribute[Any]]: ... +def validate(inst: Any) -> None: ... +def resolve_types( + cls: _C, + globalns: Optional[Dict[str, Any]] = ..., + localns: Optional[Dict[str, Any]] = ..., + attribs: Optional[List[Attribute[Any]]] = ..., +) -> _C: ... + +# TODO: add support for returning a proper attrs class from the mypy plugin +# we use Any instead of _CountingAttr so that e.g. `make_class('Foo', +# [attr.ib()])` is valid +def make_class( + name: str, + attrs: Union[List[str], Tuple[str, ...], Dict[str, Any]], + bases: Tuple[type, ...] = ..., + repr_ns: Optional[str] = ..., + repr: bool = ..., + cmp: Optional[_EqOrderType] = ..., + hash: Optional[bool] = ..., + init: bool = ..., + slots: bool = ..., + frozen: bool = ..., + weakref_slot: bool = ..., + str: bool = ..., + auto_attribs: bool = ..., + kw_only: bool = ..., + cache_hash: bool = ..., + auto_exc: bool = ..., + eq: Optional[_EqOrderType] = ..., + order: Optional[_EqOrderType] = ..., + collect_by_mro: bool = ..., + on_setattr: Optional[_OnSetAttrArgType] = ..., + field_transformer: Optional[_FieldTransformer] = ..., +) -> type: ... + +# _funcs -- + +# TODO: add support for returning TypedDict from the mypy plugin +# FIXME: asdict/astuple do not honor their factory args. Waiting on one of +# these: +# https://github.com/python/mypy/issues/4236 +# https://github.com/python/typing/issues/253 +def asdict( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + dict_factory: Type[Mapping[Any, Any]] = ..., + retain_collection_types: bool = ..., + value_serializer: Optional[Callable[[type, Attribute[Any], Any], Any]] = ..., +) -> Dict[str, Any]: ... 
+ +# TODO: add support for returning NamedTuple from the mypy plugin +def astuple( + inst: Any, + recurse: bool = ..., + filter: Optional[_FilterType[Any]] = ..., + tuple_factory: Type[Sequence[Any]] = ..., + retain_collection_types: bool = ..., +) -> Tuple[Any, ...]: ... +def has(cls: type) -> bool: ... +def assoc(inst: _T, **changes: Any) -> _T: ... +def evolve(inst: _T, **changes: Any) -> _T: ... + +# _config -- + +def set_run_validators(run: bool) -> None: ... +def get_run_validators() -> bool: ... + +# aliases -- + +s = attributes = attrs +ib = attr = attrib +dataclass = attrs # Technically, partial(attrs, auto_attribs=True) ;) diff --git a/openpype/hosts/fusion/vendor/attr/_cmp.py b/openpype/hosts/fusion/vendor/attr/_cmp.py new file mode 100644 index 0000000000..b747b603f1 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_cmp.py @@ -0,0 +1,152 @@ +from __future__ import absolute_import, division, print_function + +import functools + +from ._compat import new_class +from ._make import _make_ne + + +_operation_names = {"eq": "==", "lt": "<", "le": "<=", "gt": ">", "ge": ">="} + + +def cmp_using( + eq=None, + lt=None, + le=None, + gt=None, + ge=None, + require_same_type=True, + class_name="Comparable", +): + """ + Create a class that can be passed into `attr.ib`'s ``eq``, ``order``, and + ``cmp`` arguments to customize field comparison. + + The resulting class will have a full set of ordering methods if + at least one of ``{lt, le, gt, ge}`` and ``eq`` are provided. + + :param Optional[callable] eq: `callable` used to evaluate equality + of two objects. + :param Optional[callable] lt: `callable` used to evaluate whether + one object is less than another object. + :param Optional[callable] le: `callable` used to evaluate whether + one object is less than or equal to another object. + :param Optional[callable] gt: `callable` used to evaluate whether + one object is greater than another object. + :param Optional[callable] ge: `callable` used to evaluate whether + one object is greater than or equal to another object. + + :param bool require_same_type: When `True`, equality and ordering methods + will return `NotImplemented` if objects are not of the same type. + + :param Optional[str] class_name: Name of class. Defaults to 'Comparable'. + + See `comparison` for more details. + + .. versionadded:: 21.1.0 + """ + + body = { + "__slots__": ["value"], + "__init__": _make_init(), + "_requirements": [], + "_is_comparable_to": _is_comparable_to, + } + + # Add operations. + num_order_functions = 0 + has_eq_function = False + + if eq is not None: + has_eq_function = True + body["__eq__"] = _make_operator("eq", eq) + body["__ne__"] = _make_ne() + + if lt is not None: + num_order_functions += 1 + body["__lt__"] = _make_operator("lt", lt) + + if le is not None: + num_order_functions += 1 + body["__le__"] = _make_operator("le", le) + + if gt is not None: + num_order_functions += 1 + body["__gt__"] = _make_operator("gt", gt) + + if ge is not None: + num_order_functions += 1 + body["__ge__"] = _make_operator("ge", ge) + + type_ = new_class(class_name, (object,), {}, lambda ns: ns.update(body)) + + # Add same type requirement. + if require_same_type: + type_._requirements.append(_check_same_type) + + # Add total ordering if at least one operation was defined. + if 0 < num_order_functions < 4: + if not has_eq_function: + # functools.total_ordering requires __eq__ to be defined, + # so raise early error here to keep a nice stack. 
+            raise ValueError(
+                "eq must be defined in order to complete ordering from "
+                "lt, le, gt, ge."
+            )
+        type_ = functools.total_ordering(type_)
+
+    return type_
+
+
+def _make_init():
+    """
+    Create __init__ method.
+    """
+
+    def __init__(self, value):
+        """
+        Initialize object with *value*.
+        """
+        self.value = value
+
+    return __init__
+
+
+def _make_operator(name, func):
+    """
+    Create operator method.
+    """
+
+    def method(self, other):
+        if not self._is_comparable_to(other):
+            return NotImplemented
+
+        result = func(self.value, other.value)
+        if result is NotImplemented:
+            return NotImplemented
+
+        return result
+
+    method.__name__ = "__%s__" % (name,)
+    method.__doc__ = "Return a %s b. Computed by attrs." % (
+        _operation_names[name],
+    )
+
+    return method
+
+
+def _is_comparable_to(self, other):
+    """
+    Check whether `other` is comparable to `self`.
+    """
+    for func in self._requirements:
+        if not func(self, other):
+            return False
+    return True
+
+
+def _check_same_type(self, other):
+    """
+    Return True if *self* and *other* are of the same type, False otherwise.
+    """
+    return other.value.__class__ is self.value.__class__
diff --git a/openpype/hosts/fusion/vendor/attr/_cmp.pyi b/openpype/hosts/fusion/vendor/attr/_cmp.pyi
new file mode 100644
index 0000000000..7093550f0f
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/_cmp.pyi
@@ -0,0 +1,14 @@
+from typing import Optional, Type
+
+from . import _CompareWithType
+
+
+def cmp_using(
+    eq: Optional[_CompareWithType],
+    lt: Optional[_CompareWithType],
+    le: Optional[_CompareWithType],
+    gt: Optional[_CompareWithType],
+    ge: Optional[_CompareWithType],
+    require_same_type: bool,
+    class_name: str,
+) -> Type: ...
diff --git a/openpype/hosts/fusion/vendor/attr/_compat.py b/openpype/hosts/fusion/vendor/attr/_compat.py
new file mode 100644
index 0000000000..6939f338da
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/_compat.py
@@ -0,0 +1,242 @@
+from __future__ import absolute_import, division, print_function
+
+import platform
+import sys
+import types
+import warnings
+
+
+PY2 = sys.version_info[0] == 2
+PYPY = platform.python_implementation() == "PyPy"
+
+
+if PYPY or sys.version_info[:2] >= (3, 6):
+    ordered_dict = dict
+else:
+    from collections import OrderedDict
+
+    ordered_dict = OrderedDict
+
+
+if PY2:
+    from collections import Mapping, Sequence
+
+    from UserDict import IterableUserDict
+
+    # We 'bundle' isclass instead of using inspect as importing inspect is
+    # fairly expensive (order of 10-15 ms for a modern machine in 2016)
+    def isclass(klass):
+        return isinstance(klass, (type, types.ClassType))
+
+    def new_class(name, bases, kwds, exec_body):
+        """
+        A minimal stub of types.new_class that we need for make_class.
+        """
+        ns = {}
+        exec_body(ns)
+
+        return type(name, bases, ns)
+
+    # TYPE is used in exceptions, repr(int) is different on Python 2 and 3.
+    TYPE = "type"
+
+    def iteritems(d):
+        return d.iteritems()
+
+    # Python 2 is bereft of a read-only dict proxy, so we make one!
+    class ReadOnlyDict(IterableUserDict):
+        """
+        Best-effort read-only dict wrapper.
+        """
+
+        def __setitem__(self, key, val):
+            # We gently pretend we're a Python 3 mappingproxy.
+            raise TypeError(
+                "'mappingproxy' object does not support item assignment"
+            )
+
+        def update(self, _):
+            # We gently pretend we're a Python 3 mappingproxy.
+            raise AttributeError(
+                "'mappingproxy' object has no attribute 'update'"
+            )
+
+        def __delitem__(self, _):
+            # We gently pretend we're a Python 3 mappingproxy. 
+ raise TypeError( + "'mappingproxy' object does not support item deletion" + ) + + def clear(self): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'clear'" + ) + + def pop(self, key, default=None): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'pop'" + ) + + def popitem(self): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'popitem'" + ) + + def setdefault(self, key, default=None): + # We gently pretend we're a Python 3 mappingproxy. + raise AttributeError( + "'mappingproxy' object has no attribute 'setdefault'" + ) + + def __repr__(self): + # Override to be identical to the Python 3 version. + return "mappingproxy(" + repr(self.data) + ")" + + def metadata_proxy(d): + res = ReadOnlyDict() + res.data.update(d) # We blocked update, so we have to do it like this. + return res + + def just_warn(*args, **kw): # pragma: no cover + """ + We only warn on Python 3 because we are not aware of any concrete + consequences of not setting the cell on Python 2. + """ + + +else: # Python 3 and later. + from collections.abc import Mapping, Sequence # noqa + + def just_warn(*args, **kw): + """ + We only warn on Python 3 because we are not aware of any concrete + consequences of not setting the cell on Python 2. + """ + warnings.warn( + "Running interpreter doesn't sufficiently support code object " + "introspection. Some features like bare super() or accessing " + "__class__ will not work with slotted classes.", + RuntimeWarning, + stacklevel=2, + ) + + def isclass(klass): + return isinstance(klass, type) + + TYPE = "class" + + def iteritems(d): + return d.items() + + new_class = types.new_class + + def metadata_proxy(d): + return types.MappingProxyType(dict(d)) + + +def make_set_closure_cell(): + """Return a function of two arguments (cell, value) which sets + the value stored in the closure cell `cell` to `value`. + """ + # pypy makes this easy. (It also supports the logic below, but + # why not do the easy/fast thing?) + if PYPY: + + def set_closure_cell(cell, value): + cell.__setstate__((value,)) + + return set_closure_cell + + # Otherwise gotta do it the hard way. + + # Create a function that will set its first cellvar to `value`. + def set_first_cellvar_to(value): + x = value + return + + # This function will be eliminated as dead code, but + # not before its reference to `x` forces `x` to be + # represented as a closure cell rather than a local. + def force_x_to_be_a_cell(): # pragma: no cover + return x + + try: + # Extract the code object and make sure our assumptions about + # the closure behavior are correct. + if PY2: + co = set_first_cellvar_to.func_code + else: + co = set_first_cellvar_to.__code__ + if co.co_cellvars != ("x",) or co.co_freevars != (): + raise AssertionError # pragma: no cover + + # Convert this code object to a code object that sets the + # function's first _freevar_ (not cellvar) to the argument. + if sys.version_info >= (3, 8): + # CPython 3.8+ has an incompatible CodeType signature + # (added a posonlyargcount argument) but also added + # CodeType.replace() to do this without counting parameters. 
+ set_first_freevar_code = co.replace( + co_cellvars=co.co_freevars, co_freevars=co.co_cellvars + ) + else: + args = [co.co_argcount] + if not PY2: + args.append(co.co_kwonlyargcount) + args.extend( + [ + co.co_nlocals, + co.co_stacksize, + co.co_flags, + co.co_code, + co.co_consts, + co.co_names, + co.co_varnames, + co.co_filename, + co.co_name, + co.co_firstlineno, + co.co_lnotab, + # These two arguments are reversed: + co.co_cellvars, + co.co_freevars, + ] + ) + set_first_freevar_code = types.CodeType(*args) + + def set_closure_cell(cell, value): + # Create a function using the set_first_freevar_code, + # whose first closure cell is `cell`. Calling it will + # change the value of that cell. + setter = types.FunctionType( + set_first_freevar_code, {}, "setter", (), (cell,) + ) + # And call it to set the cell. + setter(value) + + # Make sure it works on this interpreter: + def make_func_with_cell(): + x = None + + def func(): + return x # pragma: no cover + + return func + + if PY2: + cell = make_func_with_cell().func_closure[0] + else: + cell = make_func_with_cell().__closure__[0] + set_closure_cell(cell, 100) + if cell.cell_contents != 100: + raise AssertionError # pragma: no cover + + except Exception: + return just_warn + else: + return set_closure_cell + + +set_closure_cell = make_set_closure_cell() diff --git a/openpype/hosts/fusion/vendor/attr/_config.py b/openpype/hosts/fusion/vendor/attr/_config.py new file mode 100644 index 0000000000..8ec920962d --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_config.py @@ -0,0 +1,23 @@ +from __future__ import absolute_import, division, print_function + + +__all__ = ["set_run_validators", "get_run_validators"] + +_run_validators = True + + +def set_run_validators(run): + """ + Set whether or not validators are run. By default, they are run. + """ + if not isinstance(run, bool): + raise TypeError("'run' must be bool.") + global _run_validators + _run_validators = run + + +def get_run_validators(): + """ + Return whether or not validators are run. + """ + return _run_validators diff --git a/openpype/hosts/fusion/vendor/attr/_funcs.py b/openpype/hosts/fusion/vendor/attr/_funcs.py new file mode 100644 index 0000000000..fda508c5c4 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_funcs.py @@ -0,0 +1,395 @@ +from __future__ import absolute_import, division, print_function + +import copy + +from ._compat import iteritems +from ._make import NOTHING, _obj_setattr, fields +from .exceptions import AttrsAttributeNotFoundError + + +def asdict( + inst, + recurse=True, + filter=None, + dict_factory=dict, + retain_collection_types=False, + value_serializer=None, +): + """ + Return the ``attrs`` attribute values of *inst* as a dict. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attr.Attribute` as the first argument and the + value as the second argument. + :param callable dict_factory: A callable to produce dictionaries from. For + example, to produce ordered dictionaries instead of normal Python + dictionaries, pass in ``collections.OrderedDict``. + :param bool retain_collection_types: Do not convert to ``list`` when + encountering an attribute whose type is ``tuple`` or ``set``. Only + meaningful if ``recurse`` is ``True``. 
+ :param Optional[callable] value_serializer: A hook that is called for every + attribute or dict key/value. It receives the current instance, field + and value and must return the (updated) value. The hook is run *after* + the optional *filter* has been applied. + + :rtype: return type of *dict_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 16.0.0 *dict_factory* + .. versionadded:: 16.1.0 *retain_collection_types* + .. versionadded:: 20.3.0 *value_serializer* + """ + attrs = fields(inst.__class__) + rv = dict_factory() + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + + if value_serializer is not None: + v = value_serializer(inst, a, v) + + if recurse is True: + if has(v.__class__): + rv[a.name] = asdict( + v, + True, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain_collection_types is True else list + rv[a.name] = cf( + [ + _asdict_anything( + i, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + for i in v + ] + ) + elif isinstance(v, dict): + df = dict_factory + rv[a.name] = df( + ( + _asdict_anything( + kk, + filter, + df, + retain_collection_types, + value_serializer, + ), + _asdict_anything( + vv, + filter, + df, + retain_collection_types, + value_serializer, + ), + ) + for kk, vv in iteritems(v) + ) + else: + rv[a.name] = v + else: + rv[a.name] = v + return rv + + +def _asdict_anything( + val, + filter, + dict_factory, + retain_collection_types, + value_serializer, +): + """ + ``asdict`` only works on attrs instances, this works on anything. + """ + if getattr(val.__class__, "__attrs_attrs__", None) is not None: + # Attrs class. + rv = asdict( + val, + True, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + elif isinstance(val, (tuple, list, set, frozenset)): + cf = val.__class__ if retain_collection_types is True else list + rv = cf( + [ + _asdict_anything( + i, + filter, + dict_factory, + retain_collection_types, + value_serializer, + ) + for i in val + ] + ) + elif isinstance(val, dict): + df = dict_factory + rv = df( + ( + _asdict_anything( + kk, filter, df, retain_collection_types, value_serializer + ), + _asdict_anything( + vv, filter, df, retain_collection_types, value_serializer + ), + ) + for kk, vv in iteritems(val) + ) + else: + rv = val + if value_serializer is not None: + rv = value_serializer(None, None, rv) + + return rv + + +def astuple( + inst, + recurse=True, + filter=None, + tuple_factory=tuple, + retain_collection_types=False, +): + """ + Return the ``attrs`` attribute values of *inst* as a tuple. + + Optionally recurse into other ``attrs``-decorated classes. + + :param inst: Instance of an ``attrs``-decorated class. + :param bool recurse: Recurse into classes that are also + ``attrs``-decorated. + :param callable filter: A callable whose return code determines whether an + attribute or element is included (``True``) or dropped (``False``). Is + called with the `attr.Attribute` as the first argument and the + value as the second argument. + :param callable tuple_factory: A callable to produce tuples from. For + example, to produce lists instead of tuples. + :param bool retain_collection_types: Do not convert to ``list`` + or ``dict`` when encountering an attribute which type is + ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is + ``True``. 
+ + :rtype: return type of *tuple_factory* + + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 16.2.0 + """ + attrs = fields(inst.__class__) + rv = [] + retain = retain_collection_types # Very long. :/ + for a in attrs: + v = getattr(inst, a.name) + if filter is not None and not filter(a, v): + continue + if recurse is True: + if has(v.__class__): + rv.append( + astuple( + v, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + ) + elif isinstance(v, (tuple, list, set, frozenset)): + cf = v.__class__ if retain is True else list + rv.append( + cf( + [ + astuple( + j, + recurse=True, + filter=filter, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(j.__class__) + else j + for j in v + ] + ) + ) + elif isinstance(v, dict): + df = v.__class__ if retain is True else dict + rv.append( + df( + ( + astuple( + kk, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(kk.__class__) + else kk, + astuple( + vv, + tuple_factory=tuple_factory, + retain_collection_types=retain, + ) + if has(vv.__class__) + else vv, + ) + for kk, vv in iteritems(v) + ) + ) + else: + rv.append(v) + else: + rv.append(v) + + return rv if tuple_factory is list else tuple_factory(rv) + + +def has(cls): + """ + Check whether *cls* is a class with ``attrs`` attributes. + + :param type cls: Class to introspect. + :raise TypeError: If *cls* is not a class. + + :rtype: bool + """ + return getattr(cls, "__attrs_attrs__", None) is not None + + +def assoc(inst, **changes): + """ + Copy *inst* and apply *changes*. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise attr.exceptions.AttrsAttributeNotFoundError: If *attr_name* couldn't + be found on *cls*. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. deprecated:: 17.1.0 + Use `evolve` instead. + """ + import warnings + + warnings.warn( + "assoc is deprecated and will be removed after 2018/01.", + DeprecationWarning, + stacklevel=2, + ) + new = copy.copy(inst) + attrs = fields(inst.__class__) + for k, v in iteritems(changes): + a = getattr(attrs, k, NOTHING) + if a is NOTHING: + raise AttrsAttributeNotFoundError( + "{k} is not an attrs attribute on {cl}.".format( + k=k, cl=new.__class__ + ) + ) + _obj_setattr(new, k, v) + return new + + +def evolve(inst, **changes): + """ + Create a new instance, based on *inst* with *changes* applied. + + :param inst: Instance of a class with ``attrs`` attributes. + :param changes: Keyword changes in the new copy. + + :return: A copy of inst with *changes* incorporated. + + :raise TypeError: If *attr_name* couldn't be found in the class + ``__init__``. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + .. versionadded:: 17.1.0 + """ + cls = inst.__class__ + attrs = fields(cls) + for a in attrs: + if not a.init: + continue + attr_name = a.name # To deal with private attributes. + init_name = attr_name if attr_name[0] != "_" else attr_name[1:] + if init_name not in changes: + changes[init_name] = getattr(inst, attr_name) + + return cls(**changes) + + +def resolve_types(cls, globalns=None, localns=None, attribs=None): + """ + Resolve any strings and forward annotations in type annotations. + + This is only required if you need concrete types in `Attribute`'s *type* + field. 
In other words, you don't need to resolve your types if you only + use them for static type checking. + + With no arguments, names will be looked up in the module in which the class + was created. If this is not what you want, e.g. if the name only exists + inside a method, you may pass *globalns* or *localns* to specify other + dictionaries in which to look up these names. See the docs of + `typing.get_type_hints` for more details. + + :param type cls: Class to resolve. + :param Optional[dict] globalns: Dictionary containing global variables. + :param Optional[dict] localns: Dictionary containing local variables. + :param Optional[list] attribs: List of attribs for the given class. + This is necessary when calling from inside a ``field_transformer`` + since *cls* is not an ``attrs`` class yet. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class and you didn't pass any attribs. + :raise NameError: If types cannot be resolved because of missing variables. + + :returns: *cls* so you can use this function also as a class decorator. + Please note that you have to apply it **after** `attr.s`. That means + the decorator has to come in the line **before** `attr.s`. + + .. versionadded:: 20.1.0 + .. versionadded:: 21.1.0 *attribs* + + """ + try: + # Since calling get_type_hints is expensive we cache whether we've + # done it already. + cls.__attrs_types_resolved__ + except AttributeError: + import typing + + hints = typing.get_type_hints(cls, globalns=globalns, localns=localns) + for field in fields(cls) if attribs is None else attribs: + if field.name in hints: + # Since fields have been frozen we must work around it. + _obj_setattr(field, "type", hints[field.name]) + cls.__attrs_types_resolved__ = True + + # Return the class so you can use it as a decorator too. + return cls diff --git a/openpype/hosts/fusion/vendor/attr/_make.py b/openpype/hosts/fusion/vendor/attr/_make.py new file mode 100644 index 0000000000..a1912b1233 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_make.py @@ -0,0 +1,3052 @@ +from __future__ import absolute_import, division, print_function + +import copy +import inspect +import linecache +import sys +import threading +import uuid +import warnings + +from operator import itemgetter + +from . import _config, setters +from ._compat import ( + PY2, + PYPY, + isclass, + iteritems, + metadata_proxy, + new_class, + ordered_dict, + set_closure_cell, +) +from .exceptions import ( + DefaultAlreadySetError, + FrozenInstanceError, + NotAnAttrsClassError, + PythonTooOldError, + UnannotatedAttributeError, +) + + +if not PY2: + import typing + + +# This is used at least twice, so cache it here. +_obj_setattr = object.__setattr__ +_init_converter_pat = "__attr_converter_%s" +_init_factory_pat = "__attr_factory_{}" +_tuple_property_pat = ( + " {attr_name} = _attrs_property(_attrs_itemgetter({index}))" +) +_classvar_prefixes = ( + "typing.ClassVar", + "t.ClassVar", + "ClassVar", + "typing_extensions.ClassVar", +) +# we don't use a double-underscore prefix because that triggers +# name mangling when trying to create a slot for the field +# (when slots=True) +_hash_cache_field = "_attrs_cached_hash" + +_empty_metadata_singleton = metadata_proxy({}) + +# Unique object for unequivocal getattr() defaults. +_sentinel = object() + + +class _Nothing(object): + """ + Sentinel class to indicate the lack of a value when ``None`` is ambiguous. + + ``_Nothing`` is a singleton. There is only ever one of it. + + .. 
versionchanged:: 21.1.0 ``bool(NOTHING)`` is now False. + """ + + _singleton = None + + def __new__(cls): + if _Nothing._singleton is None: + _Nothing._singleton = super(_Nothing, cls).__new__(cls) + return _Nothing._singleton + + def __repr__(self): + return "NOTHING" + + def __bool__(self): + return False + + def __len__(self): + return 0 # __bool__ for Python 2 + + +NOTHING = _Nothing() +""" +Sentinel to indicate the lack of a value when ``None`` is ambiguous. +""" + + +class _CacheHashWrapper(int): + """ + An integer subclass that pickles / copies as None + + This is used for non-slots classes with ``cache_hash=True``, to avoid + serializing a potentially (even likely) invalid hash value. Since ``None`` + is the default value for uncalculated hashes, whenever this is copied, + the copy's value for the hash should automatically reset. + + See GH #613 for more details. + """ + + if PY2: + # For some reason `type(None)` isn't callable in Python 2, but we don't + # actually need a constructor for None objects, we just need any + # available function that returns None. + def __reduce__(self, _none_constructor=getattr, _args=(0, "", None)): + return _none_constructor, _args + + else: + + def __reduce__(self, _none_constructor=type(None), _args=()): + return _none_constructor, _args + + +def attrib( + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=None, + init=True, + metadata=None, + type=None, + converter=None, + factory=None, + kw_only=False, + eq=None, + order=None, + on_setattr=None, +): + """ + Create a new attribute on a class. + + .. warning:: + + Does *not* do anything unless the class is also decorated with + `attr.s`! + + :param default: A value that is used if an ``attrs``-generated ``__init__`` + is used and no value is passed while instantiating or the attribute is + excluded using ``init=False``. + + If the value is an instance of `Factory`, its callable will be + used to construct a new value (useful for mutable data types like lists + or dicts). + + If a default is not set (or set manually to `attr.NOTHING`), a value + *must* be supplied when instantiating; otherwise a `TypeError` + will be raised. + + The default can also be set using decorator notation as shown below. + + :type default: Any value + + :param callable factory: Syntactic sugar for + ``default=attr.Factory(factory)``. + + :param validator: `callable` that is called by ``attrs``-generated + ``__init__`` methods after the instance has been initialized. They + receive the initialized instance, the `Attribute`, and the + passed value. + + The return value is *not* inspected so the validator has to throw an + exception itself. + + If a `list` is passed, its items are treated as validators and must + all pass. + + Validators can be globally disabled and re-enabled using + `get_run_validators`. + + The validator can also be set using decorator notation as shown below. + + :type validator: `callable` or a `list` of `callable`\\ s. + + :param repr: Include this attribute in the generated ``__repr__`` + method. If ``True``, include the attribute; if ``False``, omit it. By + default, the built-in ``repr()`` function is used. To override how the + attribute value is formatted, pass a ``callable`` that takes a single + value and returns a string. Note that the resulting string is used + as-is, i.e. it will be used directly *instead* of calling ``repr()`` + (the default). + :type repr: a `bool` or a `callable` to use a custom function. 
+
+    :param eq: If ``True`` (default), include this attribute in the
+        generated ``__eq__`` and ``__ne__`` methods that check two instances
+        for equality. To override how the attribute value is compared,
+        pass a ``callable`` that takes a single value and returns the value
+        to be compared.
+    :type eq: a `bool` or a `callable`.
+
+    :param order: If ``True`` (default), include this attribute in the
+        generated ``__lt__``, ``__le__``, ``__gt__`` and ``__ge__`` methods.
+        To override how the attribute value is ordered,
+        pass a ``callable`` that takes a single value and returns the value
+        to be ordered.
+    :type order: a `bool` or a `callable`.
+
+    :param cmp: Setting *cmp* is equivalent to setting *eq* and *order* to the
+        same value. Must not be mixed with *eq* or *order*.
+    :type cmp: a `bool` or a `callable`.
+
+    :param Optional[bool] hash: Include this attribute in the generated
+        ``__hash__`` method. If ``None`` (default), mirror *eq*'s value. This
+        is the correct behavior according to the Python spec. Setting this
+        value to anything other than ``None`` is *discouraged*.
+    :param bool init: Include this attribute in the generated ``__init__``
+        method. It is possible to set this to ``False`` and set a default
+        value. In that case this attribute is unconditionally initialized
+        with the specified default value or factory.
+    :param callable converter: `callable` that is called by
+        ``attrs``-generated ``__init__`` methods to convert attribute's value
+        to the desired format. It is given the passed-in value, and the
+        returned value will be used as the new value of the attribute. The
+        value is converted before being passed to the validator, if any.
+    :param metadata: An arbitrary mapping, to be used by third-party
+        components. See `extending_metadata`.
+    :param type: The type of the attribute. In Python 3.6 or greater, the
+        preferred method to specify the type is using a variable annotation
+        (see `PEP 526 <https://www.python.org/dev/peps/pep-0526/>`_).
+        This argument is provided for backward compatibility.
+        Regardless of the approach used, the type will be stored on
+        ``Attribute.type``.
+
+        Please note that ``attrs`` doesn't do anything with this metadata by
+        itself. You can use it as part of your own code or for
+        `static type checking <types>`.
+    :param kw_only: Make this attribute keyword-only (Python 3+)
+        in the generated ``__init__`` (if ``init`` is ``False``, this
+        parameter is ignored).
+    :param on_setattr: Allows overwriting the *on_setattr* setting from
+        `attr.s`. If left `None`, the *on_setattr* value from `attr.s` is used.
+        Set to `attr.setters.NO_OP` to run **no** `setattr` hooks for this
+        attribute -- regardless of the setting in `attr.s`.
+    :type on_setattr: `callable`, or a list of callables, or `None`, or
+        `attr.setters.NO_OP`
+
+    .. versionadded:: 15.2.0 *convert*
+    .. versionadded:: 16.3.0 *metadata*
+    .. versionchanged:: 17.1.0 *validator* can be a ``list`` now.
+    .. versionchanged:: 17.1.0
+        *hash* is ``None`` and therefore mirrors *eq* by default.
+    .. versionadded:: 17.3.0 *type*
+    .. deprecated:: 17.4.0 *convert*
+    .. versionadded:: 17.4.0 *converter* as a replacement for the deprecated
+        *convert* to achieve consistency with other noun-based arguments.
+    .. versionadded:: 18.1.0
+        ``factory=f`` is syntactic sugar for ``default=attr.Factory(f)``.
+    .. versionadded:: 18.2.0 *kw_only*
+    .. versionchanged:: 19.2.0 *convert* keyword argument removed.
+    .. versionchanged:: 19.2.0 *repr* also accepts a custom callable.
+    .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
+    .. 
versionadded:: 19.2.0 *eq* and *order* + .. versionadded:: 20.1.0 *on_setattr* + .. versionchanged:: 20.3.0 *kw_only* backported to Python 2 + .. versionchanged:: 21.1.0 + *eq*, *order*, and *cmp* also accept a custom callable + .. versionchanged:: 21.1.0 *cmp* undeprecated + """ + eq, eq_key, order, order_key = _determine_attrib_eq_order( + cmp, eq, order, True + ) + + if hash is not None and hash is not True and hash is not False: + raise TypeError( + "Invalid value for hash. Must be True, False, or None." + ) + + if factory is not None: + if default is not NOTHING: + raise ValueError( + "The `default` and `factory` arguments are mutually " + "exclusive." + ) + if not callable(factory): + raise ValueError("The `factory` argument must be a callable.") + default = Factory(factory) + + if metadata is None: + metadata = {} + + # Apply syntactic sugar by auto-wrapping. + if isinstance(on_setattr, (list, tuple)): + on_setattr = setters.pipe(*on_setattr) + + if validator and isinstance(validator, (list, tuple)): + validator = and_(*validator) + + if converter and isinstance(converter, (list, tuple)): + converter = pipe(*converter) + + return _CountingAttr( + default=default, + validator=validator, + repr=repr, + cmp=None, + hash=hash, + init=init, + converter=converter, + metadata=metadata, + type=type, + kw_only=kw_only, + eq=eq, + eq_key=eq_key, + order=order, + order_key=order_key, + on_setattr=on_setattr, + ) + + +def _compile_and_eval(script, globs, locs=None, filename=""): + """ + "Exec" the script with the given global (globs) and local (locs) variables. + """ + bytecode = compile(script, filename, "exec") + eval(bytecode, globs, locs) + + +def _make_method(name, script, filename, globs=None): + """ + Create the method with the script given and return the method object. + """ + locs = {} + if globs is None: + globs = {} + + _compile_and_eval(script, globs, locs, filename) + + # In order of debuggers like PDB being able to step through the code, + # we add a fake linecache entry. + linecache.cache[filename] = ( + len(script), + None, + script.splitlines(True), + filename, + ) + + return locs[name] + + +def _make_attr_tuple_class(cls_name, attr_names): + """ + Create a tuple subclass to hold `Attribute`s for an `attrs` class. + + The subclass is a bare tuple with properties for names. + + class MyClassAttributes(tuple): + __slots__ = () + x = property(itemgetter(0)) + """ + attr_class_name = "{}Attributes".format(cls_name) + attr_class_template = [ + "class {}(tuple):".format(attr_class_name), + " __slots__ = ()", + ] + if attr_names: + for i, attr_name in enumerate(attr_names): + attr_class_template.append( + _tuple_property_pat.format(index=i, attr_name=attr_name) + ) + else: + attr_class_template.append(" pass") + globs = {"_attrs_itemgetter": itemgetter, "_attrs_property": property} + _compile_and_eval("\n".join(attr_class_template), globs) + return globs[attr_class_name] + + +# Tuple class for extracted attributes from a class definition. +# `base_attrs` is a subset of `attrs`. +_Attributes = _make_attr_tuple_class( + "_Attributes", + [ + # all attributes to build dunder methods for + "attrs", + # attributes that have been inherited + "base_attrs", + # map inherited attributes to their originating classes + "base_attrs_map", + ], +) + + +def _is_class_var(annot): + """ + Check whether *annot* is a typing.ClassVar. 
+ + The string comparison hack is used to avoid evaluating all string + annotations which would put attrs-based classes at a performance + disadvantage compared to plain old classes. + """ + annot = str(annot) + + # Annotation can be quoted. + if annot.startswith(("'", '"')) and annot.endswith(("'", '"')): + annot = annot[1:-1] + + return annot.startswith(_classvar_prefixes) + + +def _has_own_attribute(cls, attrib_name): + """ + Check whether *cls* defines *attrib_name* (and doesn't just inherit it). + + Requires Python 3. + """ + attr = getattr(cls, attrib_name, _sentinel) + if attr is _sentinel: + return False + + for base_cls in cls.__mro__[1:]: + a = getattr(base_cls, attrib_name, None) + if attr is a: + return False + + return True + + +def _get_annotations(cls): + """ + Get annotations for *cls*. + """ + if _has_own_attribute(cls, "__annotations__"): + return cls.__annotations__ + + return {} + + +def _counter_getter(e): + """ + Key function for sorting to avoid re-creating a lambda for every class. + """ + return e[1].counter + + +def _collect_base_attrs(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. + for base_cls in reversed(cls.__mro__[1:-1]): + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.inherited or a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + # For each name, only keep the freshest definition i.e. the furthest at the + # back. base_attr_map is fine because it gets overwritten with every new + # instance. + filtered = [] + seen = set() + for a in reversed(base_attrs): + if a.name in seen: + continue + filtered.insert(0, a) + seen.add(a.name) + + return filtered, base_attr_map + + +def _collect_base_attrs_broken(cls, taken_attr_names): + """ + Collect attr.ibs from base classes of *cls*, except *taken_attr_names*. + + N.B. *taken_attr_names* will be mutated. + + Adhere to the old incorrect behavior. + + Notably it collects from the front and considers inherited attributes which + leads to the buggy behavior reported in #428. + """ + base_attrs = [] + base_attr_map = {} # A dictionary of base attrs to their classes. + + # Traverse the MRO and collect attributes. + for base_cls in cls.__mro__[1:-1]: + for a in getattr(base_cls, "__attrs_attrs__", []): + if a.name in taken_attr_names: + continue + + a = a.evolve(inherited=True) + taken_attr_names.add(a.name) + base_attrs.append(a) + base_attr_map[a.name] = base_cls + + return base_attrs, base_attr_map + + +def _transform_attrs( + cls, these, auto_attribs, kw_only, collect_by_mro, field_transformer +): + """ + Transform all `_CountingAttr`s on a class into `Attribute`s. + + If *these* is passed, use that and don't look for them on the class. + + *collect_by_mro* is True, collect them in the correct MRO order, otherwise + use the old -- incorrect -- order. See #428. + + Return an `_Attributes`. 
+ """ + cd = cls.__dict__ + anns = _get_annotations(cls) + + if these is not None: + ca_list = [(name, ca) for name, ca in iteritems(these)] + + if not isinstance(these, ordered_dict): + ca_list.sort(key=_counter_getter) + elif auto_attribs is True: + ca_names = { + name + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + } + ca_list = [] + annot_names = set() + for attr_name, type in anns.items(): + if _is_class_var(type): + continue + annot_names.add(attr_name) + a = cd.get(attr_name, NOTHING) + + if not isinstance(a, _CountingAttr): + if a is NOTHING: + a = attrib() + else: + a = attrib(default=a) + ca_list.append((attr_name, a)) + + unannotated = ca_names - annot_names + if len(unannotated) > 0: + raise UnannotatedAttributeError( + "The following `attr.ib`s lack a type annotation: " + + ", ".join( + sorted(unannotated, key=lambda n: cd.get(n).counter) + ) + + "." + ) + else: + ca_list = sorted( + ( + (name, attr) + for name, attr in cd.items() + if isinstance(attr, _CountingAttr) + ), + key=lambda e: e[1].counter, + ) + + own_attrs = [ + Attribute.from_counting_attr( + name=attr_name, ca=ca, type=anns.get(attr_name) + ) + for attr_name, ca in ca_list + ] + + if collect_by_mro: + base_attrs, base_attr_map = _collect_base_attrs( + cls, {a.name for a in own_attrs} + ) + else: + base_attrs, base_attr_map = _collect_base_attrs_broken( + cls, {a.name for a in own_attrs} + ) + + attr_names = [a.name for a in base_attrs + own_attrs] + + AttrsClass = _make_attr_tuple_class(cls.__name__, attr_names) + + if kw_only: + own_attrs = [a.evolve(kw_only=True) for a in own_attrs] + base_attrs = [a.evolve(kw_only=True) for a in base_attrs] + + attrs = AttrsClass(base_attrs + own_attrs) + + # Mandatory vs non-mandatory attr order only matters when they are part of + # the __init__ signature and when they aren't kw_only (which are moved to + # the end and can be mandatory or non-mandatory in any order, as they will + # be specified as keyword args anyway). Check the order of those attrs: + had_default = False + for a in (a for a in attrs if a.init is not False and a.kw_only is False): + if had_default is True and a.default is NOTHING: + raise ValueError( + "No mandatory attributes allowed after an attribute with a " + "default value or factory. Attribute in question: %r" % (a,) + ) + + if had_default is False and a.default is not NOTHING: + had_default = True + + if field_transformer is not None: + attrs = field_transformer(cls, attrs) + return _Attributes((attrs, base_attrs, base_attr_map)) + + +if PYPY: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__. + """ + if isinstance(self, BaseException) and name in ( + "__cause__", + "__context__", + ): + BaseException.__setattr__(self, name, value) + return + + raise FrozenInstanceError() + + +else: + + def _frozen_setattrs(self, name, value): + """ + Attached to frozen classes as __setattr__. + """ + raise FrozenInstanceError() + + +def _frozen_delattrs(self, name): + """ + Attached to frozen classes as __delattr__. + """ + raise FrozenInstanceError() + + +class _ClassBuilder(object): + """ + Iteratively build *one* class. 
+ """ + + __slots__ = ( + "_attr_names", + "_attrs", + "_base_attr_map", + "_base_names", + "_cache_hash", + "_cls", + "_cls_dict", + "_delete_attribs", + "_frozen", + "_has_pre_init", + "_has_post_init", + "_is_exc", + "_on_setattr", + "_slots", + "_weakref_slot", + "_has_own_setattr", + "_has_custom_setattr", + ) + + def __init__( + self, + cls, + these, + slots, + frozen, + weakref_slot, + getstate_setstate, + auto_attribs, + kw_only, + cache_hash, + is_exc, + collect_by_mro, + on_setattr, + has_custom_setattr, + field_transformer, + ): + attrs, base_attrs, base_map = _transform_attrs( + cls, + these, + auto_attribs, + kw_only, + collect_by_mro, + field_transformer, + ) + + self._cls = cls + self._cls_dict = dict(cls.__dict__) if slots else {} + self._attrs = attrs + self._base_names = set(a.name for a in base_attrs) + self._base_attr_map = base_map + self._attr_names = tuple(a.name for a in attrs) + self._slots = slots + self._frozen = frozen + self._weakref_slot = weakref_slot + self._cache_hash = cache_hash + self._has_pre_init = bool(getattr(cls, "__attrs_pre_init__", False)) + self._has_post_init = bool(getattr(cls, "__attrs_post_init__", False)) + self._delete_attribs = not bool(these) + self._is_exc = is_exc + self._on_setattr = on_setattr + + self._has_custom_setattr = has_custom_setattr + self._has_own_setattr = False + + self._cls_dict["__attrs_attrs__"] = self._attrs + + if frozen: + self._cls_dict["__setattr__"] = _frozen_setattrs + self._cls_dict["__delattr__"] = _frozen_delattrs + + self._has_own_setattr = True + + if getstate_setstate: + ( + self._cls_dict["__getstate__"], + self._cls_dict["__setstate__"], + ) = self._make_getstate_setstate() + + def __repr__(self): + return "<_ClassBuilder(cls={cls})>".format(cls=self._cls.__name__) + + def build_class(self): + """ + Finalize class based on the accumulated configuration. + + Builder cannot be used after calling this method. + """ + if self._slots is True: + return self._create_slots_class() + else: + return self._patch_original_class() + + def _patch_original_class(self): + """ + Apply accumulated methods and return the class. + """ + cls = self._cls + base_names = self._base_names + + # Clean class of attribute definitions (`attr.ib()`s). + if self._delete_attribs: + for name in self._attr_names: + if ( + name not in base_names + and getattr(cls, name, _sentinel) is not _sentinel + ): + try: + delattr(cls, name) + except AttributeError: + # This can happen if a base class defines a class + # variable and we want to set an attribute with the + # same name by using only a type annotation. + pass + + # Attach our dunder methods. + for name, value in self._cls_dict.items(): + setattr(cls, name, value) + + # If we've inherited an attrs __setattr__ and don't write our own, + # reset it to object's. + if not self._has_own_setattr and getattr( + cls, "__attrs_own_setattr__", False + ): + cls.__attrs_own_setattr__ = False + + if not self._has_custom_setattr: + cls.__setattr__ = object.__setattr__ + + return cls + + def _create_slots_class(self): + """ + Build and return a new class with a `__slots__` attribute. + """ + cd = { + k: v + for k, v in iteritems(self._cls_dict) + if k not in tuple(self._attr_names) + ("__dict__", "__weakref__") + } + + # If our class doesn't have its own implementation of __setattr__ + # (either from the user or by us), check the bases, if one of them has + # an attrs-made __setattr__, that needs to be reset. We don't walk the + # MRO because we only care about our immediate base classes. 
+ # XXX: This can be confused by subclassing a slotted attrs class with + # XXX: a non-attrs class and subclass the resulting class with an attrs + # XXX: class. See `test_slotted_confused` for details. For now that's + # XXX: OK with us. + if not self._has_own_setattr: + cd["__attrs_own_setattr__"] = False + + if not self._has_custom_setattr: + for base_cls in self._cls.__bases__: + if base_cls.__dict__.get("__attrs_own_setattr__", False): + cd["__setattr__"] = object.__setattr__ + break + + # Traverse the MRO to collect existing slots + # and check for an existing __weakref__. + existing_slots = dict() + weakref_inherited = False + for base_cls in self._cls.__mro__[1:-1]: + if base_cls.__dict__.get("__weakref__", None) is not None: + weakref_inherited = True + existing_slots.update( + { + name: getattr(base_cls, name) + for name in getattr(base_cls, "__slots__", []) + } + ) + + base_names = set(self._base_names) + + names = self._attr_names + if ( + self._weakref_slot + and "__weakref__" not in getattr(self._cls, "__slots__", ()) + and "__weakref__" not in names + and not weakref_inherited + ): + names += ("__weakref__",) + + # We only add the names of attributes that aren't inherited. + # Setting __slots__ to inherited attributes wastes memory. + slot_names = [name for name in names if name not in base_names] + # There are slots for attributes from current class + # that are defined in parent classes. + # As their descriptors may be overriden by a child class, + # we collect them here and update the class dict + reused_slots = { + slot: slot_descriptor + for slot, slot_descriptor in iteritems(existing_slots) + if slot in slot_names + } + slot_names = [name for name in slot_names if name not in reused_slots] + cd.update(reused_slots) + if self._cache_hash: + slot_names.append(_hash_cache_field) + cd["__slots__"] = tuple(slot_names) + + qualname = getattr(self._cls, "__qualname__", None) + if qualname is not None: + cd["__qualname__"] = qualname + + # Create new class based on old class and our methods. + cls = type(self._cls)(self._cls.__name__, self._cls.__bases__, cd) + + # The following is a fix for + # https://github.com/python-attrs/attrs/issues/102. On Python 3, + # if a method mentions `__class__` or uses the no-arg super(), the + # compiler will bake a reference to the class in the method itself + # as `method.__closure__`. Since we replace the class with a + # clone, we rewrite these references so it keeps working. + for item in cls.__dict__.values(): + if isinstance(item, (classmethod, staticmethod)): + # Class- and staticmethods hide their functions inside. + # These might need to be rewritten as well. + closure_cells = getattr(item.__func__, "__closure__", None) + elif isinstance(item, property): + # Workaround for property `super()` shortcut (PY3-only). + # There is no universal way for other descriptors. + closure_cells = getattr(item.fget, "__closure__", None) + else: + closure_cells = getattr(item, "__closure__", None) + + if not closure_cells: # Catch None or the empty list. + continue + for cell in closure_cells: + try: + match = cell.cell_contents is self._cls + except ValueError: # ValueError: Cell is empty + pass + else: + if match: + set_closure_cell(cell, cls) + + return cls + + def add_repr(self, ns): + self._cls_dict["__repr__"] = self._add_method_dunders( + _make_repr(self._attrs, ns=ns) + ) + return self + + def add_str(self): + repr = self._cls_dict.get("__repr__") + if repr is None: + raise ValueError( + "__str__ can only be generated if a __repr__ exists." 
+ ) + + def __str__(self): + return self.__repr__() + + self._cls_dict["__str__"] = self._add_method_dunders(__str__) + return self + + def _make_getstate_setstate(self): + """ + Create custom __setstate__ and __getstate__ methods. + """ + # __weakref__ is not writable. + state_attr_names = tuple( + an for an in self._attr_names if an != "__weakref__" + ) + + def slots_getstate(self): + """ + Automatically created by attrs. + """ + return tuple(getattr(self, name) for name in state_attr_names) + + hash_caching_enabled = self._cache_hash + + def slots_setstate(self, state): + """ + Automatically created by attrs. + """ + __bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in zip(state_attr_names, state): + __bound_setattr(name, value) + + # The hash code cache is not included when the object is + # serialized, but it still needs to be initialized to None to + # indicate that the first call to __hash__ should be a cache + # miss. + if hash_caching_enabled: + __bound_setattr(_hash_cache_field, None) + + return slots_getstate, slots_setstate + + def make_unhashable(self): + self._cls_dict["__hash__"] = None + return self + + def add_hash(self): + self._cls_dict["__hash__"] = self._add_method_dunders( + _make_hash( + self._cls, + self._attrs, + frozen=self._frozen, + cache_hash=self._cache_hash, + ) + ) + + return self + + def add_init(self): + self._cls_dict["__init__"] = self._add_method_dunders( + _make_init( + self._cls, + self._attrs, + self._has_pre_init, + self._has_post_init, + self._frozen, + self._slots, + self._cache_hash, + self._base_attr_map, + self._is_exc, + self._on_setattr is not None + and self._on_setattr is not setters.NO_OP, + attrs_init=False, + ) + ) + + return self + + def add_attrs_init(self): + self._cls_dict["__attrs_init__"] = self._add_method_dunders( + _make_init( + self._cls, + self._attrs, + self._has_pre_init, + self._has_post_init, + self._frozen, + self._slots, + self._cache_hash, + self._base_attr_map, + self._is_exc, + self._on_setattr is not None + and self._on_setattr is not setters.NO_OP, + attrs_init=True, + ) + ) + + return self + + def add_eq(self): + cd = self._cls_dict + + cd["__eq__"] = self._add_method_dunders( + _make_eq(self._cls, self._attrs) + ) + cd["__ne__"] = self._add_method_dunders(_make_ne()) + + return self + + def add_order(self): + cd = self._cls_dict + + cd["__lt__"], cd["__le__"], cd["__gt__"], cd["__ge__"] = ( + self._add_method_dunders(meth) + for meth in _make_order(self._cls, self._attrs) + ) + + return self + + def add_setattr(self): + if self._frozen: + return self + + sa_attrs = {} + for a in self._attrs: + on_setattr = a.on_setattr or self._on_setattr + if on_setattr and on_setattr is not setters.NO_OP: + sa_attrs[a.name] = a, on_setattr + + if not sa_attrs: + return self + + if self._has_custom_setattr: + # We need to write a __setattr__ but there already is one! + raise ValueError( + "Can't combine custom __setattr__ with on_setattr hooks." + ) + + # docstring comes from _add_method_dunders + def __setattr__(self, name, val): + try: + a, hook = sa_attrs[name] + except KeyError: + nval = val + else: + nval = hook(self, a, val) + + _obj_setattr(self, name, nval) + + self._cls_dict["__attrs_own_setattr__"] = True + self._cls_dict["__setattr__"] = self._add_method_dunders(__setattr__) + self._has_own_setattr = True + + return self + + def _add_method_dunders(self, method): + """ + Add __module__ and __qualname__ to a *method* if possible. 
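+
+        Illustrative example (an editorial sketch, not upstream attrs text):
+        a method attached to a class ``C`` ends up with ``__qualname__`` set
+        to ``"C." + method.__name__`` and a ``__doc__`` of
+        ``"Method generated by attrs for class C."``.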
+        """
+        try:
+            method.__module__ = self._cls.__module__
+        except AttributeError:
+            pass
+
+        try:
+            method.__qualname__ = ".".join(
+                (self._cls.__qualname__, method.__name__)
+            )
+        except AttributeError:
+            pass
+
+        try:
+            method.__doc__ = "Method generated by attrs for class %s." % (
+                self._cls.__qualname__,
+            )
+        except AttributeError:
+            pass
+
+        return method
+
+
+_CMP_DEPRECATION = (
+    "The usage of `cmp` is deprecated and will be removed on or after "
+    "2021-06-01. Please use `eq` and `order` instead."
+)
+
+
+def _determine_attrs_eq_order(cmp, eq, order, default_eq):
+    """
+    Validate the combination of *cmp*, *eq*, and *order*. Derive the effective
+    values of eq and order. If *eq* is None, set it to *default_eq*.
+    """
+    if cmp is not None and any((eq is not None, order is not None)):
+        raise ValueError("Don't mix `cmp` with `eq` and `order`.")
+
+    # cmp takes precedence due to bw-compatibility.
+    if cmp is not None:
+        return cmp, cmp
+
+    # If left None, equality is set to the specified default and ordering
+    # mirrors equality.
+    if eq is None:
+        eq = default_eq
+
+    if order is None:
+        order = eq
+
+    if eq is False and order is True:
+        raise ValueError("`order` can only be True if `eq` is True too.")
+
+    return eq, order
+
+
+def _determine_attrib_eq_order(cmp, eq, order, default_eq):
+    """
+    Validate the combination of *cmp*, *eq*, and *order*. Derive the effective
+    values of eq and order. If *eq* is None, set it to *default_eq*.
+    """
+    if cmp is not None and any((eq is not None, order is not None)):
+        raise ValueError("Don't mix `cmp` with `eq` and `order`.")
+
+    def decide_callable_or_boolean(value):
+        """
+        Decide whether a key function is used.
+        """
+        if callable(value):
+            value, key = True, value
+        else:
+            key = None
+        return value, key
+
+    # cmp takes precedence due to bw-compatibility.
+    if cmp is not None:
+        cmp, cmp_key = decide_callable_or_boolean(cmp)
+        return cmp, cmp_key, cmp, cmp_key
+
+    # If left None, equality is set to the specified default and ordering
+    # mirrors equality.
+    if eq is None:
+        eq, eq_key = default_eq, None
+    else:
+        eq, eq_key = decide_callable_or_boolean(eq)
+
+    if order is None:
+        order, order_key = eq, eq_key
+    else:
+        order, order_key = decide_callable_or_boolean(order)
+
+    if eq is False and order is True:
+        raise ValueError("`order` can only be True if `eq` is True too.")
+
+    return eq, eq_key, order, order_key
+
+
+def _determine_whether_to_implement(
+    cls, flag, auto_detect, dunders, default=True
+):
+    """
+    Check whether we should implement a set of methods for *cls*.
+
+    *flag* is the argument passed into @attr.s like 'init', *auto_detect* is
+    the same as passed into @attr.s, and *dunders* is a tuple of attribute
+    names whose presence signals that the user has implemented it themselves.
+
+    Return *default* if no reason for or against is found.
+
+    auto_detect must be False on Python 2.
+    """
+    if flag is True or flag is False:
+        return flag
+
+    if flag is None and auto_detect is False:
+        return default
+
+    # Logically, flag is None and auto_detect is True here.
+    for dunder in dunders:
+        if _has_own_attribute(cls, dunder):
+            return False
+
+    return default
+
+
+def attrs(
+    maybe_cls=None,
+    these=None,
+    repr_ns=None,
+    repr=None,
+    cmp=None,
+    hash=None,
+    init=None,
+    slots=False,
+    frozen=False,
+    weakref_slot=True,
+    str=False,
+    auto_attribs=False,
+    kw_only=False,
+    cache_hash=False,
+    auto_exc=False,
+    eq=None,
+    order=None,
+    auto_detect=False,
+    collect_by_mro=False,
+    getstate_setstate=None,
+    on_setattr=None,
+    field_transformer=None,
+):
+    r"""
+    A class decorator that adds `dunder
+    <https://wiki.python.org/moin/DunderAlias>`_\ -methods according to the
+    specified attributes using `attr.ib` or the *these* argument.
+
+    :param these: A dictionary of name to `attr.ib` mappings. This is
+        useful to avoid the definition of your attributes within the class body
+        because you can't (e.g. if you want to add ``__repr__`` methods to
+        Django models) or don't want to.
+
+        If *these* is not ``None``, ``attrs`` will *not* search the class body
+        for attributes and will *not* remove any attributes from it.
+
+        If *these* is an ordered dict (`dict` on Python 3.6+,
+        `collections.OrderedDict` otherwise), the order is deduced from
+        the order of the attributes inside *these*. Otherwise the order
+        of the definition of the attributes is used.
+
+    :type these: `dict` of `str` to `attr.ib`
+
+    :param str repr_ns: When using nested classes, there's no way in Python 2
+        to automatically detect that. Therefore it's possible to set the
+        namespace explicitly for a more meaningful ``repr`` output.
+    :param bool auto_detect: Instead of setting the *init*, *repr*, *eq*,
+        *order*, and *hash* arguments explicitly, assume they are set to
+        ``True`` **unless any** of the involved methods for one of the
+        arguments is implemented in the *current* class (i.e. it is *not*
+        inherited from some base class).
+
+        So for example by implementing ``__eq__`` on a class yourself,
+        ``attrs`` will deduce ``eq=False`` and will create *neither*
+        ``__eq__`` *nor* ``__ne__`` (but Python classes come with a sensible
+        ``__ne__`` by default, so it *should* be enough to only implement
+        ``__eq__`` in most cases).
+
+        .. warning::
+
+           If you prevent ``attrs`` from creating the ordering methods for you
+           (``order=False``, e.g. by implementing ``__le__``), it becomes
+           *your* responsibility to make sure its ordering is sound. The best
+           way is to use the `functools.total_ordering` decorator.
+
+
+        Passing ``True`` or ``False`` to *init*, *repr*, *eq*, *order*,
+        *cmp*, or *hash* overrides whatever *auto_detect* would determine.
+
+        *auto_detect* requires Python 3. Setting it ``True`` on Python 2 raises
+        a `PythonTooOldError`.
+
+    :param bool repr: Create a ``__repr__`` method with a human readable
+        representation of ``attrs`` attributes.
+    :param bool str: Create a ``__str__`` method that is identical to
+        ``__repr__``. This is usually not necessary except for
+        `Exception`\ s.
+    :param Optional[bool] eq: If ``True`` or ``None`` (default), add ``__eq__``
+        and ``__ne__`` methods that check two instances for equality.
+
+        They compare the instances as if they were tuples of their ``attrs``
+        attributes if and only if the types of both classes are *identical*!
+    :param Optional[bool] order: If ``True``, add ``__lt__``, ``__le__``,
+        ``__gt__``, and ``__ge__`` methods that behave like *eq* above and
+        allow instances to be ordered. If ``None`` (default) mirror value of
+        *eq*.
+    :param Optional[bool] cmp: Setting *cmp* is equivalent to setting *eq*
+        and *order* to the same value. Must not be mixed with *eq* or *order*.
+    :param Optional[bool] hash: If ``None`` (default), the ``__hash__`` method
+        is generated according to how *eq* and *frozen* are set.
+
+        1. If *both* are True, ``attrs`` will generate a ``__hash__`` for you.
+        2. If *eq* is True and *frozen* is False, ``__hash__`` will be set to
+           None, marking it unhashable (which it is).
+        3. If *eq* is False, ``__hash__`` will be left untouched meaning the
+           ``__hash__`` method of the base class will be used (if base class is
+           ``object``, this means it will fall back to id-based hashing.).
+
+        Although not recommended, you can decide for yourself and force
+        ``attrs`` to create one (e.g. if the class is immutable even though you
+        didn't freeze it programmatically) by passing ``True`` or not. Both of
+        these cases are rather special and should be used carefully.
+
+        See our documentation on `hashing`, Python's documentation on
+        `object.__hash__`, and the `GitHub issue that led to the default \
+        behavior <https://github.com/python-attrs/attrs/issues/136>`_ for more
+        details.
+    :param bool init: Create a ``__init__`` method that initializes the
+        ``attrs`` attributes. Leading underscores are stripped for the argument
+        name. If a ``__attrs_pre_init__`` method exists on the class, it will
+        be called before the class is initialized. If a ``__attrs_post_init__``
+        method exists on the class, it will be called after the class is fully
+        initialized.
+
+        If ``init`` is ``False``, an ``__attrs_init__`` method will be
+        injected instead. This allows you to define a custom ``__init__``
+        method that can do pre-init work such as ``super().__init__()``,
+        and then call ``__attrs_init__()`` and ``__attrs_post_init__()``.
+    :param bool slots: Create a `slotted class <slotted classes>` that's more
+        memory-efficient. Slotted classes are generally superior to the default
+        dict classes, but have some gotchas you should know about, so we
+        encourage you to read the `glossary entry <slotted classes>`.
+    :param bool frozen: Make instances immutable after initialization. If
+        someone attempts to modify a frozen instance,
+        `attr.exceptions.FrozenInstanceError` is raised.
+
+        .. note::
+
+            1. This is achieved by installing a custom ``__setattr__`` method
+               on your class, so you can't implement your own.
+
+            2. True immutability is impossible in Python.
+
+            3. This *does* have a minor runtime performance `impact
+               <how-frozen>` when initializing new instances. In other words:
+               ``__init__`` is slightly slower with ``frozen=True``.
+
+            4. If a class is frozen, you cannot modify ``self`` in
+               ``__attrs_post_init__`` or a self-written ``__init__``. You can
+               circumvent that limitation by using
+               ``object.__setattr__(self, "attribute_name", value)``.
+
+            5. Subclasses of a frozen class are frozen too.
+
+    :param bool weakref_slot: Make instances weak-referenceable. This has no
+        effect unless ``slots`` is also enabled.
+    :param bool auto_attribs: If ``True``, collect `PEP 526`_-annotated
+        attributes (Python 3.6 and later only) from the class body.
+
+        In this case, you **must** annotate every field. If ``attrs``
+        encounters a field that is set to an `attr.ib` but lacks a type
+        annotation, an `attr.exceptions.UnannotatedAttributeError` is
+        raised. Use ``field_name: typing.Any = attr.ib(...)`` if you don't
+        want to set a type.
+
+        If you assign a value to those attributes (e.g. ``x: int = 42``), that
+        value becomes the default value like if it were passed using
+        ``attr.ib(default=42)``. Passing an instance of `Factory` also
+        works as expected in most cases (see warning below).
+
+        Attributes annotated as `typing.ClassVar`, and attributes that are
+        neither annotated nor set to an `attr.ib` are **ignored**.
+
+        .. warning::
+           For features that use the attribute name to create decorators (e.g.
+           `validators <examples_validators>`), you still *must* assign `attr.ib` to
+           them. Otherwise Python will either not find the name or try to use
+           the default value to call e.g. ``validator`` on it.
+
+           These errors can be quite confusing and probably the most common bug
+           report on our bug tracker.
+
+    .. _`PEP 526`: https://www.python.org/dev/peps/pep-0526/
+    :param bool kw_only: Make all attributes keyword-only (Python 3+)
+        in the generated ``__init__`` (if ``init`` is ``False``, this
+        parameter is ignored).
+    :param bool cache_hash: Ensure that the object's hash code is computed
+        only once and stored on the object. If this is set to ``True``,
+        hashing must be either explicitly or implicitly enabled for this
+        class. If the hash code is cached, avoid any reassignments of
+        fields involved in hash code computation or mutations of the objects
+        those fields point to after object creation. If such changes occur,
+        the behavior of the object's hash code is undefined.
+    :param bool auto_exc: If the class subclasses `BaseException`
+        (which implicitly includes any subclass of any exception), the
+        following happens to behave like a well-behaved Python exception
+        class:
+
+        - the values for *eq*, *order*, and *hash* are ignored and the
+          instances compare and hash by the instance's ids (N.B. ``attrs`` will
+          *not* remove existing implementations of ``__hash__`` or the equality
+          methods. It just won't add own ones.),
+        - all attributes that are either passed into ``__init__`` or have a
+          default value are additionally available as a tuple in the ``args``
+          attribute,
+        - the value of *str* is ignored leaving ``__str__`` to base classes.
+    :param bool collect_by_mro: Setting this to `True` fixes the way ``attrs``
+        collects attributes from base classes. The default behavior is
+        incorrect in certain cases of multiple inheritance. It should be on by
+        default but is kept off for backward-compatibility.
+
+        See issue `#428 <https://github.com/python-attrs/attrs/issues/428>`_
+        for more details.
+
+    :param Optional[bool] getstate_setstate:
+        .. note::
+            This is usually only interesting for slotted classes and you should
+            probably just set *auto_detect* to `True`.
+
+        If `True`, ``__getstate__`` and
+        ``__setstate__`` are generated and attached to the class. This is
+        necessary for slotted classes to be pickleable. If left `None`, it's
+        `True` by default for slotted classes and ``False`` for dict classes.
+
+        If *auto_detect* is `True`, and *getstate_setstate* is left `None`,
+        and **either** ``__getstate__`` or ``__setstate__`` is detected directly
+        on the class (i.e. not inherited), it is set to `False` (this is usually
+        what you want).
+
+    :param on_setattr: A callable that is run whenever the user attempts to set
+        an attribute (either by assignment like ``i.x = 42`` or by using
+        `setattr` like ``setattr(i, "x", 42)``). It receives the same arguments
+        as validators: the instance, the attribute that is being modified, and
+        the new value.
+
+        If no exception is raised, the attribute is set to the return value of
+        the callable.
+
+        If a list of callables is passed, they're automatically wrapped in an
+        `attr.setters.pipe`.
+
+    :param Optional[callable] field_transformer:
+        A function that is called with the original class object and all
+        fields right before ``attrs`` finalizes the class. You can use
+        this, e.g., to automatically add converters or validators to
+        fields based on their types. See `transform-fields` for more details.
+
+    .. versionadded:: 16.0.0 *slots*
+    .. versionadded:: 16.1.0 *frozen*
+    .. versionadded:: 16.3.0 *str*
+    .. versionadded:: 16.3.0 Support for ``__attrs_post_init__``.
+    .. versionchanged:: 17.1.0
+       *hash* supports ``None`` as value which is also the default now.
+    .. versionadded:: 17.3.0 *auto_attribs*
+    .. versionchanged:: 18.1.0
+       If *these* is passed, no attributes are deleted from the class body.
+    .. versionchanged:: 18.1.0 If *these* is ordered, the order is retained.
+    .. versionadded:: 18.2.0 *weakref_slot*
+    .. deprecated:: 18.2.0
+       ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now raise a
+       `DeprecationWarning` if the classes compared are subclasses of
+       each other. ``__eq__`` and ``__ne__`` never tried to compare subclasses
+       to each other.
+    .. versionchanged:: 19.2.0
+       ``__lt__``, ``__le__``, ``__gt__``, and ``__ge__`` now do not consider
+       subclasses comparable anymore.
+    .. versionadded:: 18.2.0 *kw_only*
+    .. versionadded:: 18.2.0 *cache_hash*
+    .. versionadded:: 19.1.0 *auto_exc*
+    .. deprecated:: 19.2.0 *cmp* Removal on or after 2021-06-01.
+    .. versionadded:: 19.2.0 *eq* and *order*
+    .. versionadded:: 20.1.0 *auto_detect*
+    .. versionadded:: 20.1.0 *collect_by_mro*
+    .. versionadded:: 20.1.0 *getstate_setstate*
+    .. versionadded:: 20.1.0 *on_setattr*
+    .. versionadded:: 20.3.0 *field_transformer*
+    .. versionchanged:: 21.1.0
+       ``init=False`` injects ``__attrs_init__``
+    .. versionchanged:: 21.1.0 Support for ``__attrs_pre_init__``
+    .. versionchanged:: 21.1.0 *cmp* undeprecated
+    """
+    if auto_detect and PY2:
+        raise PythonTooOldError(
+            "auto_detect only works on Python 3 and later."
+        )
+
+    eq_, order_ = _determine_attrs_eq_order(cmp, eq, order, None)
+    hash_ = hash  # work around the lack of nonlocal
+
+    if isinstance(on_setattr, (list, tuple)):
+        on_setattr = setters.pipe(*on_setattr)
+
+    def wrap(cls):
+
+        if getattr(cls, "__class__", None) is None:
+            raise TypeError("attrs only works with new-style classes.")
+
+        is_frozen = frozen or _has_frozen_base_class(cls)
+        is_exc = auto_exc is True and issubclass(cls, BaseException)
+        has_own_setattr = auto_detect and _has_own_attribute(
+            cls, "__setattr__"
+        )
+
+        if has_own_setattr and is_frozen:
+            raise ValueError("Can't freeze a class with a custom __setattr__.")
+
+        builder = _ClassBuilder(
+            cls,
+            these,
+            slots,
+            is_frozen,
+            weakref_slot,
+            _determine_whether_to_implement(
+                cls,
+                getstate_setstate,
+                auto_detect,
+                ("__getstate__", "__setstate__"),
+                default=slots,
+            ),
+            auto_attribs,
+            kw_only,
+            cache_hash,
+            is_exc,
+            collect_by_mro,
+            on_setattr,
+            has_own_setattr,
+            field_transformer,
+        )
+        if _determine_whether_to_implement(
+            cls, repr, auto_detect, ("__repr__",)
+        ):
+            builder.add_repr(repr_ns)
+        if str is True:
+            builder.add_str()
+
+        eq = _determine_whether_to_implement(
+            cls, eq_, auto_detect, ("__eq__", "__ne__")
+        )
+        if not is_exc and eq is True:
+            builder.add_eq()
+        if not is_exc and _determine_whether_to_implement(
+            cls, order_, auto_detect, ("__lt__", "__le__", "__gt__", "__ge__")
+        ):
+            builder.add_order()
+
+        builder.add_setattr()
+
+        if (
+            hash_ is None
+            and auto_detect is True
+            and _has_own_attribute(cls, "__hash__")
+        ):
+            hash = False
+        else:
+            hash = hash_
+        if hash is not True and hash is not False and hash is not None:
+            # Can't use `hash in` because 1 == True for example.
+            raise TypeError(
+                "Invalid value for hash. Must be True, False, or None."
+            )
+        elif hash is False or (hash is None and eq is False) or is_exc:
+            # Don't do anything. Should fall back to __object__'s __hash__
+            # which is by id.
+            if cache_hash:
+                raise TypeError(
+                    "Invalid value for cache_hash. To use hash caching,"
+                    " hashing must be either explicitly or implicitly "
+                    "enabled."
+                )
+        elif hash is True or (
+            hash is None and eq is True and is_frozen is True
+        ):
+            # Build a __hash__ if told so, or if it's safe.
+            builder.add_hash()
+        else:
+            # Raise TypeError on attempts to hash.
+            if cache_hash:
+                raise TypeError(
+                    "Invalid value for cache_hash. To use hash caching,"
+                    " hashing must be either explicitly or implicitly "
+                    "enabled."
+                )
+            builder.make_unhashable()
+
+        if _determine_whether_to_implement(
+            cls, init, auto_detect, ("__init__",)
+        ):
+            builder.add_init()
+        else:
+            builder.add_attrs_init()
+            if cache_hash:
+                raise TypeError(
+                    "Invalid value for cache_hash. To use hash caching,"
+                    " init must be True."
+                )
+
+        return builder.build_class()
+
+    # maybe_cls's type depends on the usage of the decorator. It's a class
+    # if it's used as `@attrs` but ``None`` if used as `@attrs()`.
+    if maybe_cls is None:
+        return wrap
+    else:
+        return wrap(maybe_cls)
+
+
+_attrs = attrs
+"""
+Internal alias so we can use it in functions that take an argument called
+*attrs*.
+"""
+
+
+if PY2:
+
+    def _has_frozen_base_class(cls):
+        """
+        Check whether *cls* has a frozen ancestor by looking at its
+        __setattr__.
+        """
+        return (
+            getattr(cls.__setattr__, "__module__", None)
+            == _frozen_setattrs.__module__
+            and cls.__setattr__.__name__ == _frozen_setattrs.__name__
+        )
+
+
+else:
+
+    def _has_frozen_base_class(cls):
+        """
+        Check whether *cls* has a frozen ancestor by looking at its
+        __setattr__.
+        """
+        return cls.__setattr__ == _frozen_setattrs
+
+
+def _generate_unique_filename(cls, func_name):
+    """
+    Create a "filename" suitable for a function being generated.
+    """
+    unique_id = uuid.uuid4()
+    extra = ""
+    count = 1
+
+    while True:
+        unique_filename = "<attrs generated {0} {1}.{2}{3}>".format(
+            func_name,
+            cls.__module__,
+            getattr(cls, "__qualname__", cls.__name__),
+            extra,
+        )
+        # To handle concurrency we essentially "reserve" our spot in
+        # the linecache with a dummy line. The caller can then
+        # set this value correctly.
+        cache_line = (1, None, (str(unique_id),), unique_filename)
+        if (
+            linecache.cache.setdefault(unique_filename, cache_line)
+            == cache_line
+        ):
+            return unique_filename
+
+        # Looks like this spot is taken. Try again.
+        count += 1
+        extra = "-{0}".format(count)
+
+
+def _make_hash(cls, attrs, frozen, cache_hash):
+    attrs = tuple(
+        a for a in attrs if a.hash is True or (a.hash is None and a.eq is True)
+    )
+
+    tab = "        "
+
+    unique_filename = _generate_unique_filename(cls, "hash")
+    type_hash = hash(unique_filename)
+
+    hash_def = "def __hash__(self"
+    hash_func = "hash(("
+    closing_braces = "))"
+    if not cache_hash:
+        hash_def += "):"
+    else:
+        if not PY2:
+            hash_def += ", *"
+
+        hash_def += (
+            ", _cache_wrapper="
+            + "__import__('attr._make')._make._CacheHashWrapper):"
+        )
+        hash_func = "_cache_wrapper(" + hash_func
+        closing_braces += ")"
+
+    method_lines = [hash_def]
+
+    def append_hash_computation_lines(prefix, indent):
+        """
+        Generate the code for actually computing the hash code.
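+
+        Illustrative sketch (an editorial assumption, not upstream text): for
+        attributes ``x`` and ``y`` the appended lines spell out roughly
+        ``hash((<type_hash>, self.x, self.y,))``, with the integer seed
+        *type_hash* inlined.
+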
+ Below this will either be returned directly or used to compute + a value which is then cached, depending on the value of cache_hash + """ + + method_lines.extend( + [ + indent + prefix + hash_func, + indent + " %d," % (type_hash,), + ] + ) + + for a in attrs: + method_lines.append(indent + " self.%s," % a.name) + + method_lines.append(indent + " " + closing_braces) + + if cache_hash: + method_lines.append(tab + "if self.%s is None:" % _hash_cache_field) + if frozen: + append_hash_computation_lines( + "object.__setattr__(self, '%s', " % _hash_cache_field, tab * 2 + ) + method_lines.append(tab * 2 + ")") # close __setattr__ + else: + append_hash_computation_lines( + "self.%s = " % _hash_cache_field, tab * 2 + ) + method_lines.append(tab + "return self.%s" % _hash_cache_field) + else: + append_hash_computation_lines("return ", tab) + + script = "\n".join(method_lines) + return _make_method("__hash__", script, unique_filename) + + +def _add_hash(cls, attrs): + """ + Add a hash method to *cls*. + """ + cls.__hash__ = _make_hash(cls, attrs, frozen=False, cache_hash=False) + return cls + + +def _make_ne(): + """ + Create __ne__ method. + """ + + def __ne__(self, other): + """ + Check equality and either forward a NotImplemented or + return the result negated. + """ + result = self.__eq__(other) + if result is NotImplemented: + return NotImplemented + + return not result + + return __ne__ + + +def _make_eq(cls, attrs): + """ + Create __eq__ method for *cls* with *attrs*. + """ + attrs = [a for a in attrs if a.eq] + + unique_filename = _generate_unique_filename(cls, "eq") + lines = [ + "def __eq__(self, other):", + " if other.__class__ is not self.__class__:", + " return NotImplemented", + ] + + # We can't just do a big self.x = other.x and... clause due to + # irregularities like nan == nan is false but (nan,) == (nan,) is true. + globs = {} + if attrs: + lines.append(" return (") + others = [" ) == ("] + for a in attrs: + if a.eq_key: + cmp_name = "_%s_key" % (a.name,) + # Add the key function to the global namespace + # of the evaluated function. + globs[cmp_name] = a.eq_key + lines.append( + " %s(self.%s)," + % ( + cmp_name, + a.name, + ) + ) + others.append( + " %s(other.%s)," + % ( + cmp_name, + a.name, + ) + ) + else: + lines.append(" self.%s," % (a.name,)) + others.append(" other.%s," % (a.name,)) + + lines += others + [" )"] + else: + lines.append(" return True") + + script = "\n".join(lines) + + return _make_method("__eq__", script, unique_filename, globs) + + +def _make_order(cls, attrs): + """ + Create ordering methods for *cls* with *attrs*. + """ + attrs = [a for a in attrs if a.order] + + def attrs_to_tuple(obj): + """ + Save us some typing. + """ + return tuple( + key(value) if key else value + for value, key in ( + (getattr(obj, a.name), a.order_key) for a in attrs + ) + ) + + def __lt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) < attrs_to_tuple(other) + + return NotImplemented + + def __le__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) <= attrs_to_tuple(other) + + return NotImplemented + + def __gt__(self, other): + """ + Automatically created by attrs. + """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) > attrs_to_tuple(other) + + return NotImplemented + + def __ge__(self, other): + """ + Automatically created by attrs. 
+ """ + if other.__class__ is self.__class__: + return attrs_to_tuple(self) >= attrs_to_tuple(other) + + return NotImplemented + + return __lt__, __le__, __gt__, __ge__ + + +def _add_eq(cls, attrs=None): + """ + Add equality methods to *cls* with *attrs*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__eq__ = _make_eq(cls, attrs) + cls.__ne__ = _make_ne() + + return cls + + +_already_repring = threading.local() + + +def _make_repr(attrs, ns): + """ + Make a repr method that includes relevant *attrs*, adding *ns* to the full + name. + """ + + # Figure out which attributes to include, and which function to use to + # format them. The a.repr value can be either bool or a custom callable. + attr_names_with_reprs = tuple( + (a.name, repr if a.repr is True else a.repr) + for a in attrs + if a.repr is not False + ) + + def __repr__(self): + """ + Automatically created by attrs. + """ + try: + working_set = _already_repring.working_set + except AttributeError: + working_set = set() + _already_repring.working_set = working_set + + if id(self) in working_set: + return "..." + real_cls = self.__class__ + if ns is None: + qualname = getattr(real_cls, "__qualname__", None) + if qualname is not None: + class_name = qualname.rsplit(">.", 1)[-1] + else: + class_name = real_cls.__name__ + else: + class_name = ns + "." + real_cls.__name__ + + # Since 'self' remains on the stack (i.e.: strongly referenced) for the + # duration of this call, it's safe to depend on id(...) stability, and + # not need to track the instance and therefore worry about properties + # like weakref- or hash-ability. + working_set.add(id(self)) + try: + result = [class_name, "("] + first = True + for name, attr_repr in attr_names_with_reprs: + if first: + first = False + else: + result.append(", ") + result.extend( + (name, "=", attr_repr(getattr(self, name, NOTHING))) + ) + return "".join(result) + ")" + finally: + working_set.remove(id(self)) + + return __repr__ + + +def _add_repr(cls, ns=None, attrs=None): + """ + Add a repr method to *cls*. + """ + if attrs is None: + attrs = cls.__attrs_attrs__ + + cls.__repr__ = _make_repr(attrs, ns) + return cls + + +def fields(cls): + """ + Return the tuple of ``attrs`` attributes for a class. + + The tuple also allows accessing the fields by their names (see below for + examples). + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: tuple (with name accessors) of `attr.Attribute` + + .. versionchanged:: 16.2.0 Returned tuple allows accessing the fields + by name. + """ + if not isclass(cls): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return attrs + + +def fields_dict(cls): + """ + Return an ordered dictionary of ``attrs`` attributes for a class, whose + keys are the attribute names. + + :param type cls: Class to introspect. + + :raise TypeError: If *cls* is not a class. + :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` + class. + + :rtype: an ordered dict where keys are attribute names and values are + `attr.Attribute`\\ s. This will be a `dict` if it's + naturally ordered like on Python 3.6+ or an + :class:`~collections.OrderedDict` otherwise. + + .. 
versionadded:: 18.1.0 + """ + if not isclass(cls): + raise TypeError("Passed object must be a class.") + attrs = getattr(cls, "__attrs_attrs__", None) + if attrs is None: + raise NotAnAttrsClassError( + "{cls!r} is not an attrs-decorated class.".format(cls=cls) + ) + return ordered_dict(((a.name, a) for a in attrs)) + + +def validate(inst): + """ + Validate all attributes on *inst* that have a validator. + + Leaves all exceptions through. + + :param inst: Instance of a class with ``attrs`` attributes. + """ + if _config._run_validators is False: + return + + for a in fields(inst.__class__): + v = a.validator + if v is not None: + v(inst, a, getattr(inst, a.name)) + + +def _is_slot_cls(cls): + return "__slots__" in cls.__dict__ + + +def _is_slot_attr(a_name, base_attr_map): + """ + Check if the attribute name comes from a slot class. + """ + return a_name in base_attr_map and _is_slot_cls(base_attr_map[a_name]) + + +def _make_init( + cls, + attrs, + pre_init, + post_init, + frozen, + slots, + cache_hash, + base_attr_map, + is_exc, + has_global_on_setattr, + attrs_init, +): + if frozen and has_global_on_setattr: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = cache_hash or frozen + filtered_attrs = [] + attr_dict = {} + for a in attrs: + if not a.init and a.default is NOTHING: + continue + + filtered_attrs.append(a) + attr_dict[a.name] = a + + if a.on_setattr is not None: + if frozen is True: + raise ValueError("Frozen classes can't use on_setattr.") + + needs_cached_setattr = True + elif ( + has_global_on_setattr and a.on_setattr is not setters.NO_OP + ) or _is_slot_attr(a.name, base_attr_map): + needs_cached_setattr = True + + unique_filename = _generate_unique_filename(cls, "init") + + script, globs, annotations = _attrs_to_init_script( + filtered_attrs, + frozen, + slots, + pre_init, + post_init, + cache_hash, + base_attr_map, + is_exc, + needs_cached_setattr, + has_global_on_setattr, + attrs_init, + ) + if cls.__module__ in sys.modules: + # This makes typing.get_type_hints(CLS.__init__) resolve string types. + globs.update(sys.modules[cls.__module__].__dict__) + + globs.update({"NOTHING": NOTHING, "attr_dict": attr_dict}) + + if needs_cached_setattr: + # Save the lookup overhead in __init__ if we need to circumvent + # setattr hooks. + globs["_cached_setattr"] = _obj_setattr + + init = _make_method( + "__attrs_init__" if attrs_init else "__init__", + script, + unique_filename, + globs, + ) + init.__annotations__ = annotations + + return init + + +def _setattr(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*. + """ + return "_setattr('%s', %s)" % (attr_name, value_var) + + +def _setattr_with_converter(attr_name, value_var, has_on_setattr): + """ + Use the cached object.setattr to set *attr_name* to *value_var*, but run + its converter first. + """ + return "_setattr('%s', %s(%s))" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +def _assign(attr_name, value, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment. Otherwise + relegate to _setattr. + """ + if has_on_setattr: + return _setattr(attr_name, value, True) + + return "self.%s = %s" % (attr_name, value) + + +def _assign_with_converter(attr_name, value_var, has_on_setattr): + """ + Unless *attr_name* has an on_setattr hook, use normal assignment after + conversion. Otherwise relegate to _setattr_with_converter. 
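+
+    Illustrative sketch (assuming upstream's ``_init_converter_pat`` naming
+    scheme, which is not shown here): for an attribute ``x`` this returns
+    roughly ``"self.x = __attr_converter_x(x)"``.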
+ """ + if has_on_setattr: + return _setattr_with_converter(attr_name, value_var, True) + + return "self.%s = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + +if PY2: + + def _unpack_kw_only_py2(attr_name, default=None): + """ + Unpack *attr_name* from _kw_only dict. + """ + if default is not None: + arg_default = ", %s" % default + else: + arg_default = "" + return "%s = _kw_only.pop('%s'%s)" % ( + attr_name, + attr_name, + arg_default, + ) + + def _unpack_kw_only_lines_py2(kw_only_args): + """ + Unpack all *kw_only_args* from _kw_only dict and handle errors. + + Given a list of strings "{attr_name}" and "{attr_name}={default}" + generates list of lines of code that pop attrs from _kw_only dict and + raise TypeError similar to builtin if required attr is missing or + extra key is passed. + + >>> print("\n".join(_unpack_kw_only_lines_py2(["a", "b=42"]))) + try: + a = _kw_only.pop('a') + b = _kw_only.pop('b', 42) + except KeyError as _key_error: + raise TypeError( + ... + if _kw_only: + raise TypeError( + ... + """ + lines = ["try:"] + lines.extend( + " " + _unpack_kw_only_py2(*arg.split("=")) + for arg in kw_only_args + ) + lines += """\ +except KeyError as _key_error: + raise TypeError( + '__init__() missing required keyword-only argument: %s' % _key_error + ) +if _kw_only: + raise TypeError( + '__init__() got an unexpected keyword argument %r' + % next(iter(_kw_only)) + ) +""".split( + "\n" + ) + return lines + + +def _attrs_to_init_script( + attrs, + frozen, + slots, + pre_init, + post_init, + cache_hash, + base_attr_map, + is_exc, + needs_cached_setattr, + has_global_on_setattr, + attrs_init, +): + """ + Return a script of an initializer for *attrs* and a dict of globals. + + The globals are expected by the generated script. + + If *frozen* is True, we cannot set the attributes directly so we use + a cached ``object.__setattr__``. + """ + lines = [] + if pre_init: + lines.append("self.__attrs_pre_init__()") + + if needs_cached_setattr: + lines.append( + # Circumvent the __setattr__ descriptor to save one lookup per + # assignment. + # Note _setattr will be used again below if cache_hash is True + "_setattr = _cached_setattr.__get__(self, self.__class__)" + ) + + if frozen is True: + if slots is True: + fmt_setter = _setattr + fmt_setter_with_converter = _setattr_with_converter + else: + # Dict frozen classes assign directly to __dict__. + # But only if the attribute doesn't come from an ancestor slot + # class. + # Note _inst_dict will be used again below if cache_hash is True + lines.append("_inst_dict = self.__dict__") + + def fmt_setter(attr_name, value_var, has_on_setattr): + if _is_slot_attr(attr_name, base_attr_map): + return _setattr(attr_name, value_var, has_on_setattr) + + return "_inst_dict['%s'] = %s" % (attr_name, value_var) + + def fmt_setter_with_converter( + attr_name, value_var, has_on_setattr + ): + if has_on_setattr or _is_slot_attr(attr_name, base_attr_map): + return _setattr_with_converter( + attr_name, value_var, has_on_setattr + ) + + return "_inst_dict['%s'] = %s(%s)" % ( + attr_name, + _init_converter_pat % (attr_name,), + value_var, + ) + + else: + # Not frozen. + fmt_setter = _assign + fmt_setter_with_converter = _assign_with_converter + + args = [] + kw_only_args = [] + attrs_to_validate = [] + + # This is a dictionary of names to validator and converter callables. + # Injecting this into __init__ globals lets us avoid lookups. 
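+    # Illustrative sketch (an editorial note, not upstream text): for a
+    # simple dict class such as
+    #
+    #     @attr.s
+    #     class C(object):
+    #         x = attr.ib()
+    #         y = attr.ib(default=42)
+    #
+    # the script assembled below looks roughly like
+    #
+    #     def __init__(self, x, y=attr_dict['y'].default):
+    #         self.x = x
+    #         self.y = y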
+ names_for_globals = {} + annotations = {"return": None} + + for a in attrs: + if a.validator: + attrs_to_validate.append(a) + + attr_name = a.name + has_on_setattr = a.on_setattr is not None or ( + a.on_setattr is not setters.NO_OP and has_global_on_setattr + ) + arg_name = a.name.lstrip("_") + + has_factory = isinstance(a.default, Factory) + if has_factory and a.default.takes_self: + maybe_self = "self" + else: + maybe_self = "" + + if a.init is False: + if has_factory: + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + init_factory_name + "(%s)" % (maybe_self,), + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + conv_name = _init_converter_pat % (a.name,) + names_for_globals[conv_name] = a.converter + else: + lines.append( + fmt_setter( + attr_name, + "attr_dict['%s'].default" % (attr_name,), + has_on_setattr, + ) + ) + elif a.default is not NOTHING and not has_factory: + arg = "%s=attr_dict['%s'].default" % (arg_name, attr_name) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + elif has_factory: + arg = "%s=NOTHING" % (arg_name,) + if a.kw_only: + kw_only_args.append(arg) + else: + args.append(arg) + lines.append("if %s is not NOTHING:" % (arg_name,)) + + init_factory_name = _init_factory_pat.format(a.name) + if a.converter is not None: + lines.append( + " " + + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter_with_converter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append( + " " + fmt_setter(attr_name, arg_name, has_on_setattr) + ) + lines.append("else:") + lines.append( + " " + + fmt_setter( + attr_name, + init_factory_name + "(" + maybe_self + ")", + has_on_setattr, + ) + ) + names_for_globals[init_factory_name] = a.default.factory + else: + if a.kw_only: + kw_only_args.append(arg_name) + else: + args.append(arg_name) + + if a.converter is not None: + lines.append( + fmt_setter_with_converter( + attr_name, arg_name, has_on_setattr + ) + ) + names_for_globals[ + _init_converter_pat % (a.name,) + ] = a.converter + else: + lines.append(fmt_setter(attr_name, arg_name, has_on_setattr)) + + if a.init is True: + if a.type is not None and a.converter is None: + annotations[arg_name] = a.type + elif a.converter is not None and not PY2: + # Try to get the type from the converter. 
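+                # (Illustrative assumption: a converter declared as
+                # ``def conv(v: str) -> int`` contributes ``str`` as the
+                # annotation of the corresponding __init__ argument.)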
+ sig = None + try: + sig = inspect.signature(a.converter) + except (ValueError, TypeError): # inspect failed + pass + if sig: + sig_params = list(sig.parameters.values()) + if ( + sig_params + and sig_params[0].annotation + is not inspect.Parameter.empty + ): + annotations[arg_name] = sig_params[0].annotation + + if attrs_to_validate: # we can skip this if there are no validators. + names_for_globals["_config"] = _config + lines.append("if _config._run_validators is True:") + for a in attrs_to_validate: + val_name = "__attr_validator_" + a.name + attr_name = "__attr_" + a.name + lines.append( + " %s(self, %s, self.%s)" % (val_name, attr_name, a.name) + ) + names_for_globals[val_name] = a.validator + names_for_globals[attr_name] = a + + if post_init: + lines.append("self.__attrs_post_init__()") + + # because this is set only after __attrs_post_init is called, a crash + # will result if post-init tries to access the hash code. This seemed + # preferable to setting this beforehand, in which case alteration to + # field values during post-init combined with post-init accessing the + # hash code would result in silent bugs. + if cache_hash: + if frozen: + if slots: + # if frozen and slots, then _setattr defined above + init_hash_cache = "_setattr('%s', %s)" + else: + # if frozen and not slots, then _inst_dict defined above + init_hash_cache = "_inst_dict['%s'] = %s" + else: + init_hash_cache = "self.%s = %s" + lines.append(init_hash_cache % (_hash_cache_field, "None")) + + # For exceptions we rely on BaseException.__init__ for proper + # initialization. + if is_exc: + vals = ",".join("self." + a.name for a in attrs if a.init) + + lines.append("BaseException.__init__(self, %s)" % (vals,)) + + args = ", ".join(args) + if kw_only_args: + if PY2: + lines = _unpack_kw_only_lines_py2(kw_only_args) + lines + + args += "%s**_kw_only" % (", " if args else "",) # leading comma + else: + args += "%s*, %s" % ( + ", " if args else "", # leading comma + ", ".join(kw_only_args), # kw_only args + ) + return ( + """\ +def {init_name}(self, {args}): + {lines} +""".format( + init_name=("__attrs_init__" if attrs_init else "__init__"), + args=args, + lines="\n ".join(lines) if lines else "pass", + ), + names_for_globals, + annotations, + ) + + +class Attribute(object): + """ + *Read-only* representation of an attribute. + + Instances of this class are frequently used for introspection purposes + like: + + - `fields` returns a tuple of them. + - Validators get them passed as the first argument. + - The *field transformer* hook receives a list of them. + + :attribute name: The name of the attribute. + :attribute inherited: Whether or not that attribute has been inherited from + a base class. + + Plus *all* arguments of `attr.ib` (except for ``factory`` + which is only syntactic sugar for ``default=Factory(...)``. + + .. versionadded:: 20.1.0 *inherited* + .. versionadded:: 20.1.0 *on_setattr* + .. versionchanged:: 20.2.0 *inherited* is not taken into account for + equality checks and hashing anymore. + .. versionadded:: 21.1.0 *eq_key* and *order_key* + + For the full version history of the fields, see `attr.ib`. + """ + + __slots__ = ( + "name", + "default", + "validator", + "repr", + "eq", + "eq_key", + "order", + "order_key", + "hash", + "init", + "metadata", + "type", + "converter", + "kw_only", + "inherited", + "on_setattr", + ) + + def __init__( + self, + name, + default, + validator, + repr, + cmp, # XXX: unused, remove along with other cmp code. 
+ hash, + init, + inherited, + metadata=None, + type=None, + converter=None, + kw_only=False, + eq=None, + eq_key=None, + order=None, + order_key=None, + on_setattr=None, + ): + eq, eq_key, order, order_key = _determine_attrib_eq_order( + cmp, eq_key or eq, order_key or order, True + ) + + # Cache this descriptor here to speed things up later. + bound_setattr = _obj_setattr.__get__(self, Attribute) + + # Despite the big red warning, people *do* instantiate `Attribute` + # themselves. + bound_setattr("name", name) + bound_setattr("default", default) + bound_setattr("validator", validator) + bound_setattr("repr", repr) + bound_setattr("eq", eq) + bound_setattr("eq_key", eq_key) + bound_setattr("order", order) + bound_setattr("order_key", order_key) + bound_setattr("hash", hash) + bound_setattr("init", init) + bound_setattr("converter", converter) + bound_setattr( + "metadata", + ( + metadata_proxy(metadata) + if metadata + else _empty_metadata_singleton + ), + ) + bound_setattr("type", type) + bound_setattr("kw_only", kw_only) + bound_setattr("inherited", inherited) + bound_setattr("on_setattr", on_setattr) + + def __setattr__(self, name, value): + raise FrozenInstanceError() + + @classmethod + def from_counting_attr(cls, name, ca, type=None): + # type holds the annotated value. deal with conflicts: + if type is None: + type = ca.type + elif ca.type is not None: + raise ValueError( + "Type annotation and type argument cannot both be present" + ) + inst_dict = { + k: getattr(ca, k) + for k in Attribute.__slots__ + if k + not in ( + "name", + "validator", + "default", + "type", + "inherited", + ) # exclude methods and deprecated alias + } + return cls( + name=name, + validator=ca._validator, + default=ca._default, + type=type, + cmp=None, + inherited=False, + **inst_dict + ) + + @property + def cmp(self): + """ + Simulate the presence of a cmp attribute and warn. + """ + warnings.warn(_CMP_DEPRECATION, DeprecationWarning, stacklevel=2) + + return self.eq and self.order + + # Don't use attr.evolve since fields(Attribute) doesn't work + def evolve(self, **changes): + """ + Copy *self* and apply *changes*. + + This works similarly to `attr.evolve` but that function does not work + with ``Attribute``. + + It is mainly meant to be used for `transform-fields`. + + .. versionadded:: 20.3.0 + """ + new = copy.copy(self) + + new._setattrs(changes.items()) + + return new + + # Don't use _add_pickle since fields(Attribute) doesn't work + def __getstate__(self): + """ + Play nice with pickle. + """ + return tuple( + getattr(self, name) if name != "metadata" else dict(self.metadata) + for name in self.__slots__ + ) + + def __setstate__(self, state): + """ + Play nice with pickle. 
+ """ + self._setattrs(zip(self.__slots__, state)) + + def _setattrs(self, name_values_pairs): + bound_setattr = _obj_setattr.__get__(self, Attribute) + for name, value in name_values_pairs: + if name != "metadata": + bound_setattr(name, value) + else: + bound_setattr( + name, + metadata_proxy(value) + if value + else _empty_metadata_singleton, + ) + + +_a = [ + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + eq=True, + order=False, + hash=(name != "metadata"), + init=True, + inherited=False, + ) + for name in Attribute.__slots__ +] + +Attribute = _add_hash( + _add_eq( + _add_repr(Attribute, attrs=_a), + attrs=[a for a in _a if a.name != "inherited"], + ), + attrs=[a for a in _a if a.hash and a.name != "inherited"], +) + + +class _CountingAttr(object): + """ + Intermediate representation of attributes that uses a counter to preserve + the order in which the attributes have been defined. + + *Internal* data structure of the attrs library. Running into is most + likely the result of a bug like a forgotten `@attr.s` decorator. + """ + + __slots__ = ( + "counter", + "_default", + "repr", + "eq", + "eq_key", + "order", + "order_key", + "hash", + "init", + "metadata", + "_validator", + "converter", + "type", + "kw_only", + "on_setattr", + ) + __attrs_attrs__ = tuple( + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + hash=True, + init=True, + kw_only=False, + eq=True, + eq_key=None, + order=False, + order_key=None, + inherited=False, + on_setattr=None, + ) + for name in ( + "counter", + "_default", + "repr", + "eq", + "order", + "hash", + "init", + "on_setattr", + ) + ) + ( + Attribute( + name="metadata", + default=None, + validator=None, + repr=True, + cmp=None, + hash=False, + init=True, + kw_only=False, + eq=True, + eq_key=None, + order=False, + order_key=None, + inherited=False, + on_setattr=None, + ), + ) + cls_counter = 0 + + def __init__( + self, + default, + validator, + repr, + cmp, + hash, + init, + converter, + metadata, + type, + kw_only, + eq, + eq_key, + order, + order_key, + on_setattr, + ): + _CountingAttr.cls_counter += 1 + self.counter = _CountingAttr.cls_counter + self._default = default + self._validator = validator + self.converter = converter + self.repr = repr + self.eq = eq + self.eq_key = eq_key + self.order = order + self.order_key = order_key + self.hash = hash + self.init = init + self.metadata = metadata + self.type = type + self.kw_only = kw_only + self.on_setattr = on_setattr + + def validator(self, meth): + """ + Decorator that adds *meth* to the list of validators. + + Returns *meth* unchanged. + + .. versionadded:: 17.1.0 + """ + if self._validator is None: + self._validator = meth + else: + self._validator = and_(self._validator, meth) + return meth + + def default(self, meth): + """ + Decorator that allows to set the default for an attribute. + + Returns *meth* unchanged. + + :raises DefaultAlreadySetError: If default has been set before. + + .. versionadded:: 17.1.0 + """ + if self._default is not NOTHING: + raise DefaultAlreadySetError() + + self._default = Factory(meth, takes_self=True) + + return meth + + +_CountingAttr = _add_eq(_add_repr(_CountingAttr)) + + +class Factory(object): + """ + Stores a factory callable. + + If passed as the default value to `attr.ib`, the factory is used to + generate a new value. + + :param callable factory: A callable that takes either none or exactly one + mandatory positional argument depending on *takes_self*. 
+ :param bool takes_self: Pass the partially initialized instance that is + being initialized as a positional argument. + + .. versionadded:: 17.1.0 *takes_self* + """ + + __slots__ = ("factory", "takes_self") + + def __init__(self, factory, takes_self=False): + """ + `Factory` is part of the default machinery so if we want a default + value here, we have to implement it ourselves. + """ + self.factory = factory + self.takes_self = takes_self + + def __getstate__(self): + """ + Play nice with pickle. + """ + return tuple(getattr(self, name) for name in self.__slots__) + + def __setstate__(self, state): + """ + Play nice with pickle. + """ + for name, value in zip(self.__slots__, state): + setattr(self, name, value) + + +_f = [ + Attribute( + name=name, + default=NOTHING, + validator=None, + repr=True, + cmp=None, + eq=True, + order=False, + hash=True, + init=True, + inherited=False, + ) + for name in Factory.__slots__ +] + +Factory = _add_hash(_add_eq(_add_repr(Factory, attrs=_f), attrs=_f), attrs=_f) + + +def make_class(name, attrs, bases=(object,), **attributes_arguments): + """ + A quick way to create a new class called *name* with *attrs*. + + :param str name: The name for the new class. + + :param attrs: A list of names or a dictionary of mappings of names to + attributes. + + If *attrs* is a list or an ordered dict (`dict` on Python 3.6+, + `collections.OrderedDict` otherwise), the order is deduced from + the order of the names or attributes inside *attrs*. Otherwise the + order of the definition of the attributes is used. + :type attrs: `list` or `dict` + + :param tuple bases: Classes that the new class will subclass. + + :param attributes_arguments: Passed unmodified to `attr.s`. + + :return: A new class with *attrs*. + :rtype: type + + .. versionadded:: 17.1.0 *bases* + .. versionchanged:: 18.1.0 If *attrs* is ordered, the order is retained. + """ + if isinstance(attrs, dict): + cls_dict = attrs + elif isinstance(attrs, (list, tuple)): + cls_dict = dict((a, attrib()) for a in attrs) + else: + raise TypeError("attrs argument must be a dict or a list.") + + pre_init = cls_dict.pop("__attrs_pre_init__", None) + post_init = cls_dict.pop("__attrs_post_init__", None) + user_init = cls_dict.pop("__init__", None) + + body = {} + if pre_init is not None: + body["__attrs_pre_init__"] = pre_init + if post_init is not None: + body["__attrs_post_init__"] = post_init + if user_init is not None: + body["__init__"] = user_init + + type_ = new_class(name, bases, {}, lambda ns: ns.update(body)) + + # For pickling to work, the __module__ variable needs to be set to the + # frame where the class is created. Bypass this step in environments where + # sys._getframe is not defined (Jython for example) or sys._getframe is not + # defined for arguments greater than 0 (IronPython). + try: + type_.__module__ = sys._getframe(1).f_globals.get( + "__name__", "__main__" + ) + except (AttributeError, ValueError): + pass + + # We do it here for proper warnings with meaningful stacklevel. + cmp = attributes_arguments.pop("cmp", None) + ( + attributes_arguments["eq"], + attributes_arguments["order"], + ) = _determine_attrs_eq_order( + cmp, + attributes_arguments.get("eq"), + attributes_arguments.get("order"), + True, + ) + + return _attrs(these=cls_dict, **attributes_arguments)(type_) + + +# These are required by within this module so we define them here and merely +# import into .validators / .converters. 
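+# Illustrative note (editorial, not upstream text): `and_` below flattens
+# nested composition, e.g. and_(v1, and_(v2, v3)) stores (v1, v2, v3) in a
+# single _AndValidator.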
+ + +@attrs(slots=True, hash=True) +class _AndValidator(object): + """ + Compose many validators to a single one. + """ + + _validators = attrib() + + def __call__(self, inst, attr, value): + for v in self._validators: + v(inst, attr, value) + + +def and_(*validators): + """ + A validator that composes multiple validators into one. + + When called on a value, it runs all wrapped validators. + + :param callables validators: Arbitrary number of validators. + + .. versionadded:: 17.1.0 + """ + vals = [] + for validator in validators: + vals.extend( + validator._validators + if isinstance(validator, _AndValidator) + else [validator] + ) + + return _AndValidator(tuple(vals)) + + +def pipe(*converters): + """ + A converter that composes multiple converters into one. + + When called on a value, it runs all wrapped converters, returning the + *last* value. + + Type annotations will be inferred from the wrapped converters', if + they have any. + + :param callables converters: Arbitrary number of converters. + + .. versionadded:: 20.1.0 + """ + + def pipe_converter(val): + for converter in converters: + val = converter(val) + + return val + + if not PY2: + if not converters: + # If the converter list is empty, pipe_converter is the identity. + A = typing.TypeVar("A") + pipe_converter.__annotations__ = {"val": A, "return": A} + else: + # Get parameter type. + sig = None + try: + sig = inspect.signature(converters[0]) + except (ValueError, TypeError): # inspect failed + pass + if sig: + params = list(sig.parameters.values()) + if ( + params + and params[0].annotation is not inspect.Parameter.empty + ): + pipe_converter.__annotations__["val"] = params[ + 0 + ].annotation + # Get return type. + sig = None + try: + sig = inspect.signature(converters[-1]) + except (ValueError, TypeError): # inspect failed + pass + if sig and sig.return_annotation is not inspect.Signature().empty: + pipe_converter.__annotations__[ + "return" + ] = sig.return_annotation + + return pipe_converter diff --git a/openpype/hosts/fusion/vendor/attr/_next_gen.py b/openpype/hosts/fusion/vendor/attr/_next_gen.py new file mode 100644 index 0000000000..fab0af966a --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_next_gen.py @@ -0,0 +1,158 @@ +""" +These are Python 3.6+-only and keyword-only APIs that call `attr.s` and +`attr.ib` with different default values. +""" + +from functools import partial + +from attr.exceptions import UnannotatedAttributeError + +from . import setters +from ._make import NOTHING, _frozen_setattrs, attrib, attrs + + +def define( + maybe_cls=None, + *, + these=None, + repr=None, + hash=None, + init=None, + slots=True, + frozen=False, + weakref_slot=True, + str=False, + auto_attribs=None, + kw_only=False, + cache_hash=False, + auto_exc=True, + eq=None, + order=False, + auto_detect=True, + getstate_setstate=None, + on_setattr=None, + field_transformer=None, +): + r""" + The only behavioral differences are the handling of the *auto_attribs* + option: + + :param Optional[bool] auto_attribs: If set to `True` or `False`, it behaves + exactly like `attr.s`. If left `None`, `attr.s` will try to guess: + + 1. If any attributes are annotated and no unannotated `attr.ib`\ s + are found, it assumes *auto_attribs=True*. + 2. Otherwise it assumes *auto_attribs=False* and tries to collect + `attr.ib`\ s. + + and that mutable classes (``frozen=False``) validate on ``__setattr__``. + + .. 
+    """
+
+    def do_it(cls, auto_attribs):
+        return attrs(
+            maybe_cls=cls,
+            these=these,
+            repr=repr,
+            hash=hash,
+            init=init,
+            slots=slots,
+            frozen=frozen,
+            weakref_slot=weakref_slot,
+            str=str,
+            auto_attribs=auto_attribs,
+            kw_only=kw_only,
+            cache_hash=cache_hash,
+            auto_exc=auto_exc,
+            eq=eq,
+            order=order,
+            auto_detect=auto_detect,
+            collect_by_mro=True,
+            getstate_setstate=getstate_setstate,
+            on_setattr=on_setattr,
+            field_transformer=field_transformer,
+        )
+
+    def wrap(cls):
+        """
+        Making this a wrapper ensures this code runs during class creation.
+
+        We also ensure that frozen-ness of classes is inherited.
+        """
+        nonlocal frozen, on_setattr
+
+        had_on_setattr = on_setattr not in (None, setters.NO_OP)
+
+        # By default, mutable classes validate on setattr.
+        if frozen is False and on_setattr is None:
+            on_setattr = setters.validate
+
+        # However, if we subclass a frozen class, we inherit the immutability
+        # and disable on_setattr.
+        for base_cls in cls.__bases__:
+            if base_cls.__setattr__ is _frozen_setattrs:
+                if had_on_setattr:
+                    raise ValueError(
+                        "Frozen classes can't use on_setattr "
+                        "(frozen-ness was inherited)."
+                    )
+
+                on_setattr = setters.NO_OP
+                break
+
+        if auto_attribs is not None:
+            return do_it(cls, auto_attribs)
+
+        try:
+            return do_it(cls, True)
+        except UnannotatedAttributeError:
+            return do_it(cls, False)
+
+    # maybe_cls's type depends on the usage of the decorator.  It's a class
+    # if it's used as `@attrs` but ``None`` if used as `@attrs()`.
+    if maybe_cls is None:
+        return wrap
+    else:
+        return wrap(maybe_cls)
+
+
+mutable = define
+frozen = partial(define, frozen=True, on_setattr=None)
+
+
+def field(
+    *,
+    default=NOTHING,
+    validator=None,
+    repr=True,
+    hash=None,
+    init=True,
+    metadata=None,
+    converter=None,
+    factory=None,
+    kw_only=False,
+    eq=None,
+    order=None,
+    on_setattr=None,
+):
+    """
+    Identical to `attr.ib`, except keyword-only and with some arguments
+    removed.
+
+    .. versionadded:: 20.1.0
+    """
+    return attrib(
+        default=default,
+        validator=validator,
+        repr=repr,
+        hash=hash,
+        init=init,
+        metadata=metadata,
+        converter=converter,
+        factory=factory,
+        kw_only=kw_only,
+        eq=eq,
+        order=order,
+        on_setattr=on_setattr,
+    )
diff --git a/openpype/hosts/fusion/vendor/attr/_version_info.py b/openpype/hosts/fusion/vendor/attr/_version_info.py
new file mode 100644
index 0000000000..014e78a1b4
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/_version_info.py
@@ -0,0 +1,85 @@
+from __future__ import absolute_import, division, print_function
+
+from functools import total_ordering
+
+from ._funcs import astuple
+from ._make import attrib, attrs
+
+
+@total_ordering
+@attrs(eq=False, order=False, slots=True, frozen=True)
+class VersionInfo(object):
+    """
+    A version object that can be compared to tuples of length 1--4:
+
+    >>> attr.VersionInfo(19, 1, 0, "final") <= (19, 2)
+    True
+    >>> attr.VersionInfo(19, 1, 0, "final") < (19, 1, 1)
+    True
+    >>> vi = attr.VersionInfo(19, 2, 0, "final")
+    >>> vi < (19, 1, 1)
+    False
+    >>> vi < (19,)
+    False
+    >>> vi == (19, 2,)
+    True
+    >>> vi == (19, 2, 1)
+    False
+
+    .. versionadded:: 19.2
+    """
+
+    year = attrib(type=int)
+    minor = attrib(type=int)
+    micro = attrib(type=int)
+    releaselevel = attrib(type=str)
+
+    @classmethod
+    def _from_version_string(cls, s):
+        """
+        Parse *s* and return a `VersionInfo`.
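+
+        A minimal sketch of the expected behavior (illustrative only):
+
+        >>> VersionInfo._from_version_string("19.2.0")
+        VersionInfo(year=19, minor=2, micro=0, releaselevel='final')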
+ """ + v = s.split(".") + if len(v) == 3: + v.append("final") + + return cls( + year=int(v[0]), minor=int(v[1]), micro=int(v[2]), releaselevel=v[3] + ) + + def _ensure_tuple(self, other): + """ + Ensure *other* is a tuple of a valid length. + + Returns a possibly transformed *other* and ourselves as a tuple of + the same length as *other*. + """ + + if self.__class__ is other.__class__: + other = astuple(other) + + if not isinstance(other, tuple): + raise NotImplementedError + + if not (1 <= len(other) <= 4): + raise NotImplementedError + + return astuple(self)[: len(other)], other + + def __eq__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + return us == them + + def __lt__(self, other): + try: + us, them = self._ensure_tuple(other) + except NotImplementedError: + return NotImplemented + + # Since alphabetically "dev0" < "final" < "post1" < "post2", we don't + # have to do anything special with releaselevel for now. + return us < them diff --git a/openpype/hosts/fusion/vendor/attr/_version_info.pyi b/openpype/hosts/fusion/vendor/attr/_version_info.pyi new file mode 100644 index 0000000000..45ced08633 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/_version_info.pyi @@ -0,0 +1,9 @@ +class VersionInfo: + @property + def year(self) -> int: ... + @property + def minor(self) -> int: ... + @property + def micro(self) -> int: ... + @property + def releaselevel(self) -> str: ... diff --git a/openpype/hosts/fusion/vendor/attr/converters.py b/openpype/hosts/fusion/vendor/attr/converters.py new file mode 100644 index 0000000000..2777db6d0a --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/converters.py @@ -0,0 +1,111 @@ +""" +Commonly useful converters. +""" + +from __future__ import absolute_import, division, print_function + +from ._compat import PY2 +from ._make import NOTHING, Factory, pipe + + +if not PY2: + import inspect + import typing + + +__all__ = [ + "pipe", + "optional", + "default_if_none", +] + + +def optional(converter): + """ + A converter that allows an attribute to be optional. An optional attribute + is one which can be set to ``None``. + + Type annotations will be inferred from the wrapped converter's, if it + has any. + + :param callable converter: the converter that is used for non-``None`` + values. + + .. versionadded:: 17.1.0 + """ + + def optional_converter(val): + if val is None: + return None + return converter(val) + + if not PY2: + sig = None + try: + sig = inspect.signature(converter) + except (ValueError, TypeError): # inspect failed + pass + if sig: + params = list(sig.parameters.values()) + if params and params[0].annotation is not inspect.Parameter.empty: + optional_converter.__annotations__["val"] = typing.Optional[ + params[0].annotation + ] + if sig.return_annotation is not inspect.Signature.empty: + optional_converter.__annotations__["return"] = typing.Optional[ + sig.return_annotation + ] + + return optional_converter + + +def default_if_none(default=NOTHING, factory=None): + """ + A converter that allows to replace ``None`` values by *default* or the + result of *factory*. + + :param default: Value to be used if ``None`` is passed. Passing an instance + of `attr.Factory` is supported, however the ``takes_self`` option + is *not*. + :param callable factory: A callable that takes no parameters whose result + is used if ``None`` is passed. + + :raises TypeError: If **neither** *default* or *factory* is passed. + :raises TypeError: If **both** *default* and *factory* are passed. 
+    :raises ValueError: If an instance of `attr.Factory` is passed with
+        ``takes_self=True``.
+
+    .. versionadded:: 18.2.0
+    """
+    if default is NOTHING and factory is None:
+        raise TypeError("Must pass either `default` or `factory`.")
+
+    if default is not NOTHING and factory is not None:
+        raise TypeError(
+            "Must pass either `default` or `factory` but not both."
+        )
+
+    if factory is not None:
+        default = Factory(factory)
+
+    if isinstance(default, Factory):
+        if default.takes_self:
+            raise ValueError(
+                "`takes_self` is not supported by default_if_none."
+            )
+
+        def default_if_none_converter(val):
+            if val is not None:
+                return val
+
+            return default.factory()
+
+    else:
+
+        def default_if_none_converter(val):
+            if val is not None:
+                return val
+
+            return default
+
+    return default_if_none_converter
diff --git a/openpype/hosts/fusion/vendor/attr/converters.pyi b/openpype/hosts/fusion/vendor/attr/converters.pyi
new file mode 100644
index 0000000000..84a57590b0
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/converters.pyi
@@ -0,0 +1,13 @@
+from typing import Callable, Optional, TypeVar, overload
+
+from . import _ConverterType
+
+
+_T = TypeVar("_T")
+
+def pipe(*validators: _ConverterType) -> _ConverterType: ...
+def optional(converter: _ConverterType) -> _ConverterType: ...
+@overload
+def default_if_none(default: _T) -> _ConverterType: ...
+@overload
+def default_if_none(*, factory: Callable[[], _T]) -> _ConverterType: ...
diff --git a/openpype/hosts/fusion/vendor/attr/exceptions.py b/openpype/hosts/fusion/vendor/attr/exceptions.py
new file mode 100644
index 0000000000..f6f9861bea
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/exceptions.py
@@ -0,0 +1,92 @@
+from __future__ import absolute_import, division, print_function
+
+
+class FrozenError(AttributeError):
+    """
+    An attempt was made to modify a frozen/immutable instance or attribute.
+
+    It mirrors the behavior of ``namedtuples`` by using the same error message
+    and subclassing `AttributeError`.
+
+    .. versionadded:: 20.1.0
+    """
+
+    msg = "can't set attribute"
+    args = [msg]
+
+
+class FrozenInstanceError(FrozenError):
+    """
+    An attempt was made to modify a frozen instance.
+
+    .. versionadded:: 16.1.0
+    """
+
+
+class FrozenAttributeError(FrozenError):
+    """
+    An attempt was made to modify a frozen attribute.
+
+    .. versionadded:: 20.1.0
+    """
+
+
+class AttrsAttributeNotFoundError(ValueError):
+    """
+    An ``attrs`` function couldn't find an attribute that the user asked for.
+
+    .. versionadded:: 16.2.0
+    """
+
+
+class NotAnAttrsClassError(ValueError):
+    """
+    A non-``attrs`` class has been passed into an ``attrs`` function.
+
+    .. versionadded:: 16.2.0
+    """
+
+
+class DefaultAlreadySetError(RuntimeError):
+    """
+    A default has been set using ``attr.ib()`` and an attempt was made to
+    reset it using the decorator.
+
+    .. versionadded:: 17.1.0
+    """
+
+
+class UnannotatedAttributeError(RuntimeError):
+    """
+    A class with ``auto_attribs=True`` has an ``attr.ib()`` without a type
+    annotation.
+
+    .. versionadded:: 17.3.0
+    """
+
+
+class PythonTooOldError(RuntimeError):
+    """
+    An ``attrs`` feature requiring a newer Python version was used.
+
+    .. versionadded:: 18.2.0
+    """
+
+
+class NotCallableError(TypeError):
+    """
+    An ``attr.ib()`` requiring a callable has been set with a value that is
+    not callable.
+
+    .. 
versionadded:: 19.2.0 + """ + + def __init__(self, msg, value): + super(TypeError, self).__init__(msg, value) + self.msg = msg + self.value = value + + def __str__(self): + return str(self.msg) diff --git a/openpype/hosts/fusion/vendor/attr/exceptions.pyi b/openpype/hosts/fusion/vendor/attr/exceptions.pyi new file mode 100644 index 0000000000..a800fb26bb --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/exceptions.pyi @@ -0,0 +1,18 @@ +from typing import Any + + +class FrozenError(AttributeError): + msg: str = ... + +class FrozenInstanceError(FrozenError): ... +class FrozenAttributeError(FrozenError): ... +class AttrsAttributeNotFoundError(ValueError): ... +class NotAnAttrsClassError(ValueError): ... +class DefaultAlreadySetError(RuntimeError): ... +class UnannotatedAttributeError(RuntimeError): ... +class PythonTooOldError(RuntimeError): ... + +class NotCallableError(TypeError): + msg: str = ... + value: Any = ... + def __init__(self, msg: str, value: Any) -> None: ... diff --git a/openpype/hosts/fusion/vendor/attr/filters.py b/openpype/hosts/fusion/vendor/attr/filters.py new file mode 100644 index 0000000000..dc47e8fa38 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/filters.py @@ -0,0 +1,52 @@ +""" +Commonly useful filters for `attr.asdict`. +""" + +from __future__ import absolute_import, division, print_function + +from ._compat import isclass +from ._make import Attribute + + +def _split_what(what): + """ + Returns a tuple of `frozenset`s of classes and attributes. + """ + return ( + frozenset(cls for cls in what if isclass(cls)), + frozenset(cls for cls in what if isinstance(cls, Attribute)), + ) + + +def include(*what): + """ + Whitelist *what*. + + :param what: What to whitelist. + :type what: `list` of `type` or `attr.Attribute`\\ s + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def include_(attribute, value): + return value.__class__ in cls or attribute in attrs + + return include_ + + +def exclude(*what): + """ + Blacklist *what*. + + :param what: What to blacklist. + :type what: `list` of classes or `attr.Attribute`\\ s. + + :rtype: `callable` + """ + cls, attrs = _split_what(what) + + def exclude_(attribute, value): + return value.__class__ not in cls and attribute not in attrs + + return exclude_ diff --git a/openpype/hosts/fusion/vendor/attr/filters.pyi b/openpype/hosts/fusion/vendor/attr/filters.pyi new file mode 100644 index 0000000000..f7b63f1bb4 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/filters.pyi @@ -0,0 +1,7 @@ +from typing import Any, Union + +from . import Attribute, _FilterType + + +def include(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... +def exclude(*what: Union[type, Attribute[Any]]) -> _FilterType[Any]: ... diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/__init__.py b/openpype/hosts/fusion/vendor/attr/py.typed similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/tests/__init__.py rename to openpype/hosts/fusion/vendor/attr/py.typed diff --git a/openpype/hosts/fusion/vendor/attr/setters.py b/openpype/hosts/fusion/vendor/attr/setters.py new file mode 100644 index 0000000000..240014b3c1 --- /dev/null +++ b/openpype/hosts/fusion/vendor/attr/setters.py @@ -0,0 +1,77 @@ +""" +Commonly used hooks for on_setattr. +""" + +from __future__ import absolute_import, division, print_function + +from . import _config +from .exceptions import FrozenAttributeError + + +def pipe(*setters): + """ + Run all *setters* and return the return value of the last one. + + .. 
versionadded:: 20.1.0
+    """
+
+    def wrapped_pipe(instance, attrib, new_value):
+        rv = new_value
+
+        for setter in setters:
+            rv = setter(instance, attrib, rv)
+
+        return rv
+
+    return wrapped_pipe
+
+
+def frozen(_, __, ___):
+    """
+    Prevent an attribute from being modified.
+
+    .. versionadded:: 20.1.0
+    """
+    raise FrozenAttributeError()
+
+
+def validate(instance, attrib, new_value):
+    """
+    Run *attrib*'s validator on *new_value* if it has one.
+
+    .. versionadded:: 20.1.0
+    """
+    if _config._run_validators is False:
+        return new_value
+
+    v = attrib.validator
+    if not v:
+        return new_value
+
+    v(instance, attrib, new_value)
+
+    return new_value
+
+
+def convert(instance, attrib, new_value):
+    """
+    Run *attrib*'s converter -- if it has one -- on *new_value* and return the
+    result.
+
+    .. versionadded:: 20.1.0
+    """
+    c = attrib.converter
+    if c:
+        return c(new_value)
+
+    return new_value
+
+
+NO_OP = object()
+"""
+Sentinel for disabling class-wide *on_setattr* hooks for certain attributes.
+
+Does not work in `pipe` or within lists.
+
+.. versionadded:: 20.1.0
+"""
diff --git a/openpype/hosts/fusion/vendor/attr/setters.pyi b/openpype/hosts/fusion/vendor/attr/setters.pyi
new file mode 100644
index 0000000000..a921e07deb
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/setters.pyi
@@ -0,0 +1,20 @@
+from typing import Any, NewType, NoReturn, TypeVar, cast
+
+from . import Attribute, _OnSetAttrType
+
+
+_T = TypeVar("_T")
+
+def frozen(
+    instance: Any, attribute: Attribute[Any], new_value: Any
+) -> NoReturn: ...
+def pipe(*setters: _OnSetAttrType) -> _OnSetAttrType: ...
+def validate(instance: Any, attribute: Attribute[_T], new_value: _T) -> _T: ...
+
+# convert is allowed to return Any, because they can be chained using pipe.
+def convert(
+    instance: Any, attribute: Attribute[Any], new_value: Any
+) -> Any: ...
+
+_NoOpType = NewType("_NoOpType", object)
+NO_OP: _NoOpType
diff --git a/openpype/hosts/fusion/vendor/attr/validators.py b/openpype/hosts/fusion/vendor/attr/validators.py
new file mode 100644
index 0000000000..b9a73054e9
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/validators.py
@@ -0,0 +1,379 @@
+"""
+Commonly useful validators.
+"""
+
+from __future__ import absolute_import, division, print_function
+
+import re
+
+from ._make import _AndValidator, and_, attrib, attrs
+from .exceptions import NotCallableError
+
+
+__all__ = [
+    "and_",
+    "deep_iterable",
+    "deep_mapping",
+    "in_",
+    "instance_of",
+    "is_callable",
+    "matches_re",
+    "optional",
+    "provides",
+]
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _InstanceOfValidator(object):
+    type = attrib()
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not isinstance(value, self.type):
+            raise TypeError(
+                "'{name}' must be {type!r} (got {value!r} that is a "
+                "{actual!r}).".format(
+                    name=attr.name,
+                    type=self.type,
+                    actual=value.__class__,
+                    value=value,
+                ),
+                attr,
+                self.type,
+                value,
+            )
+
+    def __repr__(self):
+        return "<instance_of validator for type {type!r}>".format(
+            type=self.type
+        )
+
+
+def instance_of(type):
+    """
+    A validator that raises a `TypeError` if the initializer is called with a
+    wrong type for this particular attribute (checks are performed using
+    `isinstance`, therefore it's also valid to pass a tuple of types).
+
+    :param type: The type to check for.
+    :type type: type or tuple of types
+
+    :raises TypeError: With a human readable error message, the attribute
+        (of type `attr.Attribute`), the expected type, and the value it
+        got.
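+
+    A minimal sketch (illustrative only; ``C`` is a hypothetical class):
+
+    >>> import attr
+    >>> @attr.s
+    ... class C(object):
+    ...     x = attr.ib(validator=attr.validators.instance_of(int))
+    >>> C(1)
+    C(x=1)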
+ """ + return _InstanceOfValidator(type) + + +@attrs(repr=False, frozen=True, slots=True) +class _MatchesReValidator(object): + regex = attrib() + flags = attrib() + match_func = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not self.match_func(value): + raise ValueError( + "'{name}' must match regex {regex!r}" + " ({value!r} doesn't)".format( + name=attr.name, regex=self.regex.pattern, value=value + ), + attr, + self.regex, + value, + ) + + def __repr__(self): + return "".format( + regex=self.regex + ) + + +def matches_re(regex, flags=0, func=None): + r""" + A validator that raises `ValueError` if the initializer is called + with a string that doesn't match *regex*. + + :param str regex: a regex string to match against + :param int flags: flags that will be passed to the underlying re function + (default 0) + :param callable func: which underlying `re` function to call (options + are `re.fullmatch`, `re.search`, `re.match`, default + is ``None`` which means either `re.fullmatch` or an emulation of + it on Python 2). For performance reasons, they won't be used directly + but on a pre-`re.compile`\ ed pattern. + + .. versionadded:: 19.2.0 + """ + fullmatch = getattr(re, "fullmatch", None) + valid_funcs = (fullmatch, None, re.search, re.match) + if func not in valid_funcs: + raise ValueError( + "'func' must be one of %s." + % ( + ", ".join( + sorted( + e and e.__name__ or "None" for e in set(valid_funcs) + ) + ), + ) + ) + + pattern = re.compile(regex, flags) + if func is re.match: + match_func = pattern.match + elif func is re.search: + match_func = pattern.search + else: + if fullmatch: + match_func = pattern.fullmatch + else: + pattern = re.compile(r"(?:{})\Z".format(regex), flags) + match_func = pattern.match + + return _MatchesReValidator(pattern, flags, match_func) + + +@attrs(repr=False, slots=True, hash=True) +class _ProvidesValidator(object): + interface = attrib() + + def __call__(self, inst, attr, value): + """ + We use a callable class to be able to change the ``__repr__``. + """ + if not self.interface.providedBy(value): + raise TypeError( + "'{name}' must provide {interface!r} which {value!r} " + "doesn't.".format( + name=attr.name, interface=self.interface, value=value + ), + attr, + self.interface, + value, + ) + + def __repr__(self): + return "".format( + interface=self.interface + ) + + +def provides(interface): + """ + A validator that raises a `TypeError` if the initializer is called + with an object that does not provide the requested *interface* (checks are + performed using ``interface.providedBy(value)`` (see `zope.interface + `_). + + :param interface: The interface to check for. + :type interface: ``zope.interface.Interface`` + + :raises TypeError: With a human readable error message, the attribute + (of type `attr.Attribute`), the expected interface, and the + value it got. + """ + return _ProvidesValidator(interface) + + +@attrs(repr=False, slots=True, hash=True) +class _OptionalValidator(object): + validator = attrib() + + def __call__(self, inst, attr, value): + if value is None: + return + + self.validator(inst, attr, value) + + def __repr__(self): + return "".format( + what=repr(self.validator) + ) + + +def optional(validator): + """ + A validator that makes an attribute optional. An optional attribute is one + which can be set to ``None`` in addition to satisfying the requirements of + the sub-validator. 
+
+    :param validator: A validator (or a list of validators) that is used for
+        non-``None`` values.
+    :type validator: callable or `list` of callables.
+
+    .. versionadded:: 15.1.0
+    .. versionchanged:: 17.1.0 *validator* can be a list of validators.
+    """
+    if isinstance(validator, list):
+        return _OptionalValidator(_AndValidator(validator))
+    return _OptionalValidator(validator)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _InValidator(object):
+    options = attrib()
+
+    def __call__(self, inst, attr, value):
+        try:
+            in_options = value in self.options
+        except TypeError:  # e.g. `1 in "abc"`
+            in_options = False
+
+        if not in_options:
+            raise ValueError(
+                "'{name}' must be in {options!r} (got {value!r})".format(
+                    name=attr.name, options=self.options, value=value
+                )
+            )
+
+    def __repr__(self):
+        return "<in_ validator with options {options!r}>".format(
+            options=self.options
+        )
+
+
+def in_(options):
+    """
+    A validator that raises a `ValueError` if the initializer is called with a
+    value that does not belong in the options provided.  The check is
+    performed using ``value in options``.
+
+    :param options: Allowed options.
+    :type options: list, tuple, `enum.Enum`, ...
+
+    :raises ValueError: With a human readable error message, the attribute (of
+        type `attr.Attribute`), the expected options, and the value it
+        got.
+
+    .. versionadded:: 17.1.0
+    """
+    return _InValidator(options)
+
+
+@attrs(repr=False, slots=False, hash=True)
+class _IsCallableValidator(object):
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if not callable(value):
+            message = (
+                "'{name}' must be callable "
+                "(got {value!r} that is a {actual!r})."
+            )
+            raise NotCallableError(
+                msg=message.format(
+                    name=attr.name, value=value, actual=value.__class__
+                ),
+                value=value,
+            )
+
+    def __repr__(self):
+        return "<is_callable validator>"
+
+
+def is_callable():
+    """
+    A validator that raises an `attr.exceptions.NotCallableError` if the
+    initializer is called with a value for this particular attribute
+    that is not callable.
+
+    .. versionadded:: 19.1.0
+
+    :raises `attr.exceptions.NotCallableError`: With a human readable error
+        message containing the attribute (`attr.Attribute`) name,
+        and the value it got.
+    """
+    return _IsCallableValidator()
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _DeepIterable(object):
+    member_validator = attrib(validator=is_callable())
+    iterable_validator = attrib(
+        default=None, validator=optional(is_callable())
+    )
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if self.iterable_validator is not None:
+            self.iterable_validator(inst, attr, value)
+
+        for member in value:
+            self.member_validator(inst, attr, member)
+
+    def __repr__(self):
+        iterable_identifier = (
+            ""
+            if self.iterable_validator is None
+            else " {iterable!r}".format(iterable=self.iterable_validator)
+        )
+        return (
+            "<deep_iterable validator for{iterable_identifier}"
+            " iterables of {member!r}>"
+        ).format(
+            iterable_identifier=iterable_identifier,
+            member=self.member_validator,
+        )
+
+
+def deep_iterable(member_validator, iterable_validator=None):
+    """
+    A validator that performs deep validation of an iterable.
+
+    :param member_validator: Validator to apply to iterable members
+    :param iterable_validator: Validator to apply to iterable itself
+        (optional)
+
+    .. versionadded:: 19.1.0
+
+    :raises TypeError: if any sub-validators fail
+    """
+    return _DeepIterable(member_validator, iterable_validator)
+
+
+@attrs(repr=False, slots=True, hash=True)
+class _DeepMapping(object):
+    key_validator = attrib(validator=is_callable())
+    value_validator = attrib(validator=is_callable())
+    mapping_validator = attrib(default=None, validator=optional(is_callable()))
+
+    def __call__(self, inst, attr, value):
+        """
+        We use a callable class to be able to change the ``__repr__``.
+        """
+        if self.mapping_validator is not None:
+            self.mapping_validator(inst, attr, value)
+
+        for key in value:
+            self.key_validator(inst, attr, key)
+            self.value_validator(inst, attr, value[key])
+
+    def __repr__(self):
+        return (
+            "<deep_mapping validator for objects mapping "
+            "{key!r} to {value!r}>"
+        ).format(key=self.key_validator, value=self.value_validator)
+
+
+def deep_mapping(key_validator, value_validator, mapping_validator=None):
+    """
+    A validator that performs deep validation of a dictionary.
+
+    :param key_validator: Validator to apply to dictionary keys
+    :param value_validator: Validator to apply to dictionary values
+    :param mapping_validator: Validator to apply to top-level mapping
+        attribute (optional)
+
+    .. versionadded:: 19.1.0
+
+    :raises TypeError: if any sub-validators fail
+    """
+    return _DeepMapping(key_validator, value_validator, mapping_validator)
diff --git a/openpype/hosts/fusion/vendor/attr/validators.pyi b/openpype/hosts/fusion/vendor/attr/validators.pyi
new file mode 100644
index 0000000000..fe92aac421
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/attr/validators.pyi
@@ -0,0 +1,68 @@
+from typing import (
+    Any,
+    AnyStr,
+    Callable,
+    Container,
+    Iterable,
+    List,
+    Mapping,
+    Match,
+    Optional,
+    Tuple,
+    Type,
+    TypeVar,
+    Union,
+    overload,
+)
+
+from . import _ValidatorType
+
+
+_T = TypeVar("_T")
+_T1 = TypeVar("_T1")
+_T2 = TypeVar("_T2")
+_T3 = TypeVar("_T3")
+_I = TypeVar("_I", bound=Iterable)
+_K = TypeVar("_K")
+_V = TypeVar("_V")
+_M = TypeVar("_M", bound=Mapping)
+
+# To be more precise on instance_of use some overloads.
+# If there are more than 3 items in the tuple then we fall back to Any
+@overload
+def instance_of(type: Type[_T]) -> _ValidatorType[_T]: ...
+@overload
+def instance_of(type: Tuple[Type[_T]]) -> _ValidatorType[_T]: ...
+@overload
+def instance_of(
+    type: Tuple[Type[_T1], Type[_T2]]
+) -> _ValidatorType[Union[_T1, _T2]]: ...
+@overload
+def instance_of(
+    type: Tuple[Type[_T1], Type[_T2], Type[_T3]]
+) -> _ValidatorType[Union[_T1, _T2, _T3]]: ...
+@overload
+def instance_of(type: Tuple[type, ...]) -> _ValidatorType[Any]: ...
+def provides(interface: Any) -> _ValidatorType[Any]: ...
+def optional(
+    validator: Union[_ValidatorType[_T], List[_ValidatorType[_T]]]
+) -> _ValidatorType[Optional[_T]]: ...
+def in_(options: Container[_T]) -> _ValidatorType[_T]: ...
+def and_(*validators: _ValidatorType[_T]) -> _ValidatorType[_T]: ...
+def matches_re(
+    regex: AnyStr,
+    flags: int = ...,
+    func: Optional[
+        Callable[[AnyStr, AnyStr, int], Optional[Match[AnyStr]]]
+    ] = ...,
+) -> _ValidatorType[AnyStr]: ...
+def deep_iterable(
+    member_validator: _ValidatorType[_T],
+    iterable_validator: Optional[_ValidatorType[_I]] = ...,
+) -> _ValidatorType[_I]: ...
+def deep_mapping(
+    key_validator: _ValidatorType[_K],
+    value_validator: _ValidatorType[_V],
+    mapping_validator: Optional[_ValidatorType[_M]] = ...,
+) -> _ValidatorType[_M]: ...
+def is_callable() -> _ValidatorType[_T]: ...
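+
+# A minimal composition sketch for the validators stubbed above (illustrative
+# only; ``Point`` is a hypothetical class):
+#
+#   >>> import attr
+#   >>> from attr import validators
+#   >>> @attr.s
+#   ... class Point(object):
+#   ...     x = attr.ib(validator=validators.and_(
+#   ...         validators.instance_of(int), validators.in_(range(10))))
+#   >>> Point(3)
+#   Point(x=3)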
diff --git a/openpype/hosts/fusion/vendor/urllib3/__init__.py b/openpype/hosts/fusion/vendor/urllib3/__init__.py
new file mode 100644
index 0000000000..fe86b59d78
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/urllib3/__init__.py
@@ -0,0 +1,85 @@
+"""
+Python HTTP library with thread-safe connection pooling, file post support,
+user-friendly features, and more.
+"""
+from __future__ import absolute_import
+
+# Set default logging handler to avoid "No handler found" warnings.
+import logging
+import warnings
+from logging import NullHandler
+
+from . import exceptions
+from ._version import __version__
+from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url
+from .filepost import encode_multipart_formdata
+from .poolmanager import PoolManager, ProxyManager, proxy_from_url
+from .response import HTTPResponse
+from .util.request import make_headers
+from .util.retry import Retry
+from .util.timeout import Timeout
+from .util.url import get_host
+
+__author__ = "Andrey Petrov (andrey.petrov@shazow.net)"
+__license__ = "MIT"
+__version__ = __version__
+
+__all__ = (
+    "HTTPConnectionPool",
+    "HTTPSConnectionPool",
+    "PoolManager",
+    "ProxyManager",
+    "HTTPResponse",
+    "Retry",
+    "Timeout",
+    "add_stderr_logger",
+    "connection_from_url",
+    "disable_warnings",
+    "encode_multipart_formdata",
+    "get_host",
+    "make_headers",
+    "proxy_from_url",
)
+
+logging.getLogger(__name__).addHandler(NullHandler())
+
+
+def add_stderr_logger(level=logging.DEBUG):
+    """
+    Helper for quickly adding a StreamHandler to the logger.  Useful for
+    debugging.
+
+    Returns the handler after adding it.
+    """
+    # This method needs to be in this __init__.py to get the __name__ correct
+    # even if urllib3 is vendored within another package.
+    logger = logging.getLogger(__name__)
+    handler = logging.StreamHandler()
+    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
+    logger.addHandler(handler)
+    logger.setLevel(level)
+    logger.debug("Added a stderr logging handler to logger: %s", __name__)
+    return handler
+
+
+# ... Clean up.
+del NullHandler
+
+
+# All warning filters *must* be appended unless you're really certain that they
+# shouldn't be: otherwise, it's very hard for users to use most Python
+# mechanisms to silence them.
+# SecurityWarnings always go off by default.
+warnings.simplefilter("always", exceptions.SecurityWarning, append=True)
+# SubjectAltNameWarnings should go off once per host.
+warnings.simplefilter("default", exceptions.SubjectAltNameWarning, append=True)
+# InsecurePlatformWarnings don't vary between requests, so we keep the filter
+# at its default.
+warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True)
+# SNIMissingWarnings should go off only once.
+warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True)
+
+
+def disable_warnings(category=exceptions.HTTPWarning):
+    """
+    Helper for quickly disabling all urllib3 warnings.
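+
+    A minimal usage sketch (illustrative only)::
+
+        import urllib3
+
+        urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)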
+ """ + warnings.simplefilter("ignore", category) diff --git a/openpype/hosts/fusion/vendor/urllib3/_collections.py b/openpype/hosts/fusion/vendor/urllib3/_collections.py new file mode 100644 index 0000000000..da9857e986 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/_collections.py @@ -0,0 +1,337 @@ +from __future__ import absolute_import + +try: + from collections.abc import Mapping, MutableMapping +except ImportError: + from collections import Mapping, MutableMapping +try: + from threading import RLock +except ImportError: # Platform-specific: No threads available + + class RLock: + def __enter__(self): + pass + + def __exit__(self, exc_type, exc_value, traceback): + pass + + +from collections import OrderedDict + +from .exceptions import InvalidHeader +from .packages import six +from .packages.six import iterkeys, itervalues + +__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] + + +_Null = object() + + +class RecentlyUsedContainer(MutableMapping): + """ + Provides a thread-safe dict-like container which maintains up to + ``maxsize`` keys while throwing away the least-recently-used keys beyond + ``maxsize``. + + :param maxsize: + Maximum number of recent elements to retain. + + :param dispose_func: + Every time an item is evicted from the container, + ``dispose_func(value)`` is called. Callback which will get called + """ + + ContainerCls = OrderedDict + + def __init__(self, maxsize=10, dispose_func=None): + self._maxsize = maxsize + self.dispose_func = dispose_func + + self._container = self.ContainerCls() + self.lock = RLock() + + def __getitem__(self, key): + # Re-insert the item, moving it to the end of the eviction line. + with self.lock: + item = self._container.pop(key) + self._container[key] = item + return item + + def __setitem__(self, key, value): + evicted_value = _Null + with self.lock: + # Possibly evict the existing value of 'key' + evicted_value = self._container.get(key, _Null) + self._container[key] = value + + # If we didn't evict an existing value, we might have to evict the + # least recently used item from the beginning of the container. + if len(self._container) > self._maxsize: + _key, evicted_value = self._container.popitem(last=False) + + if self.dispose_func and evicted_value is not _Null: + self.dispose_func(evicted_value) + + def __delitem__(self, key): + with self.lock: + value = self._container.pop(key) + + if self.dispose_func: + self.dispose_func(value) + + def __len__(self): + with self.lock: + return len(self._container) + + def __iter__(self): + raise NotImplementedError( + "Iteration over this class is unlikely to be threadsafe." + ) + + def clear(self): + with self.lock: + # Copy pointers to all values, then wipe the mapping + values = list(itervalues(self._container)) + self._container.clear() + + if self.dispose_func: + for value in values: + self.dispose_func(value) + + def keys(self): + with self.lock: + return list(iterkeys(self._container)) + + +class HTTPHeaderDict(MutableMapping): + """ + :param headers: + An iterable of field-value pairs. Must not contain multiple field names + when compared case-insensitively. + + :param kwargs: + Additional field-value pairs to pass in to ``dict.update``. + + A ``dict`` like container for storing HTTP Headers. + + Field names are stored and compared case-insensitively in compliance with + RFC 7230. Iteration provides the first case-sensitive key seen for each + case-insensitive pair. 
+ + Using ``__setitem__`` syntax overwrites fields that compare equal + case-insensitively in order to maintain ``dict``'s api. For fields that + compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` + in a loop. + + If multiple fields that are equal case-insensitively are passed to the + constructor or ``.update``, the behavior is undefined and some will be + lost. + + >>> headers = HTTPHeaderDict() + >>> headers.add('Set-Cookie', 'foo=bar') + >>> headers.add('set-cookie', 'baz=quxx') + >>> headers['content-length'] = '7' + >>> headers['SET-cookie'] + 'foo=bar, baz=quxx' + >>> headers['Content-Length'] + '7' + """ + + def __init__(self, headers=None, **kwargs): + super(HTTPHeaderDict, self).__init__() + self._container = OrderedDict() + if headers is not None: + if isinstance(headers, HTTPHeaderDict): + self._copy_from(headers) + else: + self.extend(headers) + if kwargs: + self.extend(kwargs) + + def __setitem__(self, key, val): + self._container[key.lower()] = [key, val] + return self._container[key.lower()] + + def __getitem__(self, key): + val = self._container[key.lower()] + return ", ".join(val[1:]) + + def __delitem__(self, key): + del self._container[key.lower()] + + def __contains__(self, key): + return key.lower() in self._container + + def __eq__(self, other): + if not isinstance(other, Mapping) and not hasattr(other, "keys"): + return False + if not isinstance(other, type(self)): + other = type(self)(other) + return dict((k.lower(), v) for k, v in self.itermerged()) == dict( + (k.lower(), v) for k, v in other.itermerged() + ) + + def __ne__(self, other): + return not self.__eq__(other) + + if six.PY2: # Python 2 + iterkeys = MutableMapping.iterkeys + itervalues = MutableMapping.itervalues + + __marker = object() + + def __len__(self): + return len(self._container) + + def __iter__(self): + # Only provide the originally cased names + for vals in self._container.values(): + yield vals[0] + + def pop(self, key, default=__marker): + """D.pop(k[,d]) -> v, remove specified key and return the corresponding value. + If key is not found, d is returned if given, otherwise KeyError is raised. + """ + # Using the MutableMapping function directly fails due to the private marker. + # Using ordinary dict.pop would expose the internal structures. + # So let's reinvent the wheel. + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def discard(self, key): + try: + del self[key] + except KeyError: + pass + + def add(self, key, val): + """Adds a (name, value) pair, doesn't overwrite the value if it already + exists. + + >>> headers = HTTPHeaderDict(foo='bar') + >>> headers.add('Foo', 'baz') + >>> headers['foo'] + 'bar, baz' + """ + key_lower = key.lower() + new_vals = [key, val] + # Keep the common case aka no item present as fast as possible + vals = self._container.setdefault(key_lower, new_vals) + if new_vals is not vals: + vals.append(val) + + def extend(self, *args, **kwargs): + """Generic import function for any type of header-like object. 
Adapted version of MutableMapping.update in order to insert items
+        with self.add instead of self.__setitem__
+        """
+        if len(args) > 1:
+            raise TypeError(
+                "extend() takes at most 1 positional "
+                "argument ({0} given)".format(len(args))
+            )
+        other = args[0] if len(args) >= 1 else ()
+
+        if isinstance(other, HTTPHeaderDict):
+            for key, val in other.iteritems():
+                self.add(key, val)
+        elif isinstance(other, Mapping):
+            for key in other:
+                self.add(key, other[key])
+        elif hasattr(other, "keys"):
+            for key in other.keys():
+                self.add(key, other[key])
+        else:
+            for key, value in other:
+                self.add(key, value)
+
+        for key, value in kwargs.items():
+            self.add(key, value)
+
+    def getlist(self, key, default=__marker):
+        """Returns a list of all the values for the named field.  Returns an
+        empty list if the key doesn't exist."""
+        try:
+            vals = self._container[key.lower()]
+        except KeyError:
+            if default is self.__marker:
+                return []
+            return default
+        else:
+            return vals[1:]
+
+    # Backwards compatibility for httplib
+    getheaders = getlist
+    getallmatchingheaders = getlist
+    iget = getlist
+
+    # Backwards compatibility for http.cookiejar
+    get_all = getlist
+
+    def __repr__(self):
+        return "%s(%s)" % (type(self).__name__, dict(self.itermerged()))
+
+    def _copy_from(self, other):
+        for key in other:
+            val = other.getlist(key)
+            if isinstance(val, list):
+                # Don't need to convert tuples
+                val = list(val)
+            self._container[key.lower()] = [key] + val
+
+    def copy(self):
+        clone = type(self)()
+        clone._copy_from(self)
+        return clone
+
+    def iteritems(self):
+        """Iterate over all header lines, including duplicate ones."""
+        for key in self:
+            vals = self._container[key.lower()]
+            for val in vals[1:]:
+                yield vals[0], val
+
+    def itermerged(self):
+        """Iterate over all headers, merging duplicate ones together."""
+        for key in self:
+            val = self._container[key.lower()]
+            yield val[0], ", ".join(val[1:])
+
+    def items(self):
+        return list(self.iteritems())
+
+    @classmethod
+    def from_httplib(cls, message):  # Python 2
+        """Read headers from a Python 2 httplib message object."""
+        # python2.7 does not expose a proper API for exporting multiheaders
+        # efficiently.  This function re-reads raw lines from the message
+        # object and extracts the multiheaders properly.
+        obs_fold_continued_leaders = (" ", "\t")
+        headers = []
+
+        for line in message.headers:
+            if line.startswith(obs_fold_continued_leaders):
+                if not headers:
+                    # We received a header line that starts with OWS as
+                    # described in RFC-7230 S3.2.4.  This indicates a
+                    # multiline header, but there exists no previous header
+                    # to which we can attach it.
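+                    # (For example, "X-Folded: a" followed by a raw line
+                    #  " b" is one folded header; the continuation line
+                    #  starting with whitespace must be re-attached to the
+                    #  preceding "X-Folded" entry.)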
+ raise InvalidHeader( + "Header continuation with no previous header: %s" % line + ) + else: + key, value = headers[-1] + headers[-1] = (key, value + " " + line.strip()) + continue + + key, value = line.split(":", 1) + headers.append((key, value.strip())) + + return cls(headers) diff --git a/openpype/hosts/fusion/vendor/urllib3/_version.py b/openpype/hosts/fusion/vendor/urllib3/_version.py new file mode 100644 index 0000000000..e8ebee957f --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/_version.py @@ -0,0 +1,2 @@ +# This file is protected via CODEOWNERS +__version__ = "1.26.6" diff --git a/openpype/hosts/fusion/vendor/urllib3/connection.py b/openpype/hosts/fusion/vendor/urllib3/connection.py new file mode 100644 index 0000000000..4c996659c8 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/connection.py @@ -0,0 +1,539 @@ +from __future__ import absolute_import + +import datetime +import logging +import os +import re +import socket +import warnings +from socket import error as SocketError +from socket import timeout as SocketTimeout + +from .packages import six +from .packages.six.moves.http_client import HTTPConnection as _HTTPConnection +from .packages.six.moves.http_client import HTTPException # noqa: F401 +from .util.proxy import create_proxy_ssl_context + +try: # Compiled with SSL? + import ssl + + BaseSSLError = ssl.SSLError +except (ImportError, AttributeError): # Platform-specific: No SSL. + ssl = None + + class BaseSSLError(BaseException): + pass + + +try: + # Python 3: not a no-op, we're adding this to the namespace so it can be imported. + ConnectionError = ConnectionError +except NameError: + # Python 2 + class ConnectionError(Exception): + pass + + +try: # Python 3: + # Not a no-op, we're adding this to the namespace so it can be imported. + BrokenPipeError = BrokenPipeError +except NameError: # Python 2: + + class BrokenPipeError(Exception): + pass + + +from ._collections import HTTPHeaderDict # noqa (historical, removed in v2) +from ._version import __version__ +from .exceptions import ( + ConnectTimeoutError, + NewConnectionError, + SubjectAltNameWarning, + SystemTimeWarning, +) +from .packages.ssl_match_hostname import CertificateError, match_hostname +from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection +from .util.ssl_ import ( + assert_fingerprint, + create_urllib3_context, + resolve_cert_reqs, + resolve_ssl_version, + ssl_wrap_socket, +) + +log = logging.getLogger(__name__) + +port_by_scheme = {"http": 80, "https": 443} + +# When it comes time to update this value as a part of regular maintenance +# (ie test_recent_date is failing) update it to ~6 months before the current date. +RECENT_DATE = datetime.date(2020, 7, 1) + +_CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") + + +class HTTPConnection(_HTTPConnection, object): + """ + Based on :class:`http.client.HTTPConnection` but provides an extra constructor + backwards-compatibility layer between older and newer Pythons. + + Additional keyword parameters are used to configure attributes of the connection. + Accepted parameters include: + + - ``strict``: See the documentation on :class:`urllib3.connectionpool.HTTPConnectionPool` + - ``source_address``: Set the source address for the current connection. + - ``socket_options``: Set specific options on the underlying socket. If not specified, then + defaults are loaded from ``HTTPConnection.default_socket_options`` which includes disabling + Nagle's algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy. 
+ + For example, if you wish to enable TCP Keep Alive in addition to the defaults, + you might pass: + + .. code-block:: python + + HTTPConnection.default_socket_options + [ + (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), + ] + + Or you may want to disable the defaults by passing an empty list (e.g., ``[]``). + """ + + default_port = port_by_scheme["http"] + + #: Disable Nagle's algorithm by default. + #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]`` + default_socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)] + + #: Whether this connection verifies the host's certificate. + is_verified = False + + def __init__(self, *args, **kw): + if not six.PY2: + kw.pop("strict", None) + + # Pre-set source_address. + self.source_address = kw.get("source_address") + + #: The socket options provided by the user. If no options are + #: provided, we use the default options. + self.socket_options = kw.pop("socket_options", self.default_socket_options) + + # Proxy options provided by the user. + self.proxy = kw.pop("proxy", None) + self.proxy_config = kw.pop("proxy_config", None) + + _HTTPConnection.__init__(self, *args, **kw) + + @property + def host(self): + """ + Getter method to remove any trailing dots that indicate the hostname is an FQDN. + + In general, SSL certificates don't include the trailing dot indicating a + fully-qualified domain name, and thus, they don't validate properly when + checked against a domain name that includes the dot. In addition, some + servers may not expect to receive the trailing dot when provided. + + However, the hostname with trailing dot is critical to DNS resolution; doing a + lookup with the trailing dot will properly only resolve the appropriate FQDN, + whereas a lookup without a trailing dot will search the system's search domain + list. Thus, it's important to keep the original host around for use only in + those cases where it's appropriate (i.e., when doing DNS lookup to establish the + actual TCP connection across which we're going to send HTTP requests). + """ + return self._dns_host.rstrip(".") + + @host.setter + def host(self, value): + """ + Setter for the `host` property. + + We assume that only urllib3 uses the _dns_host attribute; httplib itself + only uses `host`, and it seems reasonable that other libraries follow suit. + """ + self._dns_host = value + + def _new_conn(self): + """Establish a socket connection and set nodelay settings on it. + + :return: New socket connection. + """ + extra_kw = {} + if self.source_address: + extra_kw["source_address"] = self.source_address + + if self.socket_options: + extra_kw["socket_options"] = self.socket_options + + try: + conn = connection.create_connection( + (self._dns_host, self.port), self.timeout, **extra_kw + ) + + except SocketTimeout: + raise ConnectTimeoutError( + self, + "Connection to %s timed out. (connect timeout=%s)" + % (self.host, self.timeout), + ) + + except SocketError as e: + raise NewConnectionError( + self, "Failed to establish a new connection: %s" % e + ) + + return conn + + def _is_using_tunnel(self): + # Google App Engine's httplib does not define _tunnel_host + return getattr(self, "_tunnel_host", None) + + def _prepare_conn(self, conn): + self.sock = conn + if self._is_using_tunnel(): + # TODO: Fix tunnel so it doesn't depend on self.sock state. 
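+            # _tunnel() issues the proxy CONNECT request over self.sock;
+            # once the tunnel is up, the socket is tied to a single origin,
+            # which is why auto_open is cleared just below.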
+ self._tunnel() + # Mark this connection as not reusable + self.auto_open = 0 + + def connect(self): + conn = self._new_conn() + self._prepare_conn(conn) + + def putrequest(self, method, url, *args, **kwargs): + """ """ + # Empty docstring because the indentation of CPython's implementation + # is broken but we don't want this method in our documentation. + match = _CONTAINS_CONTROL_CHAR_RE.search(method) + if match: + raise ValueError( + "Method cannot contain non-token characters %r (found at least %r)" + % (method, match.group()) + ) + + return _HTTPConnection.putrequest(self, method, url, *args, **kwargs) + + def putheader(self, header, *values): + """ """ + if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): + _HTTPConnection.putheader(self, header, *values) + elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: + raise ValueError( + "urllib3.util.SKIP_HEADER only supports '%s'" + % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),) + ) + + def request(self, method, url, body=None, headers=None): + if headers is None: + headers = {} + else: + # Avoid modifying the headers passed into .request() + headers = headers.copy() + if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): + headers["User-Agent"] = _get_default_user_agent() + super(HTTPConnection, self).request(method, url, body=body, headers=headers) + + def request_chunked(self, method, url, body=None, headers=None): + """ + Alternative to the common request method, which sends the + body with chunked encoding and not as one block + """ + headers = headers or {} + header_keys = set([six.ensure_str(k.lower()) for k in headers]) + skip_accept_encoding = "accept-encoding" in header_keys + skip_host = "host" in header_keys + self.putrequest( + method, url, skip_accept_encoding=skip_accept_encoding, skip_host=skip_host + ) + if "user-agent" not in header_keys: + self.putheader("User-Agent", _get_default_user_agent()) + for header, value in headers.items(): + self.putheader(header, value) + if "transfer-encoding" not in header_keys: + self.putheader("Transfer-Encoding", "chunked") + self.endheaders() + + if body is not None: + stringish_types = six.string_types + (bytes,) + if isinstance(body, stringish_types): + body = (body,) + for chunk in body: + if not chunk: + continue + if not isinstance(chunk, bytes): + chunk = chunk.encode("utf8") + len_str = hex(len(chunk))[2:] + to_send = bytearray(len_str.encode()) + to_send += b"\r\n" + to_send += chunk + to_send += b"\r\n" + self.send(to_send) + + # After the if clause, to always have a closed body + self.send(b"0\r\n\r\n") + + +class HTTPSConnection(HTTPConnection): + """ + Many of the parameters to this constructor are passed to the underlying SSL + socket by means of :py:func:`urllib3.util.ssl_wrap_socket`. 
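+
+    A minimal construction sketch (illustrative only; the host and bundle
+    path are hypothetical):
+
+    .. code-block:: python
+
+        conn = HTTPSConnection("example.com", port=443, timeout=5.0)
+        conn.set_cert(ca_certs="/path/to/ca-bundle.crt")
+        conn.connect()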
+ """ + + default_port = port_by_scheme["https"] + + cert_reqs = None + ca_certs = None + ca_cert_dir = None + ca_cert_data = None + ssl_version = None + assert_fingerprint = None + tls_in_tls_required = False + + def __init__( + self, + host, + port=None, + key_file=None, + cert_file=None, + key_password=None, + strict=None, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + ssl_context=None, + server_hostname=None, + **kw + ): + + HTTPConnection.__init__(self, host, port, strict=strict, timeout=timeout, **kw) + + self.key_file = key_file + self.cert_file = cert_file + self.key_password = key_password + self.ssl_context = ssl_context + self.server_hostname = server_hostname + + # Required property for Google AppEngine 1.9.0 which otherwise causes + # HTTPS requests to go out as HTTP. (See Issue #356) + self._protocol = "https" + + def set_cert( + self, + key_file=None, + cert_file=None, + cert_reqs=None, + key_password=None, + ca_certs=None, + assert_hostname=None, + assert_fingerprint=None, + ca_cert_dir=None, + ca_cert_data=None, + ): + """ + This method should only be called once, before the connection is used. + """ + # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also + # have an SSLContext object in which case we'll use its verify_mode. + if cert_reqs is None: + if self.ssl_context is not None: + cert_reqs = self.ssl_context.verify_mode + else: + cert_reqs = resolve_cert_reqs(None) + + self.key_file = key_file + self.cert_file = cert_file + self.cert_reqs = cert_reqs + self.key_password = key_password + self.assert_hostname = assert_hostname + self.assert_fingerprint = assert_fingerprint + self.ca_certs = ca_certs and os.path.expanduser(ca_certs) + self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) + self.ca_cert_data = ca_cert_data + + def connect(self): + # Add certificate verification + conn = self._new_conn() + hostname = self.host + tls_in_tls = False + + if self._is_using_tunnel(): + if self.tls_in_tls_required: + conn = self._connect_tls_proxy(hostname, conn) + tls_in_tls = True + + self.sock = conn + + # Calls self._set_hostport(), so self.host is + # self._tunnel_host below. + self._tunnel() + # Mark this connection as not reusable + self.auto_open = 0 + + # Override the host with the one we're requesting data from. + hostname = self._tunnel_host + + server_hostname = hostname + if self.server_hostname is not None: + server_hostname = self.server_hostname + + is_time_off = datetime.date.today() < RECENT_DATE + if is_time_off: + warnings.warn( + ( + "System time is way off (before {0}). This will probably " + "lead to SSL verification errors" + ).format(RECENT_DATE), + SystemTimeWarning, + ) + + # Wrap socket using verification with the root certs in + # trusted_root_certs + default_ssl_context = False + if self.ssl_context is None: + default_ssl_context = True + self.ssl_context = create_urllib3_context( + ssl_version=resolve_ssl_version(self.ssl_version), + cert_reqs=resolve_cert_reqs(self.cert_reqs), + ) + + context = self.ssl_context + context.verify_mode = resolve_cert_reqs(self.cert_reqs) + + # Try to load OS default certs if none are given. 
+ # Works well on Windows (requires Python3.4+) + if ( + not self.ca_certs + and not self.ca_cert_dir + and not self.ca_cert_data + and default_ssl_context + and hasattr(context, "load_default_certs") + ): + context.load_default_certs() + + self.sock = ssl_wrap_socket( + sock=conn, + keyfile=self.key_file, + certfile=self.cert_file, + key_password=self.key_password, + ca_certs=self.ca_certs, + ca_cert_dir=self.ca_cert_dir, + ca_cert_data=self.ca_cert_data, + server_hostname=server_hostname, + ssl_context=context, + tls_in_tls=tls_in_tls, + ) + + # If we're using all defaults and the connection + # is TLSv1 or TLSv1.1 we throw a DeprecationWarning + # for the host. + if ( + default_ssl_context + and self.ssl_version is None + and hasattr(self.sock, "version") + and self.sock.version() in {"TLSv1", "TLSv1.1"} + ): + warnings.warn( + "Negotiating TLSv1/TLSv1.1 by default is deprecated " + "and will be disabled in urllib3 v2.0.0. Connecting to " + "'%s' with '%s' can be enabled by explicitly opting-in " + "with 'ssl_version'" % (self.host, self.sock.version()), + DeprecationWarning, + ) + + if self.assert_fingerprint: + assert_fingerprint( + self.sock.getpeercert(binary_form=True), self.assert_fingerprint + ) + elif ( + context.verify_mode != ssl.CERT_NONE + and not getattr(context, "check_hostname", False) + and self.assert_hostname is not False + ): + # While urllib3 attempts to always turn off hostname matching from + # the TLS library, this cannot always be done. So we check whether + # the TLS Library still thinks it's matching hostnames. + cert = self.sock.getpeercert() + if not cert.get("subjectAltName", ()): + warnings.warn( + ( + "Certificate for {0} has no `subjectAltName`, falling back to check for a " + "`commonName` for now. This feature is being removed by major browsers and " + "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " + "for details.)".format(hostname) + ), + SubjectAltNameWarning, + ) + _match_hostname(cert, self.assert_hostname or server_hostname) + + self.is_verified = ( + context.verify_mode == ssl.CERT_REQUIRED + or self.assert_fingerprint is not None + ) + + def _connect_tls_proxy(self, hostname, conn): + """ + Establish a TLS connection to the proxy using the provided SSL context. + """ + proxy_config = self.proxy_config + ssl_context = proxy_config.ssl_context + if ssl_context: + # If the user provided a proxy context, we assume CA and client + # certificates have already been set + return ssl_wrap_socket( + sock=conn, + server_hostname=hostname, + ssl_context=ssl_context, + ) + + ssl_context = create_proxy_ssl_context( + self.ssl_version, + self.cert_reqs, + self.ca_certs, + self.ca_cert_dir, + self.ca_cert_data, + ) + # By default urllib3's SSLContext disables `check_hostname` and uses + # a custom check. For proxies we're good with relying on the default + # verification. + ssl_context.check_hostname = True + + # If no cert was provided, use only the default options for server + # certificate validation + return ssl_wrap_socket( + sock=conn, + ca_certs=self.ca_certs, + ca_cert_dir=self.ca_cert_dir, + ca_cert_data=self.ca_cert_data, + server_hostname=hostname, + ssl_context=ssl_context, + ) + + +def _match_hostname(cert, asserted_hostname): + try: + match_hostname(cert, asserted_hostname) + except CertificateError as e: + log.warning( + "Certificate did not match expected hostname: %s. 
Certificate: %s", + asserted_hostname, + cert, + ) + # Add cert to exception and reraise so client code can inspect + # the cert when catching the exception, if they want to + e._peer_cert = cert + raise + + +def _get_default_user_agent(): + return "python-urllib3/%s" % __version__ + + +class DummyConnection(object): + """Used to detect a failed ConnectionCls import.""" + + pass + + +if not ssl: + HTTPSConnection = DummyConnection # noqa: F811 + + +VerifiedHTTPSConnection = HTTPSConnection diff --git a/openpype/hosts/fusion/vendor/urllib3/connectionpool.py b/openpype/hosts/fusion/vendor/urllib3/connectionpool.py new file mode 100644 index 0000000000..459bbe095b --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/connectionpool.py @@ -0,0 +1,1067 @@ +from __future__ import absolute_import + +import errno +import logging +import socket +import sys +import warnings +from socket import error as SocketError +from socket import timeout as SocketTimeout + +from .connection import ( + BaseSSLError, + BrokenPipeError, + DummyConnection, + HTTPConnection, + HTTPException, + HTTPSConnection, + VerifiedHTTPSConnection, + port_by_scheme, +) +from .exceptions import ( + ClosedPoolError, + EmptyPoolError, + HeaderParsingError, + HostChangedError, + InsecureRequestWarning, + LocationValueError, + MaxRetryError, + NewConnectionError, + ProtocolError, + ProxyError, + ReadTimeoutError, + SSLError, + TimeoutError, +) +from .packages import six +from .packages.six.moves import queue +from .packages.ssl_match_hostname import CertificateError +from .request import RequestMethods +from .response import HTTPResponse +from .util.connection import is_connection_dropped +from .util.proxy import connection_requires_http_tunnel +from .util.queue import LifoQueue +from .util.request import set_file_position +from .util.response import assert_header_parsing +from .util.retry import Retry +from .util.timeout import Timeout +from .util.url import Url, _encode_target +from .util.url import _normalize_host as normalize_host +from .util.url import get_host, parse_url + +xrange = six.moves.xrange + +log = logging.getLogger(__name__) + +_Default = object() + + +# Pool objects +class ConnectionPool(object): + """ + Base class for all connection pools, such as + :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`. + + .. note:: + ConnectionPool.urlopen() does not normalize or percent-encode target URIs + which is useful if your target server doesn't support percent-encoded + target URIs. + """ + + scheme = None + QueueCls = LifoQueue + + def __init__(self, host, port=None): + if not host: + raise LocationValueError("No host specified.") + + self.host = _normalize_host(host, scheme=self.scheme) + self._proxy_host = host.lower() + self.port = port + + def __str__(self): + return "%s(host=%r, port=%r)" % (type(self).__name__, self.host, self.port) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.close() + # Return False to re-raise any potential exceptions + return False + + def close(self): + """ + Close all pooled connections and disable the pool. + """ + pass + + +# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 +_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK} + + +class HTTPConnectionPool(ConnectionPool, RequestMethods): + """ + Thread-safe connection pool for one host. + + :param host: + Host used for this HTTP Connection (e.g. "localhost"), passed into + :class:`http.client.HTTPConnection`. 
+ + :param port: + Port used for this HTTP Connection (None is equivalent to 80), passed + into :class:`http.client.HTTPConnection`. + + :param strict: + Causes BadStatusLine to be raised if the status line can't be parsed + as a valid HTTP/1.0 or 1.1 status line, passed into + :class:`http.client.HTTPConnection`. + + .. note:: + Only works in Python 2. This parameter is ignored in Python 3. + + :param timeout: + Socket timeout in seconds for each individual connection. This can + be a float or integer, which sets the timeout for the HTTP request, + or an instance of :class:`urllib3.util.Timeout` which gives you more + fine-grained control over request timeouts. After the constructor has + been parsed, this is always a `urllib3.util.Timeout` object. + + :param maxsize: + Number of connections to save that can be reused. More than 1 is useful + in multithreaded situations. If ``block`` is set to False, more + connections will be created but they will not be saved once they've + been used. + + :param block: + If set to True, no more than ``maxsize`` connections will be used at + a time. When no free connections are available, the call will block + until a connection has been released. This is a useful side effect for + particular multithreaded situations where one does not want to use more + than maxsize connections per host to prevent flooding. + + :param headers: + Headers to include with all requests, unless other headers are given + explicitly. + + :param retries: + Retry configuration to use by default with requests in this pool. + + :param _proxy: + Parsed proxy URL, should not be used directly, instead, see + :class:`urllib3.ProxyManager` + + :param _proxy_headers: + A dictionary with proxy headers, should not be used directly, + instead, see :class:`urllib3.ProxyManager` + + :param \\**conn_kw: + Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, + :class:`urllib3.connection.HTTPSConnection` instances. + """ + + scheme = "http" + ConnectionCls = HTTPConnection + ResponseCls = HTTPResponse + + def __init__( + self, + host, + port=None, + strict=False, + timeout=Timeout.DEFAULT_TIMEOUT, + maxsize=1, + block=False, + headers=None, + retries=None, + _proxy=None, + _proxy_headers=None, + _proxy_config=None, + **conn_kw + ): + ConnectionPool.__init__(self, host, port) + RequestMethods.__init__(self, headers) + + self.strict = strict + + if not isinstance(timeout, Timeout): + timeout = Timeout.from_float(timeout) + + if retries is None: + retries = Retry.DEFAULT + + self.timeout = timeout + self.retries = retries + + self.pool = self.QueueCls(maxsize) + self.block = block + + self.proxy = _proxy + self.proxy_headers = _proxy_headers or {} + self.proxy_config = _proxy_config + + # Fill the queue up so that doing get() on it will block properly + for _ in xrange(maxsize): + self.pool.put(None) + + # These are mostly for testing and debugging purposes. + self.num_connections = 0 + self.num_requests = 0 + self.conn_kw = conn_kw + + if self.proxy: + # Enable Nagle's algorithm for proxies, to avoid packet fragmentation. + # We cannot know if the user has added default socket options, so we cannot replace the + # list. + self.conn_kw.setdefault("socket_options", []) + + self.conn_kw["proxy"] = self.proxy + self.conn_kw["proxy_config"] = self.proxy_config + + def _new_conn(self): + """ + Return a fresh :class:`HTTPConnection`. 
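+
+ The returned connection is not yet open; the underlying socket is only
+ created lazily when the first request is made on it.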
+ """ + self.num_connections += 1 + log.debug( + "Starting new HTTP connection (%d): %s:%s", + self.num_connections, + self.host, + self.port or "80", + ) + + conn = self.ConnectionCls( + host=self.host, + port=self.port, + timeout=self.timeout.connect_timeout, + strict=self.strict, + **self.conn_kw + ) + return conn + + def _get_conn(self, timeout=None): + """ + Get a connection. Will return a pooled connection if one is available. + + If no connections are available and :prop:`.block` is ``False``, then a + fresh connection is returned. + + :param timeout: + Seconds to wait before giving up and raising + :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and + :prop:`.block` is ``True``. + """ + conn = None + try: + conn = self.pool.get(block=self.block, timeout=timeout) + + except AttributeError: # self.pool is None + raise ClosedPoolError(self, "Pool is closed.") + + except queue.Empty: + if self.block: + raise EmptyPoolError( + self, + "Pool reached maximum size and no more connections are allowed.", + ) + pass # Oh well, we'll create a new connection then + + # If this is a persistent connection, check if it got disconnected + if conn and is_connection_dropped(conn): + log.debug("Resetting dropped connection: %s", self.host) + conn.close() + if getattr(conn, "auto_open", 1) == 0: + # This is a proxied connection that has been mutated by + # http.client._tunnel() and cannot be reused (since it would + # attempt to bypass the proxy) + conn = None + + return conn or self._new_conn() + + def _put_conn(self, conn): + """ + Put a connection back into the pool. + + :param conn: + Connection object for the current host and port as returned by + :meth:`._new_conn` or :meth:`._get_conn`. + + If the pool is already full, the connection is closed and discarded + because we exceeded maxsize. If connections are discarded frequently, + then maxsize should be increased. + + If the pool is closed, then the connection will be closed and discarded. + """ + try: + self.pool.put(conn, block=False) + return # Everything is dandy, done. + except AttributeError: + # self.pool is None. + pass + except queue.Full: + # This should never happen if self.block == True + log.warning("Connection pool is full, discarding connection: %s", self.host) + + # Connection never got put back into the pool, close it. + if conn: + conn.close() + + def _validate_conn(self, conn): + """ + Called right before a request is made, after the socket is created. + """ + pass + + def _prepare_proxy(self, conn): + # Nothing to do for HTTP connections. + pass + + def _get_timeout(self, timeout): + """Helper that always returns a :class:`urllib3.util.Timeout`""" + if timeout is _Default: + return self.timeout.clone() + + if isinstance(timeout, Timeout): + return timeout.clone() + else: + # User passed us an int/float. This is for backwards compatibility, + # can be removed later + return Timeout.from_float(timeout) + + def _raise_timeout(self, err, url, timeout_value): + """Is the error actually a timeout? Will raise a ReadTimeout or pass""" + + if isinstance(err, SocketTimeout): + raise ReadTimeoutError( + self, url, "Read timed out. (read timeout=%s)" % timeout_value + ) + + # See the above comment about EAGAIN in Python 3. In Python 2 we have + # to specifically catch it and throw the timeout error + if hasattr(err, "errno") and err.errno in _blocking_errnos: + raise ReadTimeoutError( + self, url, "Read timed out. (read timeout=%s)" % timeout_value + ) + + # Catch possible read timeouts thrown as SSL errors. 
If not the + # case, rethrow the original. We need to do this because of: + # http://bugs.python.org/issue10272 + if "timed out" in str(err) or "did not complete (read)" in str( + err + ): # Python < 2.7.4 + raise ReadTimeoutError( + self, url, "Read timed out. (read timeout=%s)" % timeout_value + ) + + def _make_request( + self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw + ): + """ + Perform a request on a given urllib connection object taken from our + pool. + + :param conn: + a connection from one of our connection pools + + :param timeout: + Socket timeout in seconds for the request. This can be a + float or integer, which will set the same timeout value for + the socket connect and the socket read, or an instance of + :class:`urllib3.util.Timeout`, which gives you more fine-grained + control over your timeouts. + """ + self.num_requests += 1 + + timeout_obj = self._get_timeout(timeout) + timeout_obj.start_connect() + conn.timeout = timeout_obj.connect_timeout + + # Trigger any extra validation we need to do. + try: + self._validate_conn(conn) + except (SocketTimeout, BaseSSLError) as e: + # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. + self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) + raise + + # conn.request() calls http.client.*.request, not the method in + # urllib3.request. It also calls makefile (recv) on the socket. + try: + if chunked: + conn.request_chunked(method, url, **httplib_request_kw) + else: + conn.request(method, url, **httplib_request_kw) + + # We are swallowing BrokenPipeError (errno.EPIPE) since the server is + # legitimately able to close the connection after sending a valid response. + # With this behaviour, the received response is still readable. + except BrokenPipeError: + # Python 3 + pass + except IOError as e: + # Python 2 and macOS/Linux + # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS + # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ + if e.errno not in { + errno.EPIPE, + errno.ESHUTDOWN, + errno.EPROTOTYPE, + }: + raise + + # Reset the timeout for the recv() on the socket + read_timeout = timeout_obj.read_timeout + + # App Engine doesn't have a sock attr + if getattr(conn, "sock", None): + # In Python 3 socket.py will catch EAGAIN and return None when you + # try and read into the file pointer created by http.client, which + # instead raises a BadStatusLine exception. Instead of catching + # the exception and assuming all BadStatusLine exceptions are read + # timeouts, check for a zero timeout before making the request. + if read_timeout == 0: + raise ReadTimeoutError( + self, url, "Read timed out. (read timeout=%s)" % read_timeout + ) + if read_timeout is Timeout.DEFAULT_TIMEOUT: + conn.sock.settimeout(socket.getdefaulttimeout()) + else: # None or a value + conn.sock.settimeout(read_timeout) + + # Receive the response from the server + try: + try: + # Python 2.7, use buffering of HTTP responses + httplib_response = conn.getresponse(buffering=True) + except TypeError: + # Python 3 + try: + httplib_response = conn.getresponse() + except BaseException as e: + # Remove the TypeError from the exception chain in + # Python 3 (including for exceptions like SystemExit). + # Otherwise it looks like a bug in the code. 
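+ # six.raise_from(value, from_value) compiles to "raise value
+ # from from_value" on Python 3, so passing None here suppresses
+ # the chained traceback of the failed `buffering` probe above.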
+ six.raise_from(e, None) + except (SocketTimeout, BaseSSLError, SocketError) as e: + self._raise_timeout(err=e, url=url, timeout_value=read_timeout) + raise + + # AppEngine doesn't have a version attr. + http_version = getattr(conn, "_http_vsn_str", "HTTP/?") + log.debug( + '%s://%s:%s "%s %s %s" %s %s', + self.scheme, + self.host, + self.port, + method, + url, + http_version, + httplib_response.status, + httplib_response.length, + ) + + try: + assert_header_parsing(httplib_response.msg) + except (HeaderParsingError, TypeError) as hpe: # Platform-specific: Python 3 + log.warning( + "Failed to parse headers (url=%s): %s", + self._absolute_url(url), + hpe, + exc_info=True, + ) + + return httplib_response + + def _absolute_url(self, path): + return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url + + def close(self): + """ + Close all pooled connections and disable the pool. + """ + if self.pool is None: + return + # Disable access to the pool + old_pool, self.pool = self.pool, None + + try: + while True: + conn = old_pool.get(block=False) + if conn: + conn.close() + + except queue.Empty: + pass # Done. + + def is_same_host(self, url): + """ + Check if the given ``url`` is a member of the same host as this + connection pool. + """ + if url.startswith("/"): + return True + + # TODO: Add optional support for socket.gethostbyname checking. + scheme, host, port = get_host(url) + if host is not None: + host = _normalize_host(host, scheme=scheme) + + # Use explicit default port for comparison when none is given + if self.port and not port: + port = port_by_scheme.get(scheme) + elif not self.port and port == port_by_scheme.get(scheme): + port = None + + return (scheme, host, port) == (self.scheme, self.host, self.port) + + def urlopen( + self, + method, + url, + body=None, + headers=None, + retries=None, + redirect=True, + assert_same_host=True, + timeout=_Default, + pool_timeout=None, + release_conn=None, + chunked=False, + body_pos=None, + **response_kw + ): + """ + Get a connection from the pool and perform an HTTP request. This is the + lowest level call for making a request, so you'll need to specify all + the raw details. + + .. note:: + + More commonly, it's appropriate to use a convenience method provided + by :class:`.RequestMethods`, such as :meth:`request`. + + .. note:: + + `release_conn` will only behave as expected if + `preload_content=False` because we want to make + `preload_content=False` the default behaviour someday soon without + breaking backwards compatibility. + + :param method: + HTTP request method (such as GET, POST, PUT, etc.) + + :param url: + The URL to perform the request on. + + :param body: + Data to send in the request body, either :class:`str`, :class:`bytes`, + an iterable of :class:`str`/:class:`bytes`, or a file-like object. + + :param headers: + Dictionary of custom headers to send, such as User-Agent, + If-None-Match, etc. If None, pool headers are used. If provided, + these headers completely replace any pool-specific headers. + + :param retries: + Configure the number of retries to allow before raising a + :class:`~urllib3.exceptions.MaxRetryError` exception. + + Pass ``None`` to retry until you receive a response. Pass a + :class:`~urllib3.util.retry.Retry` object for fine-grained control + over different types of retries. + Pass an integer number to retry connection errors that many times, + but no other types of errors. Pass zero to never retry. + + If ``False``, then retries are disabled and any exception is raised + immediately. 
Also, instead of raising a MaxRetryError on redirects, + the redirect response will be returned. + + :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. + + :param redirect: + If True, automatically handle redirects (status codes 301, 302, + 303, 307, 308). Each redirect counts as a retry. Disabling retries + will disable redirect, too. + + :param assert_same_host: + If ``True``, will make sure that the host of the pool requests is + consistent else will raise HostChangedError. When ``False``, you can + use the pool on an HTTP proxy and request foreign hosts. + + :param timeout: + If specified, overrides the default timeout for this one + request. It may be a float (in seconds) or an instance of + :class:`urllib3.util.Timeout`. + + :param pool_timeout: + If set and the pool is set to block=True, then this method will + block for ``pool_timeout`` seconds and raise EmptyPoolError if no + connection is available within the time period. + + :param release_conn: + If False, then the urlopen call will not release the connection + back into the pool once a response is received (but will release if + you read the entire contents of the response such as when + `preload_content=True`). This is useful if you're not preloading + the response's content immediately. You will need to call + ``r.release_conn()`` on the response ``r`` to return the connection + back into the pool. If None, it takes the value of + ``response_kw.get('preload_content', True)``. + + :param chunked: + If True, urllib3 will send the body using chunked transfer + encoding. Otherwise, urllib3 will send the body using the standard + content-length form. Defaults to False. + + :param int body_pos: + Position to seek to in file-like body in the event of a retry or + redirect. Typically this won't need to be set because urllib3 will + auto-populate the value when needed. + + :param \\**response_kw: + Additional parameters are passed to + :meth:`urllib3.response.HTTPResponse.from_httplib` + """ + + parsed_url = parse_url(url) + destination_scheme = parsed_url.scheme + + if headers is None: + headers = self.headers + + if not isinstance(retries, Retry): + retries = Retry.from_int(retries, redirect=redirect, default=self.retries) + + if release_conn is None: + release_conn = response_kw.get("preload_content", True) + + # Check host + if assert_same_host and not self.is_same_host(url): + raise HostChangedError(self, url, retries) + + # Ensure that the URL we're connecting to is properly encoded + if url.startswith("/"): + url = six.ensure_str(_encode_target(url)) + else: + url = six.ensure_str(parsed_url.url) + + conn = None + + # Track whether `conn` needs to be released before + # returning/raising/recursing. Update this variable if necessary, and + # leave `release_conn` constant throughout the function. That way, if + # the function recurses, the original value of `release_conn` will be + # passed down into the recursive call, and its value will be respected. + # + # See issue #651 [1] for details. + # + # [1] + release_this_conn = release_conn + + http_tunnel_required = connection_requires_http_tunnel( + self.proxy, self.proxy_config, destination_scheme + ) + + # Merge the proxy headers. Only done when not using HTTP CONNECT. We + # have to copy the headers dict so we can safely change it without those + # changes being reflected in anyone else's copy. 
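+ # (Plain HTTP through an HTTP proxy is forwarded as an absolute URI
+ # instead of a CONNECT tunnel, so the proxy headers must accompany
+ # every request rather than a one-time tunnel handshake.)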
+ if not http_tunnel_required: + headers = headers.copy() + headers.update(self.proxy_headers) + + # Must keep the exception bound to a separate variable or else Python 3 + # complains about UnboundLocalError. + err = None + + # Keep track of whether we cleanly exited the except block. This + # ensures we do proper cleanup in finally. + clean_exit = False + + # Rewind body position, if needed. Record current position + # for future rewinds in the event of a redirect/retry. + body_pos = set_file_position(body, body_pos) + + try: + # Request a connection from the queue. + timeout_obj = self._get_timeout(timeout) + conn = self._get_conn(timeout=pool_timeout) + + conn.timeout = timeout_obj.connect_timeout + + is_new_proxy_conn = self.proxy is not None and not getattr( + conn, "sock", None + ) + if is_new_proxy_conn and http_tunnel_required: + self._prepare_proxy(conn) + + # Make the request on the httplib connection object. + httplib_response = self._make_request( + conn, + method, + url, + timeout=timeout_obj, + body=body, + headers=headers, + chunked=chunked, + ) + + # If we're going to release the connection in ``finally:``, then + # the response doesn't need to know about the connection. Otherwise + # it will also try to release it and we'll have a double-release + # mess. + response_conn = conn if not release_conn else None + + # Pass method to Response for length checking + response_kw["request_method"] = method + + # Import httplib's response into our own wrapper object + response = self.ResponseCls.from_httplib( + httplib_response, + pool=self, + connection=response_conn, + retries=retries, + **response_kw + ) + + # Everything went great! + clean_exit = True + + except EmptyPoolError: + # Didn't get a connection from the pool, no need to clean up + clean_exit = True + release_this_conn = False + raise + + except ( + TimeoutError, + HTTPException, + SocketError, + ProtocolError, + BaseSSLError, + SSLError, + CertificateError, + ) as e: + # Discard the connection for these exceptions. It will be + # replaced during the next _get_conn() call. + clean_exit = False + if isinstance(e, (BaseSSLError, CertificateError)): + e = SSLError(e) + elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: + e = ProxyError("Cannot connect to proxy.", e) + elif isinstance(e, (SocketError, HTTPException)): + e = ProtocolError("Connection aborted.", e) + + retries = retries.increment( + method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] + ) + retries.sleep() + + # Keep track of the error for the retry warning. + err = e + + finally: + if not clean_exit: + # We hit some kind of exception, handled or otherwise. We need + # to throw the connection away unless explicitly told not to. + # Close the connection, set the variable to None, and make sure + # we put the None back in the pool to avoid leaking it. + conn = conn and conn.close() + release_this_conn = True + + if release_this_conn: + # Put the connection back to be reused. If the connection is + # expired then it will be None, which will get replaced with a + # fresh connection during _get_conn. + self._put_conn(conn) + + if not conn: + # Try again + log.warning( + "Retrying (%r) after connection broken by '%r': %s", retries, err, url + ) + return self.urlopen( + method, + url, + body, + headers, + retries, + redirect, + assert_same_host, + timeout=timeout, + pool_timeout=pool_timeout, + release_conn=release_conn, + chunked=chunked, + body_pos=body_pos, + **response_kw + ) + + # Handle redirect? 
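+ # get_redirect_location() returns a Location value only for 3xx
+ # responses, so `redirect_location` doubles as the "should we follow
+ # this" flag. A 303 (See Other) always downgrades the method to GET.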
+ redirect_location = redirect and response.get_redirect_location() + if redirect_location: + if response.status == 303: + method = "GET" + + try: + retries = retries.increment(method, url, response=response, _pool=self) + except MaxRetryError: + if retries.raise_on_redirect: + response.drain_conn() + raise + return response + + response.drain_conn() + retries.sleep_for_retry(response) + log.debug("Redirecting %s -> %s", url, redirect_location) + return self.urlopen( + method, + redirect_location, + body, + headers, + retries=retries, + redirect=redirect, + assert_same_host=assert_same_host, + timeout=timeout, + pool_timeout=pool_timeout, + release_conn=release_conn, + chunked=chunked, + body_pos=body_pos, + **response_kw + ) + + # Check if we should retry the HTTP response. + has_retry_after = bool(response.getheader("Retry-After")) + if retries.is_retry(method, response.status, has_retry_after): + try: + retries = retries.increment(method, url, response=response, _pool=self) + except MaxRetryError: + if retries.raise_on_status: + response.drain_conn() + raise + return response + + response.drain_conn() + retries.sleep(response) + log.debug("Retry: %s", url) + return self.urlopen( + method, + url, + body, + headers, + retries=retries, + redirect=redirect, + assert_same_host=assert_same_host, + timeout=timeout, + pool_timeout=pool_timeout, + release_conn=release_conn, + chunked=chunked, + body_pos=body_pos, + **response_kw + ) + + return response + + +class HTTPSConnectionPool(HTTPConnectionPool): + """ + Same as :class:`.HTTPConnectionPool`, but HTTPS. + + :class:`.HTTPSConnection` uses one of ``assert_fingerprint``, + ``assert_hostname`` and ``host`` in this order to verify connections. + If ``assert_hostname`` is False, no verification is done. + + The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``, + ``ca_cert_dir``, ``ssl_version``, ``key_password`` are only used if :mod:`ssl` + is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade + the connection socket into an SSL socket. + """ + + scheme = "https" + ConnectionCls = HTTPSConnection + + def __init__( + self, + host, + port=None, + strict=False, + timeout=Timeout.DEFAULT_TIMEOUT, + maxsize=1, + block=False, + headers=None, + retries=None, + _proxy=None, + _proxy_headers=None, + key_file=None, + cert_file=None, + cert_reqs=None, + key_password=None, + ca_certs=None, + ssl_version=None, + assert_hostname=None, + assert_fingerprint=None, + ca_cert_dir=None, + **conn_kw + ): + + HTTPConnectionPool.__init__( + self, + host, + port, + strict, + timeout, + maxsize, + block, + headers, + retries, + _proxy, + _proxy_headers, + **conn_kw + ) + + self.key_file = key_file + self.cert_file = cert_file + self.cert_reqs = cert_reqs + self.key_password = key_password + self.ca_certs = ca_certs + self.ca_cert_dir = ca_cert_dir + self.ssl_version = ssl_version + self.assert_hostname = assert_hostname + self.assert_fingerprint = assert_fingerprint + + def _prepare_conn(self, conn): + """ + Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket` + and establish the tunnel if proxy is used. 
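+
+ (The CONNECT tunnel itself is set up in :meth:`_prepare_proxy`; this
+ method only copies the pool's certificate settings onto the connection.)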
+ """ + + if isinstance(conn, VerifiedHTTPSConnection): + conn.set_cert( + key_file=self.key_file, + key_password=self.key_password, + cert_file=self.cert_file, + cert_reqs=self.cert_reqs, + ca_certs=self.ca_certs, + ca_cert_dir=self.ca_cert_dir, + assert_hostname=self.assert_hostname, + assert_fingerprint=self.assert_fingerprint, + ) + conn.ssl_version = self.ssl_version + return conn + + def _prepare_proxy(self, conn): + """ + Establishes a tunnel connection through HTTP CONNECT. + + Tunnel connection is established early because otherwise httplib would + improperly set Host: header to proxy's IP:port. + """ + + conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers) + + if self.proxy.scheme == "https": + conn.tls_in_tls_required = True + + conn.connect() + + def _new_conn(self): + """ + Return a fresh :class:`http.client.HTTPSConnection`. + """ + self.num_connections += 1 + log.debug( + "Starting new HTTPS connection (%d): %s:%s", + self.num_connections, + self.host, + self.port or "443", + ) + + if not self.ConnectionCls or self.ConnectionCls is DummyConnection: + raise SSLError( + "Can't connect to HTTPS URL because the SSL module is not available." + ) + + actual_host = self.host + actual_port = self.port + if self.proxy is not None: + actual_host = self.proxy.host + actual_port = self.proxy.port + + conn = self.ConnectionCls( + host=actual_host, + port=actual_port, + timeout=self.timeout.connect_timeout, + strict=self.strict, + cert_file=self.cert_file, + key_file=self.key_file, + key_password=self.key_password, + **self.conn_kw + ) + + return self._prepare_conn(conn) + + def _validate_conn(self, conn): + """ + Called right before a request is made, after the socket is created. + """ + super(HTTPSConnectionPool, self)._validate_conn(conn) + + # Force connect early to allow us to validate the connection. + if not getattr(conn, "sock", None): # AppEngine might not have `.sock` + conn.connect() + + if not conn.is_verified: + warnings.warn( + ( + "Unverified HTTPS request is being made to host '%s'. " + "Adding certificate verification is strongly advised. See: " + "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings" % conn.host + ), + InsecureRequestWarning, + ) + + +def connection_from_url(url, **kw): + """ + Given a url, return an :class:`.ConnectionPool` instance of its host. + + This is a shortcut for not having to parse out the scheme, host, and port + of the url before creating an :class:`.ConnectionPool` instance. + + :param url: + Absolute URL string that must include the scheme. Port is optional. + + :param \\**kw: + Passes additional parameters to the constructor of the appropriate + :class:`.ConnectionPool`. Useful for specifying things like + timeout, maxsize, headers, etc. + + Example:: + + >>> conn = connection_from_url('http://google.com/') + >>> r = conn.request('GET', '/') + """ + scheme, host, port = get_host(url) + port = port or port_by_scheme.get(scheme, 80) + if scheme == "https": + return HTTPSConnectionPool(host, port=port, **kw) + else: + return HTTPConnectionPool(host, port=port, **kw) + + +def _normalize_host(host, scheme): + """ + Normalize hosts for comparisons and use with sockets. + """ + + host = normalize_host(host, scheme) + + # httplib doesn't like it when we include brackets in IPv6 addresses + # Specifically, if we include brackets but also pass the port then + # httplib crazily doubles up the square brackets on the Host header. + # Instead, we need to make sure we never pass ``None`` as the port. 
+ # However, for backward compatibility reasons we can't actually + # *assert* that. See http://bugs.python.org/issue28539 + if host.startswith("[") and host.endswith("]"): + host = host[1:-1] + return host diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/__init__.py b/openpype/hosts/fusion/vendor/urllib3/contrib/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/_appengine_environ.py b/openpype/hosts/fusion/vendor/urllib3/contrib/_appengine_environ.py new file mode 100644 index 0000000000..8765b907d7 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/_appengine_environ.py @@ -0,0 +1,36 @@ +""" +This module provides means to detect the App Engine environment. +""" + +import os + + +def is_appengine(): + return is_local_appengine() or is_prod_appengine() + + +def is_appengine_sandbox(): + """Reports if the app is running in the first generation sandbox. + + The second generation runtimes are technically still in a sandbox, but it + is much less restrictive, so generally you shouldn't need to check for it. + see https://cloud.google.com/appengine/docs/standard/runtimes + """ + return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27" + + +def is_local_appengine(): + return "APPENGINE_RUNTIME" in os.environ and os.environ.get( + "SERVER_SOFTWARE", "" + ).startswith("Development/") + + +def is_prod_appengine(): + return "APPENGINE_RUNTIME" in os.environ and os.environ.get( + "SERVER_SOFTWARE", "" + ).startswith("Google App Engine/") + + +def is_prod_appengine_mvms(): + """Deprecated.""" + return False diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/__init__.py b/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/bindings.py b/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/bindings.py new file mode 100644 index 0000000000..11524d400b --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/bindings.py @@ -0,0 +1,519 @@ +""" +This module uses ctypes to bind a whole bunch of functions and constants from +SecureTransport. The goal here is to provide the low-level API to +SecureTransport. These are essentially the C-level functions and constants, and +they're pretty gross to work with. + +This code is a bastardised version of the code found in Will Bond's oscrypto +library. An enormous debt is owed to him for blazing this trail for us. For +that reason, this code should be considered to be covered both by urllib3's +license and by oscrypto's: + + Copyright (c) 2015-2016 Will Bond + + Permission is hereby granted, free of charge, to any person obtaining a + copy of this software and associated documentation files (the "Software"), + to deal in the Software without restriction, including without limitation + the rights to use, copy, modify, merge, publish, distribute, sublicense, + and/or sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in + all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. +""" +from __future__ import absolute_import + +import platform +from ctypes import ( + CDLL, + CFUNCTYPE, + POINTER, + c_bool, + c_byte, + c_char_p, + c_int32, + c_long, + c_size_t, + c_uint32, + c_ulong, + c_void_p, +) +from ctypes.util import find_library + +from urllib3.packages.six import raise_from + +if platform.system() != "Darwin": + raise ImportError("Only macOS is supported") + +version = platform.mac_ver()[0] +version_info = tuple(map(int, version.split("."))) +if version_info < (10, 8): + raise OSError( + "Only OS X 10.8 and newer are supported, not %s.%s" + % (version_info[0], version_info[1]) + ) + + +def load_cdll(name, macos10_16_path): + """Loads a CDLL by name, falling back to known path on 10.16+""" + try: + # Big Sur is technically 11 but we use 10.16 due to the Big Sur + # beta being labeled as 10.16. + if version_info >= (10, 16): + path = macos10_16_path + else: + path = find_library(name) + if not path: + raise OSError # Caught and reraised as 'ImportError' + return CDLL(path, use_errno=True) + except OSError: + raise_from(ImportError("The library %s failed to load" % name), None) + + +Security = load_cdll( + "Security", "/System/Library/Frameworks/Security.framework/Security" +) +CoreFoundation = load_cdll( + "CoreFoundation", + "/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation", +) + + +Boolean = c_bool +CFIndex = c_long +CFStringEncoding = c_uint32 +CFData = c_void_p +CFString = c_void_p +CFArray = c_void_p +CFMutableArray = c_void_p +CFDictionary = c_void_p +CFError = c_void_p +CFType = c_void_p +CFTypeID = c_ulong + +CFTypeRef = POINTER(CFType) +CFAllocatorRef = c_void_p + +OSStatus = c_int32 + +CFDataRef = POINTER(CFData) +CFStringRef = POINTER(CFString) +CFArrayRef = POINTER(CFArray) +CFMutableArrayRef = POINTER(CFMutableArray) +CFDictionaryRef = POINTER(CFDictionary) +CFArrayCallBacks = c_void_p +CFDictionaryKeyCallBacks = c_void_p +CFDictionaryValueCallBacks = c_void_p + +SecCertificateRef = POINTER(c_void_p) +SecExternalFormat = c_uint32 +SecExternalItemType = c_uint32 +SecIdentityRef = POINTER(c_void_p) +SecItemImportExportFlags = c_uint32 +SecItemImportExportKeyParameters = c_void_p +SecKeychainRef = POINTER(c_void_p) +SSLProtocol = c_uint32 +SSLCipherSuite = c_uint32 +SSLContextRef = POINTER(c_void_p) +SecTrustRef = POINTER(c_void_p) +SSLConnectionRef = c_uint32 +SecTrustResultType = c_uint32 +SecTrustOptionFlags = c_uint32 +SSLProtocolSide = c_uint32 +SSLConnectionType = c_uint32 +SSLSessionOption = c_uint32 + + +try: + Security.SecItemImport.argtypes = [ + CFDataRef, + CFStringRef, + POINTER(SecExternalFormat), + POINTER(SecExternalItemType), + SecItemImportExportFlags, + POINTER(SecItemImportExportKeyParameters), + SecKeychainRef, + POINTER(CFArrayRef), + ] + Security.SecItemImport.restype = OSStatus + + Security.SecCertificateGetTypeID.argtypes = [] + Security.SecCertificateGetTypeID.restype = CFTypeID + + Security.SecIdentityGetTypeID.argtypes = [] + Security.SecIdentityGetTypeID.restype = CFTypeID + + Security.SecKeyGetTypeID.argtypes = [] + Security.SecKeyGetTypeID.restype = CFTypeID + + Security.SecCertificateCreateWithData.argtypes = [CFAllocatorRef, CFDataRef] + Security.SecCertificateCreateWithData.restype = SecCertificateRef + + 
Security.SecCertificateCopyData.argtypes = [SecCertificateRef] + Security.SecCertificateCopyData.restype = CFDataRef + + Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p] + Security.SecCopyErrorMessageString.restype = CFStringRef + + Security.SecIdentityCreateWithCertificate.argtypes = [ + CFTypeRef, + SecCertificateRef, + POINTER(SecIdentityRef), + ] + Security.SecIdentityCreateWithCertificate.restype = OSStatus + + Security.SecKeychainCreate.argtypes = [ + c_char_p, + c_uint32, + c_void_p, + Boolean, + c_void_p, + POINTER(SecKeychainRef), + ] + Security.SecKeychainCreate.restype = OSStatus + + Security.SecKeychainDelete.argtypes = [SecKeychainRef] + Security.SecKeychainDelete.restype = OSStatus + + Security.SecPKCS12Import.argtypes = [ + CFDataRef, + CFDictionaryRef, + POINTER(CFArrayRef), + ] + Security.SecPKCS12Import.restype = OSStatus + + SSLReadFunc = CFUNCTYPE(OSStatus, SSLConnectionRef, c_void_p, POINTER(c_size_t)) + SSLWriteFunc = CFUNCTYPE( + OSStatus, SSLConnectionRef, POINTER(c_byte), POINTER(c_size_t) + ) + + Security.SSLSetIOFuncs.argtypes = [SSLContextRef, SSLReadFunc, SSLWriteFunc] + Security.SSLSetIOFuncs.restype = OSStatus + + Security.SSLSetPeerID.argtypes = [SSLContextRef, c_char_p, c_size_t] + Security.SSLSetPeerID.restype = OSStatus + + Security.SSLSetCertificate.argtypes = [SSLContextRef, CFArrayRef] + Security.SSLSetCertificate.restype = OSStatus + + Security.SSLSetCertificateAuthorities.argtypes = [SSLContextRef, CFTypeRef, Boolean] + Security.SSLSetCertificateAuthorities.restype = OSStatus + + Security.SSLSetConnection.argtypes = [SSLContextRef, SSLConnectionRef] + Security.SSLSetConnection.restype = OSStatus + + Security.SSLSetPeerDomainName.argtypes = [SSLContextRef, c_char_p, c_size_t] + Security.SSLSetPeerDomainName.restype = OSStatus + + Security.SSLHandshake.argtypes = [SSLContextRef] + Security.SSLHandshake.restype = OSStatus + + Security.SSLRead.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)] + Security.SSLRead.restype = OSStatus + + Security.SSLWrite.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)] + Security.SSLWrite.restype = OSStatus + + Security.SSLClose.argtypes = [SSLContextRef] + Security.SSLClose.restype = OSStatus + + Security.SSLGetNumberSupportedCiphers.argtypes = [SSLContextRef, POINTER(c_size_t)] + Security.SSLGetNumberSupportedCiphers.restype = OSStatus + + Security.SSLGetSupportedCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + POINTER(c_size_t), + ] + Security.SSLGetSupportedCiphers.restype = OSStatus + + Security.SSLSetEnabledCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + c_size_t, + ] + Security.SSLSetEnabledCiphers.restype = OSStatus + + Security.SSLGetNumberEnabledCiphers.argtype = [SSLContextRef, POINTER(c_size_t)] + Security.SSLGetNumberEnabledCiphers.restype = OSStatus + + Security.SSLGetEnabledCiphers.argtypes = [ + SSLContextRef, + POINTER(SSLCipherSuite), + POINTER(c_size_t), + ] + Security.SSLGetEnabledCiphers.restype = OSStatus + + Security.SSLGetNegotiatedCipher.argtypes = [SSLContextRef, POINTER(SSLCipherSuite)] + Security.SSLGetNegotiatedCipher.restype = OSStatus + + Security.SSLGetNegotiatedProtocolVersion.argtypes = [ + SSLContextRef, + POINTER(SSLProtocol), + ] + Security.SSLGetNegotiatedProtocolVersion.restype = OSStatus + + Security.SSLCopyPeerTrust.argtypes = [SSLContextRef, POINTER(SecTrustRef)] + Security.SSLCopyPeerTrust.restype = OSStatus + + Security.SecTrustSetAnchorCertificates.argtypes = [SecTrustRef, CFArrayRef] + 
Security.SecTrustSetAnchorCertificates.restype = OSStatus + + Security.SecTrustSetAnchorCertificatesOnly.argstypes = [SecTrustRef, Boolean] + Security.SecTrustSetAnchorCertificatesOnly.restype = OSStatus + + Security.SecTrustEvaluate.argtypes = [SecTrustRef, POINTER(SecTrustResultType)] + Security.SecTrustEvaluate.restype = OSStatus + + Security.SecTrustGetCertificateCount.argtypes = [SecTrustRef] + Security.SecTrustGetCertificateCount.restype = CFIndex + + Security.SecTrustGetCertificateAtIndex.argtypes = [SecTrustRef, CFIndex] + Security.SecTrustGetCertificateAtIndex.restype = SecCertificateRef + + Security.SSLCreateContext.argtypes = [ + CFAllocatorRef, + SSLProtocolSide, + SSLConnectionType, + ] + Security.SSLCreateContext.restype = SSLContextRef + + Security.SSLSetSessionOption.argtypes = [SSLContextRef, SSLSessionOption, Boolean] + Security.SSLSetSessionOption.restype = OSStatus + + Security.SSLSetProtocolVersionMin.argtypes = [SSLContextRef, SSLProtocol] + Security.SSLSetProtocolVersionMin.restype = OSStatus + + Security.SSLSetProtocolVersionMax.argtypes = [SSLContextRef, SSLProtocol] + Security.SSLSetProtocolVersionMax.restype = OSStatus + + try: + Security.SSLSetALPNProtocols.argtypes = [SSLContextRef, CFArrayRef] + Security.SSLSetALPNProtocols.restype = OSStatus + except AttributeError: + # Supported only in 10.12+ + pass + + Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p] + Security.SecCopyErrorMessageString.restype = CFStringRef + + Security.SSLReadFunc = SSLReadFunc + Security.SSLWriteFunc = SSLWriteFunc + Security.SSLContextRef = SSLContextRef + Security.SSLProtocol = SSLProtocol + Security.SSLCipherSuite = SSLCipherSuite + Security.SecIdentityRef = SecIdentityRef + Security.SecKeychainRef = SecKeychainRef + Security.SecTrustRef = SecTrustRef + Security.SecTrustResultType = SecTrustResultType + Security.SecExternalFormat = SecExternalFormat + Security.OSStatus = OSStatus + + Security.kSecImportExportPassphrase = CFStringRef.in_dll( + Security, "kSecImportExportPassphrase" + ) + Security.kSecImportItemIdentity = CFStringRef.in_dll( + Security, "kSecImportItemIdentity" + ) + + # CoreFoundation time! 
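+ # CoreFoundation follows the "Create Rule": objects returned by *Create*
+ # and *Copy* functions are owned by the caller and must eventually be
+ # handed back to CFRelease; CFRetain and CFRelease are bound first so
+ # the Python code can manage those reference counts by hand.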
+ CoreFoundation.CFRetain.argtypes = [CFTypeRef] + CoreFoundation.CFRetain.restype = CFTypeRef + + CoreFoundation.CFRelease.argtypes = [CFTypeRef] + CoreFoundation.CFRelease.restype = None + + CoreFoundation.CFGetTypeID.argtypes = [CFTypeRef] + CoreFoundation.CFGetTypeID.restype = CFTypeID + + CoreFoundation.CFStringCreateWithCString.argtypes = [ + CFAllocatorRef, + c_char_p, + CFStringEncoding, + ] + CoreFoundation.CFStringCreateWithCString.restype = CFStringRef + + CoreFoundation.CFStringGetCStringPtr.argtypes = [CFStringRef, CFStringEncoding] + CoreFoundation.CFStringGetCStringPtr.restype = c_char_p + + CoreFoundation.CFStringGetCString.argtypes = [ + CFStringRef, + c_char_p, + CFIndex, + CFStringEncoding, + ] + CoreFoundation.CFStringGetCString.restype = c_bool + + CoreFoundation.CFDataCreate.argtypes = [CFAllocatorRef, c_char_p, CFIndex] + CoreFoundation.CFDataCreate.restype = CFDataRef + + CoreFoundation.CFDataGetLength.argtypes = [CFDataRef] + CoreFoundation.CFDataGetLength.restype = CFIndex + + CoreFoundation.CFDataGetBytePtr.argtypes = [CFDataRef] + CoreFoundation.CFDataGetBytePtr.restype = c_void_p + + CoreFoundation.CFDictionaryCreate.argtypes = [ + CFAllocatorRef, + POINTER(CFTypeRef), + POINTER(CFTypeRef), + CFIndex, + CFDictionaryKeyCallBacks, + CFDictionaryValueCallBacks, + ] + CoreFoundation.CFDictionaryCreate.restype = CFDictionaryRef + + CoreFoundation.CFDictionaryGetValue.argtypes = [CFDictionaryRef, CFTypeRef] + CoreFoundation.CFDictionaryGetValue.restype = CFTypeRef + + CoreFoundation.CFArrayCreate.argtypes = [ + CFAllocatorRef, + POINTER(CFTypeRef), + CFIndex, + CFArrayCallBacks, + ] + CoreFoundation.CFArrayCreate.restype = CFArrayRef + + CoreFoundation.CFArrayCreateMutable.argtypes = [ + CFAllocatorRef, + CFIndex, + CFArrayCallBacks, + ] + CoreFoundation.CFArrayCreateMutable.restype = CFMutableArrayRef + + CoreFoundation.CFArrayAppendValue.argtypes = [CFMutableArrayRef, c_void_p] + CoreFoundation.CFArrayAppendValue.restype = None + + CoreFoundation.CFArrayGetCount.argtypes = [CFArrayRef] + CoreFoundation.CFArrayGetCount.restype = CFIndex + + CoreFoundation.CFArrayGetValueAtIndex.argtypes = [CFArrayRef, CFIndex] + CoreFoundation.CFArrayGetValueAtIndex.restype = c_void_p + + CoreFoundation.kCFAllocatorDefault = CFAllocatorRef.in_dll( + CoreFoundation, "kCFAllocatorDefault" + ) + CoreFoundation.kCFTypeArrayCallBacks = c_void_p.in_dll( + CoreFoundation, "kCFTypeArrayCallBacks" + ) + CoreFoundation.kCFTypeDictionaryKeyCallBacks = c_void_p.in_dll( + CoreFoundation, "kCFTypeDictionaryKeyCallBacks" + ) + CoreFoundation.kCFTypeDictionaryValueCallBacks = c_void_p.in_dll( + CoreFoundation, "kCFTypeDictionaryValueCallBacks" + ) + + CoreFoundation.CFTypeRef = CFTypeRef + CoreFoundation.CFArrayRef = CFArrayRef + CoreFoundation.CFStringRef = CFStringRef + CoreFoundation.CFDictionaryRef = CFDictionaryRef + +except (AttributeError): + raise ImportError("Error initializing ctypes") + + +class CFConst(object): + """ + A class object that acts as essentially a namespace for CoreFoundation + constants. + """ + + kCFStringEncodingUTF8 = CFStringEncoding(0x08000100) + + +class SecurityConst(object): + """ + A class object that acts as essentially a namespace for Security constants. 
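+
+ The values mirror C-level enums and error codes declared in Apple's
+ Security framework headers (for example SecureTransport.h and
+ CipherSuite.h).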
+ """ + + kSSLSessionOptionBreakOnServerAuth = 0 + + kSSLProtocol2 = 1 + kSSLProtocol3 = 2 + kTLSProtocol1 = 4 + kTLSProtocol11 = 7 + kTLSProtocol12 = 8 + # SecureTransport does not support TLS 1.3 even if there's a constant for it + kTLSProtocol13 = 10 + kTLSProtocolMaxSupported = 999 + + kSSLClientSide = 1 + kSSLStreamType = 0 + + kSecFormatPEMSequence = 10 + + kSecTrustResultInvalid = 0 + kSecTrustResultProceed = 1 + # This gap is present on purpose: this was kSecTrustResultConfirm, which + # is deprecated. + kSecTrustResultDeny = 3 + kSecTrustResultUnspecified = 4 + kSecTrustResultRecoverableTrustFailure = 5 + kSecTrustResultFatalTrustFailure = 6 + kSecTrustResultOtherError = 7 + + errSSLProtocol = -9800 + errSSLWouldBlock = -9803 + errSSLClosedGraceful = -9805 + errSSLClosedNoNotify = -9816 + errSSLClosedAbort = -9806 + + errSSLXCertChainInvalid = -9807 + errSSLCrypto = -9809 + errSSLInternal = -9810 + errSSLCertExpired = -9814 + errSSLCertNotYetValid = -9815 + errSSLUnknownRootCert = -9812 + errSSLNoRootCert = -9813 + errSSLHostNameMismatch = -9843 + errSSLPeerHandshakeFail = -9824 + errSSLPeerUserCancelled = -9839 + errSSLWeakPeerEphemeralDHKey = -9850 + errSSLServerAuthCompleted = -9841 + errSSLRecordOverflow = -9847 + + errSecVerifyFailed = -67808 + errSecNoTrustSettings = -25263 + errSecItemNotFound = -25300 + errSecInvalidTrustSettings = -25262 + + # Cipher suites. We only pick the ones our default cipher string allows. + # Source: https://developer.apple.com/documentation/security/1550981-ssl_cipher_suite_values + TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xC02C + TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xC030 + TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xC02B + TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xC02F + TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA9 + TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA8 + TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009F + TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009E + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xC024 + TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xC028 + TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xC00A + TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xC014 + TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006B + TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039 + TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xC023 + TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xC027 + TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xC009 + TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xC013 + TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067 + TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033 + TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009D + TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009C + TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003D + TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003C + TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035 + TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F + TLS_AES_128_GCM_SHA256 = 0x1301 + TLS_AES_256_GCM_SHA384 = 0x1302 + TLS_AES_128_CCM_8_SHA256 = 0x1305 + TLS_AES_128_CCM_SHA256 = 0x1304 diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/low_level.py b/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/low_level.py new file mode 100644 index 0000000000..ed8120190c --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/_securetransport/low_level.py @@ -0,0 +1,396 @@ +""" +Low-level helpers for the SecureTransport bindings. + +These are Python functions that are not directly related to the high-level APIs +but are necessary to get them to work. They include a whole bunch of low-level +CoreFoundation messing about and memory management. 
The concerns in this module +are almost entirely about trying to avoid memory leaks and providing +appropriate and useful assistance to the higher-level code. +""" +import base64 +import ctypes +import itertools +import os +import re +import ssl +import struct +import tempfile + +from .bindings import CFConst, CoreFoundation, Security + +# This regular expression is used to grab PEM data out of a PEM bundle. +_PEM_CERTS_RE = re.compile( + b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL +) + + +def _cf_data_from_bytes(bytestring): + """ + Given a bytestring, create a CFData object from it. This CFData object must + be CFReleased by the caller. + """ + return CoreFoundation.CFDataCreate( + CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring) + ) + + +def _cf_dictionary_from_tuples(tuples): + """ + Given a list of Python tuples, create an associated CFDictionary. + """ + dictionary_size = len(tuples) + + # We need to get the dictionary keys and values out in the same order. + keys = (t[0] for t in tuples) + values = (t[1] for t in tuples) + cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys) + cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values) + + return CoreFoundation.CFDictionaryCreate( + CoreFoundation.kCFAllocatorDefault, + cf_keys, + cf_values, + dictionary_size, + CoreFoundation.kCFTypeDictionaryKeyCallBacks, + CoreFoundation.kCFTypeDictionaryValueCallBacks, + ) + + +def _cfstr(py_bstr): + """ + Given a Python binary data, create a CFString. + The string must be CFReleased by the caller. + """ + c_str = ctypes.c_char_p(py_bstr) + cf_str = CoreFoundation.CFStringCreateWithCString( + CoreFoundation.kCFAllocatorDefault, + c_str, + CFConst.kCFStringEncodingUTF8, + ) + return cf_str + + +def _create_cfstring_array(lst): + """ + Given a list of Python binary data, create an associated CFMutableArray. + The array must be CFReleased by the caller. + + Raises an ssl.SSLError on failure. + """ + cf_arr = None + try: + cf_arr = CoreFoundation.CFArrayCreateMutable( + CoreFoundation.kCFAllocatorDefault, + 0, + ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), + ) + if not cf_arr: + raise MemoryError("Unable to allocate memory!") + for item in lst: + cf_str = _cfstr(item) + if not cf_str: + raise MemoryError("Unable to allocate memory!") + try: + CoreFoundation.CFArrayAppendValue(cf_arr, cf_str) + finally: + CoreFoundation.CFRelease(cf_str) + except BaseException as e: + if cf_arr: + CoreFoundation.CFRelease(cf_arr) + raise ssl.SSLError("Unable to allocate array: %s" % (e,)) + return cf_arr + + +def _cf_string_to_unicode(value): + """ + Creates a Unicode string from a CFString object. Used entirely for error + reporting. + + Yes, it annoys me quite a lot that this function is this complex. 
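+
+ (CFStringGetCStringPtr is only a zero-copy fast path that may return
+ NULL, so the code must fall back to copying the data out with
+ CFStringGetCString.)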
+ """ + value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p)) + + string = CoreFoundation.CFStringGetCStringPtr( + value_as_void_p, CFConst.kCFStringEncodingUTF8 + ) + if string is None: + buffer = ctypes.create_string_buffer(1024) + result = CoreFoundation.CFStringGetCString( + value_as_void_p, buffer, 1024, CFConst.kCFStringEncodingUTF8 + ) + if not result: + raise OSError("Error copying C string from CFStringRef") + string = buffer.value + if string is not None: + string = string.decode("utf-8") + return string + + +def _assert_no_error(error, exception_class=None): + """ + Checks the return code and throws an exception if there is an error to + report + """ + if error == 0: + return + + cf_error_string = Security.SecCopyErrorMessageString(error, None) + output = _cf_string_to_unicode(cf_error_string) + CoreFoundation.CFRelease(cf_error_string) + + if output is None or output == u"": + output = u"OSStatus %s" % error + + if exception_class is None: + exception_class = ssl.SSLError + + raise exception_class(output) + + +def _cert_array_from_pem(pem_bundle): + """ + Given a bundle of certs in PEM format, turns them into a CFArray of certs + that can be used to validate a cert chain. + """ + # Normalize the PEM bundle's line endings. + pem_bundle = pem_bundle.replace(b"\r\n", b"\n") + + der_certs = [ + base64.b64decode(match.group(1)) for match in _PEM_CERTS_RE.finditer(pem_bundle) + ] + if not der_certs: + raise ssl.SSLError("No root certificates specified") + + cert_array = CoreFoundation.CFArrayCreateMutable( + CoreFoundation.kCFAllocatorDefault, + 0, + ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), + ) + if not cert_array: + raise ssl.SSLError("Unable to allocate memory!") + + try: + for der_bytes in der_certs: + certdata = _cf_data_from_bytes(der_bytes) + if not certdata: + raise ssl.SSLError("Unable to allocate memory!") + cert = Security.SecCertificateCreateWithData( + CoreFoundation.kCFAllocatorDefault, certdata + ) + CoreFoundation.CFRelease(certdata) + if not cert: + raise ssl.SSLError("Unable to build cert object!") + + CoreFoundation.CFArrayAppendValue(cert_array, cert) + CoreFoundation.CFRelease(cert) + except Exception: + # We need to free the array before the exception bubbles further. + # We only want to do that if an error occurs: otherwise, the caller + # should free. + CoreFoundation.CFRelease(cert_array) + + return cert_array + + +def _is_cert(item): + """ + Returns True if a given CFTypeRef is a certificate. + """ + expected = Security.SecCertificateGetTypeID() + return CoreFoundation.CFGetTypeID(item) == expected + + +def _is_identity(item): + """ + Returns True if a given CFTypeRef is an identity. + """ + expected = Security.SecIdentityGetTypeID() + return CoreFoundation.CFGetTypeID(item) == expected + + +def _temporary_keychain(): + """ + This function creates a temporary Mac keychain that we can use to work with + credentials. This keychain uses a one-time password and a temporary file to + store the data. We expect to have one keychain per socket. The returned + SecKeychainRef must be freed by the caller, including calling + SecKeychainDelete. + + Returns a tuple of the SecKeychainRef and the path to the temporary + directory that contains it. + """ + # Unfortunately, SecKeychainCreate requires a path to a keychain. This + # means we cannot use mkstemp to use a generic temporary file. Instead, + # we're going to create a temporary directory and a filename to use there. + # This filename will be 8 random bytes expanded into base64. 
We also need + # some random bytes to password-protect the keychain we're creating, so we + # ask for 40 random bytes. + random_bytes = os.urandom(40) + filename = base64.b16encode(random_bytes[:8]).decode("utf-8") + password = base64.b16encode(random_bytes[8:]) # Must be valid UTF-8 + tempdirectory = tempfile.mkdtemp() + + keychain_path = os.path.join(tempdirectory, filename).encode("utf-8") + + # We now want to create the keychain itself. + keychain = Security.SecKeychainRef() + status = Security.SecKeychainCreate( + keychain_path, len(password), password, False, None, ctypes.byref(keychain) + ) + _assert_no_error(status) + + # Having created the keychain, we want to pass it off to the caller. + return keychain, tempdirectory + + +def _load_items_from_file(keychain, path): + """ + Given a single file, loads all the trust objects from it into arrays and + the keychain. + Returns a tuple of lists: the first list is a list of identities, the + second a list of certs. + """ + certificates = [] + identities = [] + result_array = None + + with open(path, "rb") as f: + raw_filedata = f.read() + + try: + filedata = CoreFoundation.CFDataCreate( + CoreFoundation.kCFAllocatorDefault, raw_filedata, len(raw_filedata) + ) + result_array = CoreFoundation.CFArrayRef() + result = Security.SecItemImport( + filedata, # cert data + None, # Filename, leaving it out for now + None, # What the type of the file is, we don't care + None, # what's in the file, we don't care + 0, # import flags + None, # key params, can include passphrase in the future + keychain, # The keychain to insert into + ctypes.byref(result_array), # Results + ) + _assert_no_error(result) + + # A CFArray is not very useful to us as an intermediary + # representation, so we are going to extract the objects we want + # and then free the array. We don't need to keep hold of keys: the + # keychain already has them! + result_count = CoreFoundation.CFArrayGetCount(result_array) + for index in range(result_count): + item = CoreFoundation.CFArrayGetValueAtIndex(result_array, index) + item = ctypes.cast(item, CoreFoundation.CFTypeRef) + + if _is_cert(item): + CoreFoundation.CFRetain(item) + certificates.append(item) + elif _is_identity(item): + CoreFoundation.CFRetain(item) + identities.append(item) + finally: + if result_array: + CoreFoundation.CFRelease(result_array) + + CoreFoundation.CFRelease(filedata) + + return (identities, certificates) + + +def _load_client_cert_chain(keychain, *paths): + """ + Load certificates and maybe keys from a number of files. Has the end goal + of returning a CFArray containing one SecIdentityRef, and then zero or more + SecCertificateRef objects, suitable for use as a client certificate trust + chain. + """ + # Ok, the strategy. + # + # This relies on knowing that macOS will not give you a SecIdentityRef + # unless you have imported a key into a keychain. This is a somewhat + # artificial limitation of macOS (for example, it doesn't necessarily + # affect iOS), but there is nothing inside Security.framework that lets you + # get a SecIdentityRef without having a key in a keychain. + # + # So the policy here is we take all the files and iterate them in order. + # Each one will use SecItemImport to have one or more objects loaded from + # it. We will also point at a keychain that macOS can use to work with the + # private key. + # + # Once we have all the objects, we'll check what we actually have. If we + # already have a SecIdentityRef in hand, fab: we'll use that. 
Otherwise, + # we'll take the first certificate (which we assume to be our leaf) and + # ask the keychain to give us a SecIdentityRef with that cert's associated + # key. + # + # We'll then return a CFArray containing the trust chain: one + # SecIdentityRef and then zero-or-more SecCertificateRef objects. The + # responsibility for freeing this CFArray will be with the caller. This + # CFArray must remain alive for the entire connection, so in practice it + # will be stored with a single SSLSocket, along with the reference to the + # keychain. + certificates = [] + identities = [] + + # Filter out bad paths. + paths = (path for path in paths if path) + + try: + for file_path in paths: + new_identities, new_certs = _load_items_from_file(keychain, file_path) + identities.extend(new_identities) + certificates.extend(new_certs) + + # Ok, we have everything. The question is: do we have an identity? If + # not, we want to grab one from the first cert we have. + if not identities: + new_identity = Security.SecIdentityRef() + status = Security.SecIdentityCreateWithCertificate( + keychain, certificates[0], ctypes.byref(new_identity) + ) + _assert_no_error(status) + identities.append(new_identity) + + # We now want to release the original certificate, as we no longer + # need it. + CoreFoundation.CFRelease(certificates.pop(0)) + + # We now need to build a new CFArray that holds the trust chain. + trust_chain = CoreFoundation.CFArrayCreateMutable( + CoreFoundation.kCFAllocatorDefault, + 0, + ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), + ) + for item in itertools.chain(identities, certificates): + # ArrayAppendValue does a CFRetain on the item. That's fine, + # because the finally block will release our other refs to them. + CoreFoundation.CFArrayAppendValue(trust_chain, item) + + return trust_chain + finally: + for obj in itertools.chain(identities, certificates): + CoreFoundation.CFRelease(obj) + + +TLS_PROTOCOL_VERSIONS = { + "SSLv2": (0, 2), + "SSLv3": (3, 0), + "TLSv1": (3, 1), + "TLSv1.1": (3, 2), + "TLSv1.2": (3, 3), +} + + +def _build_tls_unknown_ca_alert(version): + """ + Builds a TLS alert record for an unknown CA. + """ + ver_maj, ver_min = TLS_PROTOCOL_VERSIONS[version] + severity_fatal = 0x02 + description_unknown_ca = 0x30 + msg = struct.pack(">BB", severity_fatal, description_unknown_ca) + msg_len = len(msg) + record_type_alert = 0x15 + record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg + return record diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/appengine.py b/openpype/hosts/fusion/vendor/urllib3/contrib/appengine.py new file mode 100644 index 0000000000..f91bdd6e77 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/appengine.py @@ -0,0 +1,314 @@ +""" +This module provides a pool manager that uses Google App Engine's +`URLFetch Service `_. + +Example usage:: + + from urllib3 import PoolManager + from urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox + + if is_appengine_sandbox(): + # AppEngineManager uses AppEngine's URLFetch API behind the scenes + http = AppEngineManager() + else: + # PoolManager uses a socket-level API behind the scenes + http = PoolManager() + + r = http.request('GET', 'https://google.com/') + +There are `limitations `_ to the URLFetch service and it may not be +the best choice for your application. There are three options for using +urllib3 on Google App Engine: + +1. You can use :class:`AppEngineManager` with URLFetch. 
URLFetch is + cost-effective in many circumstances as long as your usage is within the + limitations. +2. You can use a normal :class:`~urllib3.PoolManager` by enabling sockets. + Sockets also have `limitations and restrictions + `_ and have a lower free quota than URLFetch. + To use sockets, be sure to specify the following in your ``app.yaml``:: + + env_variables: + GAE_USE_SOCKETS_HTTPLIB : 'true' + +3. If you are using `App Engine Flexible +`_, you can use the standard +:class:`PoolManager` without any configuration or special environment variables. +""" + +from __future__ import absolute_import + +import io +import logging +import warnings + +from ..exceptions import ( + HTTPError, + HTTPWarning, + MaxRetryError, + ProtocolError, + SSLError, + TimeoutError, +) +from ..packages.six.moves.urllib.parse import urljoin +from ..request import RequestMethods +from ..response import HTTPResponse +from ..util.retry import Retry +from ..util.timeout import Timeout +from . import _appengine_environ + +try: + from google.appengine.api import urlfetch +except ImportError: + urlfetch = None + + +log = logging.getLogger(__name__) + + +class AppEnginePlatformWarning(HTTPWarning): + pass + + +class AppEnginePlatformError(HTTPError): + pass + + +class AppEngineManager(RequestMethods): + """ + Connection manager for Google App Engine sandbox applications. + + This manager uses the URLFetch service directly instead of using the + emulated httplib, and is subject to URLFetch limitations as described in + the App Engine documentation `here + `_. + + Notably it will raise an :class:`AppEnginePlatformError` if: + * URLFetch is not available. + * If you attempt to use this on App Engine Flexible, as full socket + support is available. + * If a request size is more than 10 megabytes. + * If a response size is more than 32 megabytes. + * If you use an unsupported request method such as OPTIONS. + + Beyond those cases, it will raise normal urllib3 errors. + """ + + def __init__( + self, + headers=None, + retries=None, + validate_certificate=True, + urlfetch_retries=True, + ): + if not urlfetch: + raise AppEnginePlatformError( + "URLFetch is not available in this environment." + ) + + warnings.warn( + "urllib3 is using URLFetch on Google App Engine sandbox instead " + "of sockets. 
To use sockets directly instead of URLFetch see "
+            "https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html.",
+            AppEnginePlatformWarning,
+        )
+
+        RequestMethods.__init__(self, headers)
+        self.validate_certificate = validate_certificate
+        self.urlfetch_retries = urlfetch_retries
+
+        self.retries = retries or Retry.DEFAULT
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        # Return False to re-raise any potential exceptions
+        return False
+
+    def urlopen(
+        self,
+        method,
+        url,
+        body=None,
+        headers=None,
+        retries=None,
+        redirect=True,
+        timeout=Timeout.DEFAULT_TIMEOUT,
+        **response_kw
+    ):
+
+        retries = self._get_retries(retries, redirect)
+
+        try:
+            follow_redirects = redirect and retries.redirect != 0 and retries.total
+            response = urlfetch.fetch(
+                url,
+                payload=body,
+                method=method,
+                headers=headers or {},
+                allow_truncated=False,
+                follow_redirects=self.urlfetch_retries and follow_redirects,
+                deadline=self._get_absolute_timeout(timeout),
+                validate_certificate=self.validate_certificate,
+            )
+        except urlfetch.DeadlineExceededError as e:
+            raise TimeoutError(self, e)
+
+        except urlfetch.InvalidURLError as e:
+            if "too large" in str(e):
+                raise AppEnginePlatformError(
+                    "URLFetch request too large, URLFetch only "
+                    "supports requests up to 10mb in size.",
+                    e,
+                )
+            raise ProtocolError(e)
+
+        except urlfetch.DownloadError as e:
+            if "Too many redirects" in str(e):
+                raise MaxRetryError(self, url, reason=e)
+            raise ProtocolError(e)
+
+        except urlfetch.ResponseTooLargeError as e:
+            raise AppEnginePlatformError(
+                "URLFetch response too large, URLFetch only supports "
+                "responses up to 32mb in size.",
+                e,
+            )
+
+        except urlfetch.SSLCertificateError as e:
+            raise SSLError(e)
+
+        except urlfetch.InvalidMethodError as e:
+            raise AppEnginePlatformError(
+                "URLFetch does not support method: %s" % method, e
+            )
+
+        http_response = self._urlfetch_response_to_http_response(
+            response, retries=retries, **response_kw
+        )
+
+        # Handle redirect?
+        redirect_location = redirect and http_response.get_redirect_location()
+        if redirect_location:
+            # Check for redirect response
+            if self.urlfetch_retries and retries.raise_on_redirect:
+                raise MaxRetryError(self, url, "too many redirects")
+            else:
+                if http_response.status == 303:
+                    method = "GET"
+
+                try:
+                    retries = retries.increment(
+                        method, url, response=http_response, _pool=self
+                    )
+                except MaxRetryError:
+                    if retries.raise_on_redirect:
+                        raise MaxRetryError(self, url, "too many redirects")
+                    return http_response
+
+                retries.sleep_for_retry(http_response)
+                log.debug("Redirecting %s -> %s", url, redirect_location)
+                redirect_url = urljoin(url, redirect_location)
+                return self.urlopen(
+                    method,
+                    redirect_url,
+                    body,
+                    headers,
+                    retries=retries,
+                    redirect=redirect,
+                    timeout=timeout,
+                    **response_kw
+                )
+
+        # Check if we should retry the HTTP response.
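+        # A retry happens when the Retry configuration marks this method and
+        # status as retryable, honouring any Retry-After header; each pass
+        # consumes retry budget via retries.increment() and recurses into
+        # urlopen() with the decremented Retry object.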
+ has_retry_after = bool(http_response.getheader("Retry-After")) + if retries.is_retry(method, http_response.status, has_retry_after): + retries = retries.increment(method, url, response=http_response, _pool=self) + log.debug("Retry: %s", url) + retries.sleep(http_response) + return self.urlopen( + method, + url, + body=body, + headers=headers, + retries=retries, + redirect=redirect, + timeout=timeout, + **response_kw + ) + + return http_response + + def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw): + + if is_prod_appengine(): + # Production GAE handles deflate encoding automatically, but does + # not remove the encoding header. + content_encoding = urlfetch_resp.headers.get("content-encoding") + + if content_encoding == "deflate": + del urlfetch_resp.headers["content-encoding"] + + transfer_encoding = urlfetch_resp.headers.get("transfer-encoding") + # We have a full response's content, + # so let's make sure we don't report ourselves as chunked data. + if transfer_encoding == "chunked": + encodings = transfer_encoding.split(",") + encodings.remove("chunked") + urlfetch_resp.headers["transfer-encoding"] = ",".join(encodings) + + original_response = HTTPResponse( + # In order for decoding to work, we must present the content as + # a file-like object. + body=io.BytesIO(urlfetch_resp.content), + msg=urlfetch_resp.header_msg, + headers=urlfetch_resp.headers, + status=urlfetch_resp.status_code, + **response_kw + ) + + return HTTPResponse( + body=io.BytesIO(urlfetch_resp.content), + headers=urlfetch_resp.headers, + status=urlfetch_resp.status_code, + original_response=original_response, + **response_kw + ) + + def _get_absolute_timeout(self, timeout): + if timeout is Timeout.DEFAULT_TIMEOUT: + return None # Defer to URLFetch's default. + if isinstance(timeout, Timeout): + if timeout._read is not None or timeout._connect is not None: + warnings.warn( + "URLFetch does not support granular timeout settings, " + "reverting to total or default URLFetch timeout.", + AppEnginePlatformWarning, + ) + return timeout.total + return timeout + + def _get_retries(self, retries, redirect): + if not isinstance(retries, Retry): + retries = Retry.from_int(retries, redirect=redirect, default=self.retries) + + if retries.connect or retries.read or retries.redirect: + warnings.warn( + "URLFetch only supports total retries and does not " + "recognize connect, read, or redirect retry parameters.", + AppEnginePlatformWarning, + ) + + return retries + + +# Alias methods from _appengine_environ to maintain public API interface. + +is_appengine = _appengine_environ.is_appengine +is_appengine_sandbox = _appengine_environ.is_appengine_sandbox +is_local_appengine = _appengine_environ.is_local_appengine +is_prod_appengine = _appengine_environ.is_prod_appengine +is_prod_appengine_mvms = _appengine_environ.is_prod_appengine_mvms diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/ntlmpool.py b/openpype/hosts/fusion/vendor/urllib3/contrib/ntlmpool.py new file mode 100644 index 0000000000..41a8fd174c --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/ntlmpool.py @@ -0,0 +1,130 @@ +""" +NTLM authenticating pool, contributed by erikcederstran + +Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10 +""" +from __future__ import absolute_import + +import warnings +from logging import getLogger + +from ntlm import ntlm + +from .. 
import HTTPSConnectionPool +from ..packages.six.moves.http_client import HTTPSConnection + +warnings.warn( + "The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed " + "in urllib3 v2.0 release, urllib3 is not able to support it properly due " + "to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. " + "If you are a user of this module please comment in the mentioned issue.", + DeprecationWarning, +) + +log = getLogger(__name__) + + +class NTLMConnectionPool(HTTPSConnectionPool): + """ + Implements an NTLM authentication version of an urllib3 connection pool + """ + + scheme = "https" + + def __init__(self, user, pw, authurl, *args, **kwargs): + """ + authurl is a random URL on the server that is protected by NTLM. + user is the Windows user, probably in the DOMAIN\\username format. + pw is the password for the user. + """ + super(NTLMConnectionPool, self).__init__(*args, **kwargs) + self.authurl = authurl + self.rawuser = user + user_parts = user.split("\\", 1) + self.domain = user_parts[0].upper() + self.user = user_parts[1] + self.pw = pw + + def _new_conn(self): + # Performs the NTLM handshake that secures the connection. The socket + # must be kept open while requests are performed. + self.num_connections += 1 + log.debug( + "Starting NTLM HTTPS connection no. %d: https://%s%s", + self.num_connections, + self.host, + self.authurl, + ) + + headers = {"Connection": "Keep-Alive"} + req_header = "Authorization" + resp_header = "www-authenticate" + + conn = HTTPSConnection(host=self.host, port=self.port) + + # Send negotiation message + headers[req_header] = "NTLM %s" % ntlm.create_NTLM_NEGOTIATE_MESSAGE( + self.rawuser + ) + log.debug("Request headers: %s", headers) + conn.request("GET", self.authurl, None, headers) + res = conn.getresponse() + reshdr = dict(res.getheaders()) + log.debug("Response status: %s %s", res.status, res.reason) + log.debug("Response headers: %s", reshdr) + log.debug("Response data: %s [...]", res.read(100)) + + # Remove the reference to the socket, so that it can not be closed by + # the response object (we want to keep the socket open) + res.fp = None + + # Server should respond with a challenge message + auth_header_values = reshdr[resp_header].split(", ") + auth_header_value = None + for s in auth_header_values: + if s[:5] == "NTLM ": + auth_header_value = s[5:] + if auth_header_value is None: + raise Exception( + "Unexpected %s response header: %s" % (resp_header, reshdr[resp_header]) + ) + + # Send authentication message + ServerChallenge, NegotiateFlags = ntlm.parse_NTLM_CHALLENGE_MESSAGE( + auth_header_value + ) + auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE( + ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags + ) + headers[req_header] = "NTLM %s" % auth_msg + log.debug("Request headers: %s", headers) + conn.request("GET", self.authurl, None, headers) + res = conn.getresponse() + log.debug("Response status: %s %s", res.status, res.reason) + log.debug("Response headers: %s", dict(res.getheaders())) + log.debug("Response data: %s [...]", res.read()[:100]) + if res.status != 200: + if res.status == 401: + raise Exception("Server rejected request: wrong username or password") + raise Exception("Wrong server response: %s %s" % (res.status, res.reason)) + + res.fp = None + log.debug("Connection established") + return conn + + def urlopen( + self, + method, + url, + body=None, + headers=None, + retries=3, + redirect=True, + assert_same_host=True, + ): + if headers is None: + headers = {} + 
headers["Connection"] = "Keep-Alive" + return super(NTLMConnectionPool, self).urlopen( + method, url, body, headers, retries, redirect, assert_same_host + ) diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/pyopenssl.py b/openpype/hosts/fusion/vendor/urllib3/contrib/pyopenssl.py new file mode 100644 index 0000000000..def83afdb2 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/pyopenssl.py @@ -0,0 +1,511 @@ +""" +TLS with SNI_-support for Python 2. Follow these instructions if you would +like to verify TLS certificates in Python 2. Note, the default libraries do +*not* do certificate checking; you need to do additional work to validate +certificates yourself. + +This needs the following packages installed: + +* `pyOpenSSL`_ (tested with 16.0.0) +* `cryptography`_ (minimum 1.3.4, from pyopenssl) +* `idna`_ (minimum 2.0, from cryptography) + +However, pyopenssl depends on cryptography, which depends on idna, so while we +use all three directly here we end up having relatively few packages required. + +You can install them with the following command: + +.. code-block:: bash + + $ python -m pip install pyopenssl cryptography idna + +To activate certificate checking, call +:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code +before you begin making HTTP requests. This can be done in a ``sitecustomize`` +module, or at any other time before your application begins using ``urllib3``, +like this: + +.. code-block:: python + + try: + import urllib3.contrib.pyopenssl + urllib3.contrib.pyopenssl.inject_into_urllib3() + except ImportError: + pass + +Now you can use :mod:`urllib3` as you normally would, and it will support SNI +when the required modules are installed. + +Activating this module also has the positive side effect of disabling SSL/TLS +compression in Python 2 (see `CRIME attack`_). + +.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication +.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit) +.. _pyopenssl: https://www.pyopenssl.org +.. _cryptography: https://cryptography.io +.. _idna: https://github.com/kjd/idna +""" +from __future__ import absolute_import + +import OpenSSL.SSL +from cryptography import x509 +from cryptography.hazmat.backends.openssl import backend as openssl_backend +from cryptography.hazmat.backends.openssl.x509 import _Certificate + +try: + from cryptography.x509 import UnsupportedExtension +except ImportError: + # UnsupportedExtension is gone in cryptography >= 2.1.0 + class UnsupportedExtension(Exception): + pass + + +from io import BytesIO +from socket import error as SocketError +from socket import timeout + +try: # Platform-specific: Python 2 + from socket import _fileobject +except ImportError: # Platform-specific: Python 3 + _fileobject = None + from ..packages.backports.makefile import backport_makefile + +import logging +import ssl +import sys + +from .. import util +from ..packages import six +from ..util.ssl_ import PROTOCOL_TLS_CLIENT + +__all__ = ["inject_into_urllib3", "extract_from_urllib3"] + +# SNI always works. +HAS_SNI = True + +# Map from urllib3 to PyOpenSSL compatible parameter-values. 
+_openssl_versions = { + util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD, + PROTOCOL_TLS_CLIENT: OpenSSL.SSL.SSLv23_METHOD, + ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD, +} + +if hasattr(ssl, "PROTOCOL_SSLv3") and hasattr(OpenSSL.SSL, "SSLv3_METHOD"): + _openssl_versions[ssl.PROTOCOL_SSLv3] = OpenSSL.SSL.SSLv3_METHOD + +if hasattr(ssl, "PROTOCOL_TLSv1_1") and hasattr(OpenSSL.SSL, "TLSv1_1_METHOD"): + _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD + +if hasattr(ssl, "PROTOCOL_TLSv1_2") and hasattr(OpenSSL.SSL, "TLSv1_2_METHOD"): + _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD + + +_stdlib_to_openssl_verify = { + ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE, + ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER, + ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER + + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT, +} +_openssl_to_stdlib_verify = dict((v, k) for k, v in _stdlib_to_openssl_verify.items()) + +# OpenSSL will only write 16K at a time +SSL_WRITE_BLOCKSIZE = 16384 + +orig_util_HAS_SNI = util.HAS_SNI +orig_util_SSLContext = util.ssl_.SSLContext + + +log = logging.getLogger(__name__) + + +def inject_into_urllib3(): + "Monkey-patch urllib3 with PyOpenSSL-backed SSL-support." + + _validate_dependencies_met() + + util.SSLContext = PyOpenSSLContext + util.ssl_.SSLContext = PyOpenSSLContext + util.HAS_SNI = HAS_SNI + util.ssl_.HAS_SNI = HAS_SNI + util.IS_PYOPENSSL = True + util.ssl_.IS_PYOPENSSL = True + + +def extract_from_urllib3(): + "Undo monkey-patching by :func:`inject_into_urllib3`." + + util.SSLContext = orig_util_SSLContext + util.ssl_.SSLContext = orig_util_SSLContext + util.HAS_SNI = orig_util_HAS_SNI + util.ssl_.HAS_SNI = orig_util_HAS_SNI + util.IS_PYOPENSSL = False + util.ssl_.IS_PYOPENSSL = False + + +def _validate_dependencies_met(): + """ + Verifies that PyOpenSSL's package-level dependencies have been met. + Throws `ImportError` if they are not met. + """ + # Method added in `cryptography==1.1`; not available in older versions + from cryptography.x509.extensions import Extensions + + if getattr(Extensions, "get_extension_for_class", None) is None: + raise ImportError( + "'cryptography' module missing required functionality. " + "Try upgrading to v1.3.4 or newer." + ) + + # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509 + # attribute is only present on those versions. + from OpenSSL.crypto import X509 + + x509 = X509() + if getattr(x509, "_x509", None) is None: + raise ImportError( + "'pyOpenSSL' module missing required functionality. " + "Try upgrading to v0.14 or newer." + ) + + +def _dnsname_to_stdlib(name): + """ + Converts a dNSName SubjectAlternativeName field to the form used by the + standard library on the given Python version. + + Cryptography produces a dNSName as a unicode string that was idna-decoded + from ASCII bytes. We need to idna-encode that string to get it back, and + then on Python 3 we also need to convert to unicode via UTF-8 (the stdlib + uses PyUnicode_FromStringAndSize on it, which decodes via UTF-8). + + If the name cannot be idna-encoded then we return None signalling that + the name given should be skipped. + """ + + def idna_encode(name): + """ + Borrowed wholesale from the Python Cryptography Project. It turns out + that we can't just safely call `idna.encode`: it can explode for + wildcard names. This avoids that problem. 
+ """ + import idna + + try: + for prefix in [u"*.", u"."]: + if name.startswith(prefix): + name = name[len(prefix) :] + return prefix.encode("ascii") + idna.encode(name) + return idna.encode(name) + except idna.core.IDNAError: + return None + + # Don't send IPv6 addresses through the IDNA encoder. + if ":" in name: + return name + + name = idna_encode(name) + if name is None: + return None + elif sys.version_info >= (3, 0): + name = name.decode("utf-8") + return name + + +def get_subj_alt_name(peer_cert): + """ + Given an PyOpenSSL certificate, provides all the subject alternative names. + """ + # Pass the cert to cryptography, which has much better APIs for this. + if hasattr(peer_cert, "to_cryptography"): + cert = peer_cert.to_cryptography() + else: + # This is technically using private APIs, but should work across all + # relevant versions before PyOpenSSL got a proper API for this. + cert = _Certificate(openssl_backend, peer_cert._x509) + + # We want to find the SAN extension. Ask Cryptography to locate it (it's + # faster than looping in Python) + try: + ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value + except x509.ExtensionNotFound: + # No such extension, return the empty list. + return [] + except ( + x509.DuplicateExtension, + UnsupportedExtension, + x509.UnsupportedGeneralNameType, + UnicodeError, + ) as e: + # A problem has been found with the quality of the certificate. Assume + # no SAN field is present. + log.warning( + "A problem was encountered with the certificate that prevented " + "urllib3 from finding the SubjectAlternativeName field. This can " + "affect certificate validation. The error was %s", + e, + ) + return [] + + # We want to return dNSName and iPAddress fields. We need to cast the IPs + # back to strings because the match_hostname function wants them as + # strings. + # Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8 + # decoded. This is pretty frustrating, but that's what the standard library + # does with certificates, and so we need to attempt to do the same. + # We also want to skip over names which cannot be idna encoded. + names = [ + ("DNS", name) + for name in map(_dnsname_to_stdlib, ext.get_values_for_type(x509.DNSName)) + if name is not None + ] + names.extend( + ("IP Address", str(name)) for name in ext.get_values_for_type(x509.IPAddress) + ) + + return names + + +class WrappedSocket(object): + """API-compatibility wrapper for Python OpenSSL's Connection-class. + + Note: _makefile_refs, _drop() and _reuse() are needed for the garbage + collector of pypy. 
+ """ + + def __init__(self, connection, socket, suppress_ragged_eofs=True): + self.connection = connection + self.socket = socket + self.suppress_ragged_eofs = suppress_ragged_eofs + self._makefile_refs = 0 + self._closed = False + + def fileno(self): + return self.socket.fileno() + + # Copy-pasted from Python 3.5 source code + def _decref_socketios(self): + if self._makefile_refs > 0: + self._makefile_refs -= 1 + if self._closed: + self.close() + + def recv(self, *args, **kwargs): + try: + data = self.connection.recv(*args, **kwargs) + except OpenSSL.SSL.SysCallError as e: + if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): + return b"" + else: + raise SocketError(str(e)) + except OpenSSL.SSL.ZeroReturnError: + if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: + return b"" + else: + raise + except OpenSSL.SSL.WantReadError: + if not util.wait_for_read(self.socket, self.socket.gettimeout()): + raise timeout("The read operation timed out") + else: + return self.recv(*args, **kwargs) + + # TLS 1.3 post-handshake authentication + except OpenSSL.SSL.Error as e: + raise ssl.SSLError("read error: %r" % e) + else: + return data + + def recv_into(self, *args, **kwargs): + try: + return self.connection.recv_into(*args, **kwargs) + except OpenSSL.SSL.SysCallError as e: + if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): + return 0 + else: + raise SocketError(str(e)) + except OpenSSL.SSL.ZeroReturnError: + if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: + return 0 + else: + raise + except OpenSSL.SSL.WantReadError: + if not util.wait_for_read(self.socket, self.socket.gettimeout()): + raise timeout("The read operation timed out") + else: + return self.recv_into(*args, **kwargs) + + # TLS 1.3 post-handshake authentication + except OpenSSL.SSL.Error as e: + raise ssl.SSLError("read error: %r" % e) + + def settimeout(self, timeout): + return self.socket.settimeout(timeout) + + def _send_until_done(self, data): + while True: + try: + return self.connection.send(data) + except OpenSSL.SSL.WantWriteError: + if not util.wait_for_write(self.socket, self.socket.gettimeout()): + raise timeout() + continue + except OpenSSL.SSL.SysCallError as e: + raise SocketError(str(e)) + + def sendall(self, data): + total_sent = 0 + while total_sent < len(data): + sent = self._send_until_done( + data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE] + ) + total_sent += sent + + def shutdown(self): + # FIXME rethrow compatible exceptions should we ever use this + self.connection.shutdown() + + def close(self): + if self._makefile_refs < 1: + try: + self._closed = True + return self.connection.close() + except OpenSSL.SSL.Error: + return + else: + self._makefile_refs -= 1 + + def getpeercert(self, binary_form=False): + x509 = self.connection.get_peer_certificate() + + if not x509: + return x509 + + if binary_form: + return OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_ASN1, x509) + + return { + "subject": ((("commonName", x509.get_subject().CN),),), + "subjectAltName": get_subj_alt_name(x509), + } + + def version(self): + return self.connection.get_protocol_version_name() + + def _reuse(self): + self._makefile_refs += 1 + + def _drop(self): + if self._makefile_refs < 1: + self.close() + else: + self._makefile_refs -= 1 + + +if _fileobject: # Platform-specific: Python 2 + + def makefile(self, mode, bufsize=-1): + self._makefile_refs += 1 + return _fileobject(self, mode, bufsize, close=True) + + +else: # Platform-specific: Python 3 + makefile = 
backport_makefile + +WrappedSocket.makefile = makefile + + +class PyOpenSSLContext(object): + """ + I am a wrapper class for the PyOpenSSL ``Context`` object. I am responsible + for translating the interface of the standard library ``SSLContext`` object + to calls into PyOpenSSL. + """ + + def __init__(self, protocol): + self.protocol = _openssl_versions[protocol] + self._ctx = OpenSSL.SSL.Context(self.protocol) + self._options = 0 + self.check_hostname = False + + @property + def options(self): + return self._options + + @options.setter + def options(self, value): + self._options = value + self._ctx.set_options(value) + + @property + def verify_mode(self): + return _openssl_to_stdlib_verify[self._ctx.get_verify_mode()] + + @verify_mode.setter + def verify_mode(self, value): + self._ctx.set_verify(_stdlib_to_openssl_verify[value], _verify_callback) + + def set_default_verify_paths(self): + self._ctx.set_default_verify_paths() + + def set_ciphers(self, ciphers): + if isinstance(ciphers, six.text_type): + ciphers = ciphers.encode("utf-8") + self._ctx.set_cipher_list(ciphers) + + def load_verify_locations(self, cafile=None, capath=None, cadata=None): + if cafile is not None: + cafile = cafile.encode("utf-8") + if capath is not None: + capath = capath.encode("utf-8") + try: + self._ctx.load_verify_locations(cafile, capath) + if cadata is not None: + self._ctx.load_verify_locations(BytesIO(cadata)) + except OpenSSL.SSL.Error as e: + raise ssl.SSLError("unable to load trusted certificates: %r" % e) + + def load_cert_chain(self, certfile, keyfile=None, password=None): + self._ctx.use_certificate_chain_file(certfile) + if password is not None: + if not isinstance(password, six.binary_type): + password = password.encode("utf-8") + self._ctx.set_passwd_cb(lambda *_: password) + self._ctx.use_privatekey_file(keyfile or certfile) + + def set_alpn_protocols(self, protocols): + protocols = [six.ensure_binary(p) for p in protocols] + return self._ctx.set_alpn_protos(protocols) + + def wrap_socket( + self, + sock, + server_side=False, + do_handshake_on_connect=True, + suppress_ragged_eofs=True, + server_hostname=None, + ): + cnx = OpenSSL.SSL.Connection(self._ctx, sock) + + if isinstance(server_hostname, six.text_type): # Platform-specific: Python 3 + server_hostname = server_hostname.encode("utf-8") + + if server_hostname is not None: + cnx.set_tlsext_host_name(server_hostname) + + cnx.set_connect_state() + + while True: + try: + cnx.do_handshake() + except OpenSSL.SSL.WantReadError: + if not util.wait_for_read(sock, sock.gettimeout()): + raise timeout("select timed out") + continue + except OpenSSL.SSL.Error as e: + raise ssl.SSLError("bad handshake: %r" % e) + break + + return WrappedSocket(cnx, sock) + + +def _verify_callback(cnx, x509, err_no, err_depth, return_code): + return err_no == 0 diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/securetransport.py b/openpype/hosts/fusion/vendor/urllib3/contrib/securetransport.py new file mode 100644 index 0000000000..554c015fed --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/contrib/securetransport.py @@ -0,0 +1,922 @@ +""" +SecureTranport support for urllib3 via ctypes. + +This makes platform-native TLS available to urllib3 users on macOS without the +use of a compiler. This is an important feature because the Python Package +Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL +that ships with macOS is not capable of doing TLSv1.2. 
The only way to resolve +this is to give macOS users an alternative solution to the problem, and that +solution is to use SecureTransport. + +We use ctypes here because this solution must not require a compiler. That's +because pip is not allowed to require a compiler either. + +This is not intended to be a seriously long-term solution to this problem. +The hope is that PEP 543 will eventually solve this issue for us, at which +point we can retire this contrib module. But in the short term, we need to +solve the impending tire fire that is Python on Mac without this kind of +contrib module. So...here we are. + +To use this module, simply import and inject it:: + + import urllib3.contrib.securetransport + urllib3.contrib.securetransport.inject_into_urllib3() + +Happy TLSing! + +This code is a bastardised version of the code found in Will Bond's oscrypto +library. An enormous debt is owed to him for blazing this trail for us. For +that reason, this code should be considered to be covered both by urllib3's +license and by oscrypto's: + +.. code-block:: + + Copyright (c) 2015-2016 Will Bond + + Permission is hereby granted, free of charge, to any person obtaining a + copy of this software and associated documentation files (the "Software"), + to deal in the Software without restriction, including without limitation + the rights to use, copy, modify, merge, publish, distribute, sublicense, + and/or sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in + all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. +""" +from __future__ import absolute_import + +import contextlib +import ctypes +import errno +import os.path +import shutil +import socket +import ssl +import struct +import threading +import weakref + +import six + +from .. import util +from ..util.ssl_ import PROTOCOL_TLS_CLIENT +from ._securetransport.bindings import CoreFoundation, Security, SecurityConst +from ._securetransport.low_level import ( + _assert_no_error, + _build_tls_unknown_ca_alert, + _cert_array_from_pem, + _create_cfstring_array, + _load_client_cert_chain, + _temporary_keychain, +) + +try: # Platform-specific: Python 2 + from socket import _fileobject +except ImportError: # Platform-specific: Python 3 + _fileobject = None + from ..packages.backports.makefile import backport_makefile + +__all__ = ["inject_into_urllib3", "extract_from_urllib3"] + +# SNI always works +HAS_SNI = True + +orig_util_HAS_SNI = util.HAS_SNI +orig_util_SSLContext = util.ssl_.SSLContext + +# This dictionary is used by the read callback to obtain a handle to the +# calling wrapped socket. This is a pretty silly approach, but for now it'll +# do. I feel like I should be able to smuggle a handle to the wrapped socket +# directly in the SSLConnectionRef, but for now this approach will work I +# guess. +# +# We need to lock around this structure for inserts, but we don't do it for +# reads/writes in the callbacks. 
The reasoning here goes as follows: +# +# 1. It is not possible to call into the callbacks before the dictionary is +# populated, so once in the callback the id must be in the dictionary. +# 2. The callbacks don't mutate the dictionary, they only read from it, and +# so cannot conflict with any of the insertions. +# +# This is good: if we had to lock in the callbacks we'd drastically slow down +# the performance of this code. +_connection_refs = weakref.WeakValueDictionary() +_connection_ref_lock = threading.Lock() + +# Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over +# for no better reason than we need *a* limit, and this one is right there. +SSL_WRITE_BLOCKSIZE = 16384 + +# This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to +# individual cipher suites. We need to do this because this is how +# SecureTransport wants them. +CIPHER_SUITES = [ + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, + SecurityConst.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA, + SecurityConst.TLS_AES_256_GCM_SHA384, + SecurityConst.TLS_AES_128_GCM_SHA256, + SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384, + SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256, + SecurityConst.TLS_AES_128_CCM_8_SHA256, + SecurityConst.TLS_AES_128_CCM_SHA256, + SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256, + SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256, + SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA, + SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA, +] + +# Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of +# TLSv1 and a high of TLSv1.2. For everything else, we pin to that version. 
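+# (In the stdlib, PROTOCOL_TLS is the renamed PROTOCOL_SSLv23, which is why
+# it gets the low/high spread below rather than a pinned version.)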
+# TLSv1 to 1.2 are supported on macOS 10.8+ +_protocol_to_min_max = { + util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), + PROTOCOL_TLS_CLIENT: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), +} + +if hasattr(ssl, "PROTOCOL_SSLv2"): + _protocol_to_min_max[ssl.PROTOCOL_SSLv2] = ( + SecurityConst.kSSLProtocol2, + SecurityConst.kSSLProtocol2, + ) +if hasattr(ssl, "PROTOCOL_SSLv3"): + _protocol_to_min_max[ssl.PROTOCOL_SSLv3] = ( + SecurityConst.kSSLProtocol3, + SecurityConst.kSSLProtocol3, + ) +if hasattr(ssl, "PROTOCOL_TLSv1"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1] = ( + SecurityConst.kTLSProtocol1, + SecurityConst.kTLSProtocol1, + ) +if hasattr(ssl, "PROTOCOL_TLSv1_1"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = ( + SecurityConst.kTLSProtocol11, + SecurityConst.kTLSProtocol11, + ) +if hasattr(ssl, "PROTOCOL_TLSv1_2"): + _protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = ( + SecurityConst.kTLSProtocol12, + SecurityConst.kTLSProtocol12, + ) + + +def inject_into_urllib3(): + """ + Monkey-patch urllib3 with SecureTransport-backed SSL-support. + """ + util.SSLContext = SecureTransportContext + util.ssl_.SSLContext = SecureTransportContext + util.HAS_SNI = HAS_SNI + util.ssl_.HAS_SNI = HAS_SNI + util.IS_SECURETRANSPORT = True + util.ssl_.IS_SECURETRANSPORT = True + + +def extract_from_urllib3(): + """ + Undo monkey-patching by :func:`inject_into_urllib3`. + """ + util.SSLContext = orig_util_SSLContext + util.ssl_.SSLContext = orig_util_SSLContext + util.HAS_SNI = orig_util_HAS_SNI + util.ssl_.HAS_SNI = orig_util_HAS_SNI + util.IS_SECURETRANSPORT = False + util.ssl_.IS_SECURETRANSPORT = False + + +def _read_callback(connection_id, data_buffer, data_length_pointer): + """ + SecureTransport read callback. This is called by ST to request that data + be returned from the socket. + """ + wrapped_socket = None + try: + wrapped_socket = _connection_refs.get(connection_id) + if wrapped_socket is None: + return SecurityConst.errSSLInternal + base_socket = wrapped_socket.socket + + requested_length = data_length_pointer[0] + + timeout = wrapped_socket.gettimeout() + error = None + read_count = 0 + + try: + while read_count < requested_length: + if timeout is None or timeout >= 0: + if not util.wait_for_read(base_socket, timeout): + raise socket.error(errno.EAGAIN, "timed out") + + remaining = requested_length - read_count + buffer = (ctypes.c_char * remaining).from_address( + data_buffer + read_count + ) + chunk_size = base_socket.recv_into(buffer, remaining) + read_count += chunk_size + if not chunk_size: + if not read_count: + return SecurityConst.errSSLClosedGraceful + break + except (socket.error) as e: + error = e.errno + + if error is not None and error != errno.EAGAIN: + data_length_pointer[0] = read_count + if error == errno.ECONNRESET or error == errno.EPIPE: + return SecurityConst.errSSLClosedAbort + raise + + data_length_pointer[0] = read_count + + if read_count != requested_length: + return SecurityConst.errSSLWouldBlock + + return 0 + except Exception as e: + if wrapped_socket is not None: + wrapped_socket._exception = e + return SecurityConst.errSSLInternal + + +def _write_callback(connection_id, data_buffer, data_length_pointer): + """ + SecureTransport write callback. This is called by ST to request that data + actually be sent on the network. 
+ """ + wrapped_socket = None + try: + wrapped_socket = _connection_refs.get(connection_id) + if wrapped_socket is None: + return SecurityConst.errSSLInternal + base_socket = wrapped_socket.socket + + bytes_to_write = data_length_pointer[0] + data = ctypes.string_at(data_buffer, bytes_to_write) + + timeout = wrapped_socket.gettimeout() + error = None + sent = 0 + + try: + while sent < bytes_to_write: + if timeout is None or timeout >= 0: + if not util.wait_for_write(base_socket, timeout): + raise socket.error(errno.EAGAIN, "timed out") + chunk_sent = base_socket.send(data) + sent += chunk_sent + + # This has some needless copying here, but I'm not sure there's + # much value in optimising this data path. + data = data[chunk_sent:] + except (socket.error) as e: + error = e.errno + + if error is not None and error != errno.EAGAIN: + data_length_pointer[0] = sent + if error == errno.ECONNRESET or error == errno.EPIPE: + return SecurityConst.errSSLClosedAbort + raise + + data_length_pointer[0] = sent + + if sent != bytes_to_write: + return SecurityConst.errSSLWouldBlock + + return 0 + except Exception as e: + if wrapped_socket is not None: + wrapped_socket._exception = e + return SecurityConst.errSSLInternal + + +# We need to keep these two objects references alive: if they get GC'd while +# in use then SecureTransport could attempt to call a function that is in freed +# memory. That would be...uh...bad. Yeah, that's the word. Bad. +_read_callback_pointer = Security.SSLReadFunc(_read_callback) +_write_callback_pointer = Security.SSLWriteFunc(_write_callback) + + +class WrappedSocket(object): + """ + API-compatibility wrapper for Python's OpenSSL wrapped socket object. + + Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage + collector of PyPy. + """ + + def __init__(self, socket): + self.socket = socket + self.context = None + self._makefile_refs = 0 + self._closed = False + self._exception = None + self._keychain = None + self._keychain_dir = None + self._client_cert_chain = None + + # We save off the previously-configured timeout and then set it to + # zero. This is done because we use select and friends to handle the + # timeouts, but if we leave the timeout set on the lower socket then + # Python will "kindly" call select on that socket again for us. Avoid + # that by forcing the timeout to zero. + self._timeout = self.socket.gettimeout() + self.socket.settimeout(0) + + @contextlib.contextmanager + def _raise_on_error(self): + """ + A context manager that can be used to wrap calls that do I/O from + SecureTransport. If any of the I/O callbacks hit an exception, this + context manager will correctly propagate the exception after the fact. + This avoids silently swallowing those exceptions. + + It also correctly forces the socket closed. + """ + self._exception = None + + # We explicitly don't catch around this yield because in the unlikely + # event that an exception was hit in the block we don't want to swallow + # it. + yield + if self._exception is not None: + exception, self._exception = self._exception, None + self.close() + raise exception + + def _set_ciphers(self): + """ + Sets up the allowed ciphers. By default this matches the set in + util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done + custom and doesn't allow changing at this time, mostly because parsing + OpenSSL cipher strings is going to be a freaking nightmare. 
+ """ + ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES) + result = Security.SSLSetEnabledCiphers( + self.context, ciphers, len(CIPHER_SUITES) + ) + _assert_no_error(result) + + def _set_alpn_protocols(self, protocols): + """ + Sets up the ALPN protocols on the context. + """ + if not protocols: + return + protocols_arr = _create_cfstring_array(protocols) + try: + result = Security.SSLSetALPNProtocols(self.context, protocols_arr) + _assert_no_error(result) + finally: + CoreFoundation.CFRelease(protocols_arr) + + def _custom_validate(self, verify, trust_bundle): + """ + Called when we have set custom validation. We do this in two cases: + first, when cert validation is entirely disabled; and second, when + using a custom trust DB. + Raises an SSLError if the connection is not trusted. + """ + # If we disabled cert validation, just say: cool. + if not verify: + return + + successes = ( + SecurityConst.kSecTrustResultUnspecified, + SecurityConst.kSecTrustResultProceed, + ) + try: + trust_result = self._evaluate_trust(trust_bundle) + if trust_result in successes: + return + reason = "error code: %d" % (trust_result,) + except Exception as e: + # Do not trust on error + reason = "exception: %r" % (e,) + + # SecureTransport does not send an alert nor shuts down the connection. + rec = _build_tls_unknown_ca_alert(self.version()) + self.socket.sendall(rec) + # close the connection immediately + # l_onoff = 1, activate linger + # l_linger = 0, linger for 0 seoncds + opts = struct.pack("ii", 1, 0) + self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, opts) + self.close() + raise ssl.SSLError("certificate verify failed, %s" % reason) + + def _evaluate_trust(self, trust_bundle): + # We want data in memory, so load it up. + if os.path.isfile(trust_bundle): + with open(trust_bundle, "rb") as f: + trust_bundle = f.read() + + cert_array = None + trust = Security.SecTrustRef() + + try: + # Get a CFArray that contains the certs we want. + cert_array = _cert_array_from_pem(trust_bundle) + + # Ok, now the hard part. We want to get the SecTrustRef that ST has + # created for this connection, shove our CAs into it, tell ST to + # ignore everything else it knows, and then ask if it can build a + # chain. This is a buuuunch of code. + result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust)) + _assert_no_error(result) + if not trust: + raise ssl.SSLError("Failed to copy trust reference") + + result = Security.SecTrustSetAnchorCertificates(trust, cert_array) + _assert_no_error(result) + + result = Security.SecTrustSetAnchorCertificatesOnly(trust, True) + _assert_no_error(result) + + trust_result = Security.SecTrustResultType() + result = Security.SecTrustEvaluate(trust, ctypes.byref(trust_result)) + _assert_no_error(result) + finally: + if trust: + CoreFoundation.CFRelease(trust) + + if cert_array is not None: + CoreFoundation.CFRelease(cert_array) + + return trust_result.value + + def handshake( + self, + server_hostname, + verify, + trust_bundle, + min_version, + max_version, + client_cert, + client_key, + client_key_passphrase, + alpn_protocols, + ): + """ + Actually performs the TLS handshake. This is run automatically by + wrapped socket, and shouldn't be needed in user code. + """ + # First, we do the initial bits of connection setup. We need to create + # a context, set its I/O funcs, and set the connection reference. 
+ self.context = Security.SSLCreateContext( + None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType + ) + result = Security.SSLSetIOFuncs( + self.context, _read_callback_pointer, _write_callback_pointer + ) + _assert_no_error(result) + + # Here we need to compute the handle to use. We do this by taking the + # id of self modulo 2**31 - 1. If this is already in the dictionary, we + # just keep incrementing by one until we find a free space. + with _connection_ref_lock: + handle = id(self) % 2147483647 + while handle in _connection_refs: + handle = (handle + 1) % 2147483647 + _connection_refs[handle] = self + + result = Security.SSLSetConnection(self.context, handle) + _assert_no_error(result) + + # If we have a server hostname, we should set that too. + if server_hostname: + if not isinstance(server_hostname, bytes): + server_hostname = server_hostname.encode("utf-8") + + result = Security.SSLSetPeerDomainName( + self.context, server_hostname, len(server_hostname) + ) + _assert_no_error(result) + + # Setup the ciphers. + self._set_ciphers() + + # Setup the ALPN protocols. + self._set_alpn_protocols(alpn_protocols) + + # Set the minimum and maximum TLS versions. + result = Security.SSLSetProtocolVersionMin(self.context, min_version) + _assert_no_error(result) + + result = Security.SSLSetProtocolVersionMax(self.context, max_version) + _assert_no_error(result) + + # If there's a trust DB, we need to use it. We do that by telling + # SecureTransport to break on server auth. We also do that if we don't + # want to validate the certs at all: we just won't actually do any + # authing in that case. + if not verify or trust_bundle is not None: + result = Security.SSLSetSessionOption( + self.context, SecurityConst.kSSLSessionOptionBreakOnServerAuth, True + ) + _assert_no_error(result) + + # If there's a client cert, we need to use it. + if client_cert: + self._keychain, self._keychain_dir = _temporary_keychain() + self._client_cert_chain = _load_client_cert_chain( + self._keychain, client_cert, client_key + ) + result = Security.SSLSetCertificate(self.context, self._client_cert_chain) + _assert_no_error(result) + + while True: + with self._raise_on_error(): + result = Security.SSLHandshake(self.context) + + if result == SecurityConst.errSSLWouldBlock: + raise socket.timeout("handshake timed out") + elif result == SecurityConst.errSSLServerAuthCompleted: + self._custom_validate(verify, trust_bundle) + continue + else: + _assert_no_error(result) + break + + def fileno(self): + return self.socket.fileno() + + # Copy-pasted from Python 3.5 source code + def _decref_socketios(self): + if self._makefile_refs > 0: + self._makefile_refs -= 1 + if self._closed: + self.close() + + def recv(self, bufsiz): + buffer = ctypes.create_string_buffer(bufsiz) + bytes_read = self.recv_into(buffer, bufsiz) + data = buffer[:bytes_read] + return data + + def recv_into(self, buffer, nbytes=None): + # Read short on EOF. + if self._closed: + return 0 + + if nbytes is None: + nbytes = len(buffer) + + buffer = (ctypes.c_char * nbytes).from_buffer(buffer) + processed_bytes = ctypes.c_size_t(0) + + with self._raise_on_error(): + result = Security.SSLRead( + self.context, buffer, nbytes, ctypes.byref(processed_bytes) + ) + + # There are some result codes that we want to treat as "not always + # errors". Specifically, those are errSSLWouldBlock, + # errSSLClosedGraceful, and errSSLClosedNoNotify. + if result == SecurityConst.errSSLWouldBlock: + # If we didn't process any bytes, then this was just a time out. 
+ # However, we can get errSSLWouldBlock in situations when we *did* + # read some data, and in those cases we should just read "short" + # and return. + if processed_bytes.value == 0: + # Timed out, no data read. + raise socket.timeout("recv timed out") + elif result in ( + SecurityConst.errSSLClosedGraceful, + SecurityConst.errSSLClosedNoNotify, + ): + # The remote peer has closed this connection. We should do so as + # well. Note that we don't actually return here because in + # principle this could actually be fired along with return data. + # It's unlikely though. + self.close() + else: + _assert_no_error(result) + + # Ok, we read and probably succeeded. We should return whatever data + # was actually read. + return processed_bytes.value + + def settimeout(self, timeout): + self._timeout = timeout + + def gettimeout(self): + return self._timeout + + def send(self, data): + processed_bytes = ctypes.c_size_t(0) + + with self._raise_on_error(): + result = Security.SSLWrite( + self.context, data, len(data), ctypes.byref(processed_bytes) + ) + + if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0: + # Timed out + raise socket.timeout("send timed out") + else: + _assert_no_error(result) + + # We sent, and probably succeeded. Tell them how much we sent. + return processed_bytes.value + + def sendall(self, data): + total_sent = 0 + while total_sent < len(data): + sent = self.send(data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE]) + total_sent += sent + + def shutdown(self): + with self._raise_on_error(): + Security.SSLClose(self.context) + + def close(self): + # TODO: should I do clean shutdown here? Do I have to? + if self._makefile_refs < 1: + self._closed = True + if self.context: + CoreFoundation.CFRelease(self.context) + self.context = None + if self._client_cert_chain: + CoreFoundation.CFRelease(self._client_cert_chain) + self._client_cert_chain = None + if self._keychain: + Security.SecKeychainDelete(self._keychain) + CoreFoundation.CFRelease(self._keychain) + shutil.rmtree(self._keychain_dir) + self._keychain = self._keychain_dir = None + return self.socket.close() + else: + self._makefile_refs -= 1 + + def getpeercert(self, binary_form=False): + # Urgh, annoying. + # + # Here's how we do this: + # + # 1. Call SSLCopyPeerTrust to get hold of the trust object for this + # connection. + # 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf. + # 3. To get the CN, call SecCertificateCopyCommonName and process that + # string so that it's of the appropriate type. + # 4. To get the SAN, we need to do something a bit more complex: + # a. Call SecCertificateCopyValues to get the data, requesting + # kSecOIDSubjectAltName. + # b. Mess about with this dictionary to try to get the SANs out. + # + # This is gross. Really gross. It's going to be a few hundred LoC extra + # just to repeat something that SecureTransport can *already do*. So my + # operating assumption at this time is that what we want to do is + # instead to just flag to urllib3 that it shouldn't do its own hostname + # validation when using SecureTransport. + if not binary_form: + raise ValueError("SecureTransport only supports dumping binary certs") + trust = Security.SecTrustRef() + certdata = None + der_bytes = None + + try: + # Grab the trust store. + result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust)) + _assert_no_error(result) + if not trust: + # Probably we haven't done the handshake yet. No biggie. 
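+                # (SSLCopyPeerTrust can hand back a NULL trust ref before the
+                # handshake has completed, so report "no certificate".)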
+ return None + + cert_count = Security.SecTrustGetCertificateCount(trust) + if not cert_count: + # Also a case that might happen if we haven't handshaked. + # Handshook? Handshaken? + return None + + leaf = Security.SecTrustGetCertificateAtIndex(trust, 0) + assert leaf + + # Ok, now we want the DER bytes. + certdata = Security.SecCertificateCopyData(leaf) + assert certdata + + data_length = CoreFoundation.CFDataGetLength(certdata) + data_buffer = CoreFoundation.CFDataGetBytePtr(certdata) + der_bytes = ctypes.string_at(data_buffer, data_length) + finally: + if certdata: + CoreFoundation.CFRelease(certdata) + if trust: + CoreFoundation.CFRelease(trust) + + return der_bytes + + def version(self): + protocol = Security.SSLProtocol() + result = Security.SSLGetNegotiatedProtocolVersion( + self.context, ctypes.byref(protocol) + ) + _assert_no_error(result) + if protocol.value == SecurityConst.kTLSProtocol13: + raise ssl.SSLError("SecureTransport does not support TLS 1.3") + elif protocol.value == SecurityConst.kTLSProtocol12: + return "TLSv1.2" + elif protocol.value == SecurityConst.kTLSProtocol11: + return "TLSv1.1" + elif protocol.value == SecurityConst.kTLSProtocol1: + return "TLSv1" + elif protocol.value == SecurityConst.kSSLProtocol3: + return "SSLv3" + elif protocol.value == SecurityConst.kSSLProtocol2: + return "SSLv2" + else: + raise ssl.SSLError("Unknown TLS version: %r" % protocol) + + def _reuse(self): + self._makefile_refs += 1 + + def _drop(self): + if self._makefile_refs < 1: + self.close() + else: + self._makefile_refs -= 1 + + +if _fileobject: # Platform-specific: Python 2 + + def makefile(self, mode, bufsize=-1): + self._makefile_refs += 1 + return _fileobject(self, mode, bufsize, close=True) + + +else: # Platform-specific: Python 3 + + def makefile(self, mode="r", buffering=None, *args, **kwargs): + # We disable buffering with SecureTransport because it conflicts with + # the buffering that ST does internally (see issue #1153 for more). + buffering = 0 + return backport_makefile(self, mode, buffering, *args, **kwargs) + + +WrappedSocket.makefile = makefile + + +class SecureTransportContext(object): + """ + I am a wrapper class for the SecureTransport library, to translate the + interface of the standard library ``SSLContext`` object to calls into + SecureTransport. + """ + + def __init__(self, protocol): + self._min_version, self._max_version = _protocol_to_min_max[protocol] + self._options = 0 + self._verify = False + self._trust_bundle = None + self._client_cert = None + self._client_key = None + self._client_key_passphrase = None + self._alpn_protocols = None + + @property + def check_hostname(self): + """ + SecureTransport cannot have its hostname checking disabled. For more, + see the comment on getpeercert() in this file. + """ + return True + + @check_hostname.setter + def check_hostname(self, value): + """ + SecureTransport cannot have its hostname checking disabled. For more, + see the comment on getpeercert() in this file. + """ + pass + + @property + def options(self): + # TODO: Well, crap. + # + # So this is the bit of the code that is the most likely to cause us + # trouble. Essentially we need to enumerate all of the SSL options that + # users might want to use and try to see if we can sensibly translate + # them, or whether we should just ignore them. + return self._options + + @options.setter + def options(self, value): + # TODO: Update in line with above. 
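+        # For now the raw option bits are only recorded; nothing is translated
+        # into SecureTransport session options yet.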
+        self._options = value
+
+    @property
+    def verify_mode(self):
+        return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE
+
+    @verify_mode.setter
+    def verify_mode(self, value):
+        self._verify = True if value == ssl.CERT_REQUIRED else False
+
+    def set_default_verify_paths(self):
+        # So, this has to do something a bit weird. Specifically, what it does
+        # is nothing.
+        #
+        # This means that, if we had previously had load_verify_locations
+        # called, this does not undo that. We need to do that because it turns
+        # out that the rest of the urllib3 code will attempt to load the
+        # default verify paths if it hasn't been told about any paths, even if
+        # the context itself was configured sometime earlier. We resolve that
+        # by just ignoring it.
+        pass
+
+    def load_default_certs(self):
+        return self.set_default_verify_paths()
+
+    def set_ciphers(self, ciphers):
+        # For now, we just require the default cipher string.
+        if ciphers != util.ssl_.DEFAULT_CIPHERS:
+            raise ValueError("SecureTransport doesn't support custom cipher strings")
+
+    def load_verify_locations(self, cafile=None, capath=None, cadata=None):
+        # OK, we only really support cadata and cafile.
+        if capath is not None:
+            raise ValueError("SecureTransport does not support cert directories")
+
+        # Raise if cafile does not exist.
+        if cafile is not None:
+            with open(cafile):
+                pass
+
+        self._trust_bundle = cafile or cadata
+
+    def load_cert_chain(self, certfile, keyfile=None, password=None):
+        self._client_cert = certfile
+        self._client_key = keyfile
+        # handshake() reads the passphrase from this attribute.
+        self._client_key_passphrase = password
+
+    def set_alpn_protocols(self, protocols):
+        """
+        Sets the ALPN protocols that will later be set on the context.
+
+        Raises a NotImplementedError if ALPN is not supported.
+        """
+        if not hasattr(Security, "SSLSetALPNProtocols"):
+            raise NotImplementedError(
+                "SecureTransport supports ALPN only in macOS 10.12+"
+            )
+        self._alpn_protocols = [six.ensure_binary(p) for p in protocols]
+
+    def wrap_socket(
+        self,
+        sock,
+        server_side=False,
+        do_handshake_on_connect=True,
+        suppress_ragged_eofs=True,
+        server_hostname=None,
+    ):
+        # So, what do we do here? Firstly, we assert some properties. This is a
+        # stripped down shim, so there is some functionality we don't support.
+        # See PEP 543 for the real deal.
+        assert not server_side
+        assert do_handshake_on_connect
+        assert suppress_ragged_eofs
+
+        # Ok, we're good to go. Now we want to create the wrapped socket object
+        # and store it in the appropriate place.
+        wrapped_socket = WrappedSocket(sock)
+
+        # Now we can handshake
+        wrapped_socket.handshake(
+            server_hostname,
+            self._verify,
+            self._trust_bundle,
+            self._min_version,
+            self._max_version,
+            self._client_cert,
+            self._client_key,
+            self._client_key_passphrase,
+            self._alpn_protocols,
+        )
+        return wrapped_socket
diff --git a/openpype/hosts/fusion/vendor/urllib3/contrib/socks.py b/openpype/hosts/fusion/vendor/urllib3/contrib/socks.py
new file mode 100644
index 0000000000..c326e80dd1
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/urllib3/contrib/socks.py
@@ -0,0 +1,216 @@
+# -*- coding: utf-8 -*-
+"""
+This module contains provisional support for SOCKS proxies from within
+urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and
+SOCKS5. To enable its functionality, either install PySocks or install this
+module with the ``socks`` extra.
+
+The SOCKS implementation supports the full range of urllib3 features. It also
It also
+supports the following SOCKS features:
+
+- SOCKS4A (``proxy_url='socks4a://...'``)
+- SOCKS4 (``proxy_url='socks4://...'``)
+- SOCKS5 with remote DNS (``proxy_url='socks5h://...'``)
+- SOCKS5 with local DNS (``proxy_url='socks5://...'``)
+- Usernames and passwords for the SOCKS proxy
+
+.. note::
+   It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in
+   your ``proxy_url`` to ensure that DNS resolution is done from the remote
+   server instead of client-side when connecting to a domain name.
+
+SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5
+supports IPv4, IPv6, and domain names.
+
+When connecting to a SOCKS4 proxy the ``username`` portion of the ``proxy_url``
+will be sent as the ``userid`` section of the SOCKS request:
+
+.. code-block:: python
+
+    proxy_url="socks4a://<userid>@proxy-host"
+
+When connecting to a SOCKS5 proxy the ``username`` and ``password`` portion
+of the ``proxy_url`` will be sent as the username/password to authenticate
+with the proxy:
+
+.. code-block:: python
+
+    proxy_url="socks5h://<username>:<password>@proxy-host"
+
+"""
+from __future__ import absolute_import
+
+try:
+    import socks
+except ImportError:
+    import warnings
+
+    from ..exceptions import DependencyWarning
+
+    warnings.warn(
+        (
+            "SOCKS support in urllib3 requires the installation of optional "
+            "dependencies: specifically, PySocks. For more information, see "
+            "https://urllib3.readthedocs.io/en/1.26.x/contrib.html#socks-proxies"
+        ),
+        DependencyWarning,
+    )
+    raise
+
+from socket import error as SocketError
+from socket import timeout as SocketTimeout
+
+from ..connection import HTTPConnection, HTTPSConnection
+from ..connectionpool import HTTPConnectionPool, HTTPSConnectionPool
+from ..exceptions import ConnectTimeoutError, NewConnectionError
+from ..poolmanager import PoolManager
+from ..util.url import parse_url
+
+try:
+    import ssl
+except ImportError:
+    ssl = None
+
+
+class SOCKSConnection(HTTPConnection):
+    """
+    A plain-text HTTP connection that connects via a SOCKS proxy.
+    """
+
+    def __init__(self, *args, **kwargs):
+        self._socks_options = kwargs.pop("_socks_options")
+        super(SOCKSConnection, self).__init__(*args, **kwargs)
+
+    def _new_conn(self):
+        """
+        Establish a new connection via the SOCKS proxy.
+        """
+        extra_kw = {}
+        if self.source_address:
+            extra_kw["source_address"] = self.source_address
+
+        if self.socket_options:
+            extra_kw["socket_options"] = self.socket_options
+
+        try:
+            conn = socks.create_connection(
+                (self.host, self.port),
+                proxy_type=self._socks_options["socks_version"],
+                proxy_addr=self._socks_options["proxy_host"],
+                proxy_port=self._socks_options["proxy_port"],
+                proxy_username=self._socks_options["username"],
+                proxy_password=self._socks_options["password"],
+                proxy_rdns=self._socks_options["rdns"],
+                timeout=self.timeout,
+                **extra_kw
+            )
+
+        except SocketTimeout:
+            raise ConnectTimeoutError(
+                self,
+                "Connection to %s timed out. (connect timeout=%s)"
+                % (self.host, self.timeout),
+            )
+
+        except socks.ProxyError as e:
+            # This is fragile as hell, but it seems to be the only way to raise
+            # useful errors here.
+            if e.socket_err:
+                error = e.socket_err
+                if isinstance(error, SocketTimeout):
+                    raise ConnectTimeoutError(
+                        self,
+                        "Connection to %s timed out.
(connect timeout=%s)" + % (self.host, self.timeout), + ) + else: + raise NewConnectionError( + self, "Failed to establish a new connection: %s" % error + ) + else: + raise NewConnectionError( + self, "Failed to establish a new connection: %s" % e + ) + + except SocketError as e: # Defensive: PySocks should catch all these. + raise NewConnectionError( + self, "Failed to establish a new connection: %s" % e + ) + + return conn + + +# We don't need to duplicate the Verified/Unverified distinction from +# urllib3/connection.py here because the HTTPSConnection will already have been +# correctly set to either the Verified or Unverified form by that module. This +# means the SOCKSHTTPSConnection will automatically be the correct type. +class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection): + pass + + +class SOCKSHTTPConnectionPool(HTTPConnectionPool): + ConnectionCls = SOCKSConnection + + +class SOCKSHTTPSConnectionPool(HTTPSConnectionPool): + ConnectionCls = SOCKSHTTPSConnection + + +class SOCKSProxyManager(PoolManager): + """ + A version of the urllib3 ProxyManager that routes connections via the + defined SOCKS proxy. + """ + + pool_classes_by_scheme = { + "http": SOCKSHTTPConnectionPool, + "https": SOCKSHTTPSConnectionPool, + } + + def __init__( + self, + proxy_url, + username=None, + password=None, + num_pools=10, + headers=None, + **connection_pool_kw + ): + parsed = parse_url(proxy_url) + + if username is None and password is None and parsed.auth is not None: + split = parsed.auth.split(":") + if len(split) == 2: + username, password = split + if parsed.scheme == "socks5": + socks_version = socks.PROXY_TYPE_SOCKS5 + rdns = False + elif parsed.scheme == "socks5h": + socks_version = socks.PROXY_TYPE_SOCKS5 + rdns = True + elif parsed.scheme == "socks4": + socks_version = socks.PROXY_TYPE_SOCKS4 + rdns = False + elif parsed.scheme == "socks4a": + socks_version = socks.PROXY_TYPE_SOCKS4 + rdns = True + else: + raise ValueError("Unable to determine SOCKS version from %s" % proxy_url) + + self.proxy_url = proxy_url + + socks_options = { + "socks_version": socks_version, + "proxy_host": parsed.host, + "proxy_port": parsed.port, + "username": username, + "password": password, + "rdns": rdns, + } + connection_pool_kw["_socks_options"] = socks_options + + super(SOCKSProxyManager, self).__init__( + num_pools, headers, **connection_pool_kw + ) + + self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme diff --git a/openpype/hosts/fusion/vendor/urllib3/exceptions.py b/openpype/hosts/fusion/vendor/urllib3/exceptions.py new file mode 100644 index 0000000000..cba6f3f560 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/exceptions.py @@ -0,0 +1,323 @@ +from __future__ import absolute_import + +from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead + +# Base Exceptions + + +class HTTPError(Exception): + """Base exception used by this module.""" + + pass + + +class HTTPWarning(Warning): + """Base warning used by this module.""" + + pass + + +class PoolError(HTTPError): + """Base exception for errors caused within a pool.""" + + def __init__(self, pool, message): + self.pool = pool + HTTPError.__init__(self, "%s: %s" % (pool, message)) + + def __reduce__(self): + # For pickling purposes. 
+        return self.__class__, (None, None)
+
+
+class RequestError(PoolError):
+    """Base exception for PoolErrors that have associated URLs."""
+
+    def __init__(self, pool, url, message):
+        self.url = url
+        PoolError.__init__(self, pool, message)
+
+    def __reduce__(self):
+        # For pickling purposes.
+        return self.__class__, (None, self.url, None)
+
+
+class SSLError(HTTPError):
+    """Raised when SSL certificate fails in an HTTPS connection."""
+
+    pass
+
+
+class ProxyError(HTTPError):
+    """Raised when the connection to a proxy fails."""
+
+    def __init__(self, message, error, *args):
+        super(ProxyError, self).__init__(message, error, *args)
+        self.original_error = error
+
+
+class DecodeError(HTTPError):
+    """Raised when automatic decoding based on Content-Type fails."""
+
+    pass
+
+
+class ProtocolError(HTTPError):
+    """Raised when something unexpected happens mid-request/response."""
+
+    pass
+
+
+#: Renamed to ProtocolError but aliased for backwards compatibility.
+ConnectionError = ProtocolError
+
+
+# Leaf Exceptions
+
+
+class MaxRetryError(RequestError):
+    """Raised when the maximum number of retries is exceeded.
+
+    :param pool: The connection pool
+    :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
+    :param string url: The requested Url
+    :param exceptions.Exception reason: The underlying error
+
+    """
+
+    def __init__(self, pool, url, reason=None):
+        self.reason = reason
+
+        message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason)
+
+        RequestError.__init__(self, pool, url, message)
+
+
+class HostChangedError(RequestError):
+    """Raised when an existing pool gets a request for a foreign host."""
+
+    def __init__(self, pool, url, retries=3):
+        message = "Tried to open a foreign host with url: %s" % url
+        RequestError.__init__(self, pool, url, message)
+        self.retries = retries
+
+
+class TimeoutStateError(HTTPError):
+    """Raised when passing an invalid state to a timeout"""
+
+    pass
+
+
+class TimeoutError(HTTPError):
+    """Raised when a socket timeout error occurs.
+
+    Catching this error will catch both :exc:`ReadTimeoutErrors
+    <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
+    """
+
+    pass
+
+
+class ReadTimeoutError(TimeoutError, RequestError):
+    """Raised when a socket timeout occurs while receiving data from a server"""
+
+    pass
+
+
+# This timeout error does not have a URL attached and needs to inherit from the
+# base HTTPError
+class ConnectTimeoutError(TimeoutError):
+    """Raised when a socket timeout occurs while connecting to a server"""
+
+    pass
+
+
+class NewConnectionError(ConnectTimeoutError, PoolError):
+    """Raised when we fail to establish a new connection.
Usually ECONNREFUSED.""" + + pass + + +class EmptyPoolError(PoolError): + """Raised when a pool runs out of connections and no more are allowed.""" + + pass + + +class ClosedPoolError(PoolError): + """Raised when a request enters a pool after the pool has been closed.""" + + pass + + +class LocationValueError(ValueError, HTTPError): + """Raised when there is something wrong with a given URL input.""" + + pass + + +class LocationParseError(LocationValueError): + """Raised when get_host or similar fails to parse the URL input.""" + + def __init__(self, location): + message = "Failed to parse: %s" % location + HTTPError.__init__(self, message) + + self.location = location + + +class URLSchemeUnknown(LocationValueError): + """Raised when a URL input has an unsupported scheme.""" + + def __init__(self, scheme): + message = "Not supported URL scheme %s" % scheme + super(URLSchemeUnknown, self).__init__(message) + + self.scheme = scheme + + +class ResponseError(HTTPError): + """Used as a container for an error reason supplied in a MaxRetryError.""" + + GENERIC_ERROR = "too many error responses" + SPECIFIC_ERROR = "too many {status_code} error responses" + + +class SecurityWarning(HTTPWarning): + """Warned when performing security reducing actions""" + + pass + + +class SubjectAltNameWarning(SecurityWarning): + """Warned when connecting to a host with a certificate missing a SAN.""" + + pass + + +class InsecureRequestWarning(SecurityWarning): + """Warned when making an unverified HTTPS request.""" + + pass + + +class SystemTimeWarning(SecurityWarning): + """Warned when system time is suspected to be wrong""" + + pass + + +class InsecurePlatformWarning(SecurityWarning): + """Warned when certain TLS/SSL configuration is not available on a platform.""" + + pass + + +class SNIMissingWarning(HTTPWarning): + """Warned when making a HTTPS request without SNI available.""" + + pass + + +class DependencyWarning(HTTPWarning): + """ + Warned when an attempt is made to import a module with missing optional + dependencies. + """ + + pass + + +class ResponseNotChunked(ProtocolError, ValueError): + """Response needs to be chunked in order to read it as chunks.""" + + pass + + +class BodyNotHttplibCompatible(HTTPError): + """ + Body should be :class:`http.client.HTTPResponse` like + (have an fp attribute which returns raw chunks) for read_chunked(). + """ + + pass + + +class IncompleteRead(HTTPError, httplib_IncompleteRead): + """ + Response length doesn't match expected Content-Length + + Subclass of :class:`http.client.IncompleteRead` to allow int value + for ``partial`` to avoid creating large objects on streamed reads. 
+ """ + + def __init__(self, partial, expected): + super(IncompleteRead, self).__init__(partial, expected) + + def __repr__(self): + return "IncompleteRead(%i bytes read, %i more expected)" % ( + self.partial, + self.expected, + ) + + +class InvalidChunkLength(HTTPError, httplib_IncompleteRead): + """Invalid chunk length in a chunked response.""" + + def __init__(self, response, length): + super(InvalidChunkLength, self).__init__( + response.tell(), response.length_remaining + ) + self.response = response + self.length = length + + def __repr__(self): + return "InvalidChunkLength(got length %r, %i bytes read)" % ( + self.length, + self.partial, + ) + + +class InvalidHeader(HTTPError): + """The header provided was somehow invalid.""" + + pass + + +class ProxySchemeUnknown(AssertionError, URLSchemeUnknown): + """ProxyManager does not support the supplied scheme""" + + # TODO(t-8ch): Stop inheriting from AssertionError in v2.0. + + def __init__(self, scheme): + # 'localhost' is here because our URL parser parses + # localhost:8080 -> scheme=localhost, remove if we fix this. + if scheme == "localhost": + scheme = None + if scheme is None: + message = "Proxy URL had no scheme, should start with http:// or https://" + else: + message = ( + "Proxy URL had unsupported scheme %s, should use http:// or https://" + % scheme + ) + super(ProxySchemeUnknown, self).__init__(message) + + +class ProxySchemeUnsupported(ValueError): + """Fetching HTTPS resources through HTTPS proxies is unsupported""" + + pass + + +class HeaderParsingError(HTTPError): + """Raised by assert_header_parsing, but we convert it to a log.warning statement.""" + + def __init__(self, defects, unparsed_data): + message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data) + super(HeaderParsingError, self).__init__(message) + + +class UnrewindableBodyError(HTTPError): + """urllib3 encountered an error when trying to rewind a body""" + + pass diff --git a/openpype/hosts/fusion/vendor/urllib3/fields.py b/openpype/hosts/fusion/vendor/urllib3/fields.py new file mode 100644 index 0000000000..9d630f491d --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/fields.py @@ -0,0 +1,274 @@ +from __future__ import absolute_import + +import email.utils +import mimetypes +import re + +from .packages import six + + +def guess_content_type(filename, default="application/octet-stream"): + """ + Guess the "Content-Type" of a file. + + :param filename: + The filename to guess the "Content-Type" of using :mod:`mimetypes`. + :param default: + If no "Content-Type" can be guessed, default to `default`. + """ + if filename: + return mimetypes.guess_type(filename)[0] or default + return default + + +def format_header_param_rfc2231(name, value): + """ + Helper function to format and quote a single header parameter using the + strategy defined in RFC 2231. + + Particularly useful for header parameters which might contain + non-ASCII values, like file names. This follows + `RFC 2388 Section 4.4 `_. + + :param name: + The name of the parameter, a string expected to be ASCII only. + :param value: + The value of the parameter, provided as ``bytes`` or `str``. + :ret: + An RFC-2231-formatted unicode string. 
+ """ + if isinstance(value, six.binary_type): + value = value.decode("utf-8") + + if not any(ch in value for ch in '"\\\r\n'): + result = u'%s="%s"' % (name, value) + try: + result.encode("ascii") + except (UnicodeEncodeError, UnicodeDecodeError): + pass + else: + return result + + if six.PY2: # Python 2: + value = value.encode("utf-8") + + # encode_rfc2231 accepts an encoded string and returns an ascii-encoded + # string in Python 2 but accepts and returns unicode strings in Python 3 + value = email.utils.encode_rfc2231(value, "utf-8") + value = "%s*=%s" % (name, value) + + if six.PY2: # Python 2: + value = value.decode("utf-8") + + return value + + +_HTML5_REPLACEMENTS = { + u"\u0022": u"%22", + # Replace "\" with "\\". + u"\u005C": u"\u005C\u005C", +} + +# All control characters from 0x00 to 0x1F *except* 0x1B. +_HTML5_REPLACEMENTS.update( + { + six.unichr(cc): u"%{:02X}".format(cc) + for cc in range(0x00, 0x1F + 1) + if cc not in (0x1B,) + } +) + + +def _replace_multiple(value, needles_and_replacements): + def replacer(match): + return needles_and_replacements[match.group(0)] + + pattern = re.compile( + r"|".join([re.escape(needle) for needle in needles_and_replacements.keys()]) + ) + + result = pattern.sub(replacer, value) + + return result + + +def format_header_param_html5(name, value): + """ + Helper function to format and quote a single header parameter using the + HTML5 strategy. + + Particularly useful for header parameters which might contain + non-ASCII values, like file names. This follows the `HTML5 Working Draft + Section 4.10.22.7`_ and matches the behavior of curl and modern browsers. + + .. _HTML5 Working Draft Section 4.10.22.7: + https://w3c.github.io/html/sec-forms.html#multipart-form-data + + :param name: + The name of the parameter, a string expected to be ASCII only. + :param value: + The value of the parameter, provided as ``bytes`` or `str``. + :ret: + A unicode string, stripped of troublesome characters. + """ + if isinstance(value, six.binary_type): + value = value.decode("utf-8") + + value = _replace_multiple(value, _HTML5_REPLACEMENTS) + + return u'%s="%s"' % (name, value) + + +# For backwards-compatibility. +format_header_param = format_header_param_html5 + + +class RequestField(object): + """ + A data container for request body parameters. + + :param name: + The name of this request field. Must be unicode. + :param data: + The data/value body. + :param filename: + An optional filename of the request field. Must be unicode. + :param headers: + An optional dict-like object of headers to initially use for the field. + :param header_formatter: + An optional callable that is used to encode and format the headers. By + default, this is :func:`format_header_param_html5`. + """ + + def __init__( + self, + name, + data, + filename=None, + headers=None, + header_formatter=format_header_param_html5, + ): + self._name = name + self._filename = filename + self.data = data + self.headers = {} + if headers: + self.headers = dict(headers) + self.header_formatter = header_formatter + + @classmethod + def from_tuples(cls, fieldname, value, header_formatter=format_header_param_html5): + """ + A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters. + + Supports constructing :class:`~urllib3.fields.RequestField` from + parameter of key/value strings AND key/filetuple. A filetuple is a + (filename, data, MIME type) tuple where the MIME type is optional. 
+ For example:: + + 'foo': 'bar', + 'fakefile': ('foofile.txt', 'contents of foofile'), + 'realfile': ('barfile.txt', open('realfile').read()), + 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), + 'nonamefile': 'contents of nonamefile field', + + Field names and filenames must be unicode. + """ + if isinstance(value, tuple): + if len(value) == 3: + filename, data, content_type = value + else: + filename, data = value + content_type = guess_content_type(filename) + else: + filename = None + content_type = None + data = value + + request_param = cls( + fieldname, data, filename=filename, header_formatter=header_formatter + ) + request_param.make_multipart(content_type=content_type) + + return request_param + + def _render_part(self, name, value): + """ + Overridable helper function to format a single header parameter. By + default, this calls ``self.header_formatter``. + + :param name: + The name of the parameter, a string expected to be ASCII only. + :param value: + The value of the parameter, provided as a unicode string. + """ + + return self.header_formatter(name, value) + + def _render_parts(self, header_parts): + """ + Helper function to format and quote a single header. + + Useful for single headers that are composed of multiple items. E.g., + 'Content-Disposition' fields. + + :param header_parts: + A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format + as `k1="v1"; k2="v2"; ...`. + """ + parts = [] + iterable = header_parts + if isinstance(header_parts, dict): + iterable = header_parts.items() + + for name, value in iterable: + if value is not None: + parts.append(self._render_part(name, value)) + + return u"; ".join(parts) + + def render_headers(self): + """ + Renders the headers for this request field. + """ + lines = [] + + sort_keys = ["Content-Disposition", "Content-Type", "Content-Location"] + for sort_key in sort_keys: + if self.headers.get(sort_key, False): + lines.append(u"%s: %s" % (sort_key, self.headers[sort_key])) + + for header_name, header_value in self.headers.items(): + if header_name not in sort_keys: + if header_value: + lines.append(u"%s: %s" % (header_name, header_value)) + + lines.append(u"\r\n") + return u"\r\n".join(lines) + + def make_multipart( + self, content_disposition=None, content_type=None, content_location=None + ): + """ + Makes this request field into a multipart request field. + + This method overrides "Content-Disposition", "Content-Type" and + "Content-Location" headers to the request parameter. + + :param content_type: + The 'Content-Type' of the request body. + :param content_location: + The 'Content-Location' of the request body. 
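+
+        A typical use (illustrative)::
+
+            field = RequestField(u"upload", b"data", filename=u"a.txt")
+            field.make_multipart(content_type=u"text/plain")
+            # field.headers["Content-Disposition"] is now
+            # 'form-data; name="upload"; filename="a.txt"'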
+ + """ + self.headers["Content-Disposition"] = content_disposition or u"form-data" + self.headers["Content-Disposition"] += u"; ".join( + [ + u"", + self._render_parts( + ((u"name", self._name), (u"filename", self._filename)) + ), + ] + ) + self.headers["Content-Type"] = content_type + self.headers["Content-Location"] = content_location diff --git a/openpype/hosts/fusion/vendor/urllib3/filepost.py b/openpype/hosts/fusion/vendor/urllib3/filepost.py new file mode 100644 index 0000000000..36c9252c64 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/filepost.py @@ -0,0 +1,98 @@ +from __future__ import absolute_import + +import binascii +import codecs +import os +from io import BytesIO + +from .fields import RequestField +from .packages import six +from .packages.six import b + +writer = codecs.lookup("utf-8")[3] + + +def choose_boundary(): + """ + Our embarrassingly-simple replacement for mimetools.choose_boundary. + """ + boundary = binascii.hexlify(os.urandom(16)) + if not six.PY2: + boundary = boundary.decode("ascii") + return boundary + + +def iter_field_objects(fields): + """ + Iterate over fields. + + Supports list of (k, v) tuples and dicts, and lists of + :class:`~urllib3.fields.RequestField`. + + """ + if isinstance(fields, dict): + i = six.iteritems(fields) + else: + i = iter(fields) + + for field in i: + if isinstance(field, RequestField): + yield field + else: + yield RequestField.from_tuples(*field) + + +def iter_fields(fields): + """ + .. deprecated:: 1.6 + + Iterate over fields. + + The addition of :class:`~urllib3.fields.RequestField` makes this function + obsolete. Instead, use :func:`iter_field_objects`, which returns + :class:`~urllib3.fields.RequestField` objects. + + Supports list of (k, v) tuples and dicts. + """ + if isinstance(fields, dict): + return ((k, v) for k, v in six.iteritems(fields)) + + return ((k, v) for k, v in fields) + + +def encode_multipart_formdata(fields, boundary=None): + """ + Encode a dictionary of ``fields`` using the multipart/form-data MIME format. + + :param fields: + Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). + + :param boundary: + If not specified, then a random boundary will be generated using + :func:`urllib3.filepost.choose_boundary`. + """ + body = BytesIO() + if boundary is None: + boundary = choose_boundary() + + for field in iter_field_objects(fields): + body.write(b("--%s\r\n" % (boundary))) + + writer(body).write(field.render_headers()) + data = field.data + + if isinstance(data, int): + data = str(data) # Backwards compatibility + + if isinstance(data, six.text_type): + writer(body).write(data) + else: + body.write(data) + + body.write(b"\r\n") + + body.write(b("--%s--\r\n" % (boundary))) + + content_type = str("multipart/form-data; boundary=%s" % boundary) + + return body.getvalue(), content_type diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/__init__.py b/openpype/hosts/fusion/vendor/urllib3/packages/__init__.py new file mode 100644 index 0000000000..fce4caa65d --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/packages/__init__.py @@ -0,0 +1,5 @@ +from __future__ import absolute_import + +from . 
import ssl_match_hostname + +__all__ = ("ssl_match_hostname",) diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/backports/__init__.py b/openpype/hosts/fusion/vendor/urllib3/packages/backports/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/backports/makefile.py b/openpype/hosts/fusion/vendor/urllib3/packages/backports/makefile.py new file mode 100644 index 0000000000..b8fb2154b6 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/packages/backports/makefile.py @@ -0,0 +1,51 @@ +# -*- coding: utf-8 -*- +""" +backports.makefile +~~~~~~~~~~~~~~~~~~ + +Backports the Python 3 ``socket.makefile`` method for use with anything that +wants to create a "fake" socket object. +""" +import io +from socket import SocketIO + + +def backport_makefile( + self, mode="r", buffering=None, encoding=None, errors=None, newline=None +): + """ + Backport of ``socket.makefile`` from Python 3.5. + """ + if not set(mode) <= {"r", "w", "b"}: + raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,)) + writing = "w" in mode + reading = "r" in mode or not writing + assert reading or writing + binary = "b" in mode + rawmode = "" + if reading: + rawmode += "r" + if writing: + rawmode += "w" + raw = SocketIO(self, rawmode) + self._makefile_refs += 1 + if buffering is None: + buffering = -1 + if buffering < 0: + buffering = io.DEFAULT_BUFFER_SIZE + if buffering == 0: + if not binary: + raise ValueError("unbuffered streams must be binary") + return raw + if reading and writing: + buffer = io.BufferedRWPair(raw, raw, buffering) + elif reading: + buffer = io.BufferedReader(raw, buffering) + else: + assert writing + buffer = io.BufferedWriter(raw, buffering) + if binary: + return buffer + text = io.TextIOWrapper(buffer, encoding, errors, newline) + text.mode = mode + return text diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/six.py b/openpype/hosts/fusion/vendor/urllib3/packages/six.py new file mode 100644 index 0000000000..ba50acb062 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/packages/six.py @@ -0,0 +1,1077 @@ +# Copyright (c) 2010-2020 Benjamin Peterson +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. 
+ +"""Utilities for writing code that runs on Python 2 and 3""" + +from __future__ import absolute_import + +import functools +import itertools +import operator +import sys +import types + +__author__ = "Benjamin Peterson " +__version__ = "1.16.0" + + +# Useful for very coarse version differentiation. +PY2 = sys.version_info[0] == 2 +PY3 = sys.version_info[0] == 3 +PY34 = sys.version_info[0:2] >= (3, 4) + +if PY3: + string_types = (str,) + integer_types = (int,) + class_types = (type,) + text_type = str + binary_type = bytes + + MAXSIZE = sys.maxsize +else: + string_types = (basestring,) + integer_types = (int, long) + class_types = (type, types.ClassType) + text_type = unicode + binary_type = str + + if sys.platform.startswith("java"): + # Jython always uses 32 bits. + MAXSIZE = int((1 << 31) - 1) + else: + # It's possible to have sizeof(long) != sizeof(Py_ssize_t). + class X(object): + def __len__(self): + return 1 << 31 + + try: + len(X()) + except OverflowError: + # 32-bit + MAXSIZE = int((1 << 31) - 1) + else: + # 64-bit + MAXSIZE = int((1 << 63) - 1) + del X + +if PY34: + from importlib.util import spec_from_loader +else: + spec_from_loader = None + + +def _add_doc(func, doc): + """Add documentation to a function.""" + func.__doc__ = doc + + +def _import_module(name): + """Import module, returning the module after the last dot.""" + __import__(name) + return sys.modules[name] + + +class _LazyDescr(object): + def __init__(self, name): + self.name = name + + def __get__(self, obj, tp): + result = self._resolve() + setattr(obj, self.name, result) # Invokes __set__. + try: + # This is a bit ugly, but it avoids running this again by + # removing this descriptor. + delattr(obj.__class__, self.name) + except AttributeError: + pass + return result + + +class MovedModule(_LazyDescr): + def __init__(self, name, old, new=None): + super(MovedModule, self).__init__(name) + if PY3: + if new is None: + new = name + self.mod = new + else: + self.mod = old + + def _resolve(self): + return _import_module(self.mod) + + def __getattr__(self, attr): + _module = self._resolve() + value = getattr(_module, attr) + setattr(self, attr, value) + return value + + +class _LazyModule(types.ModuleType): + def __init__(self, name): + super(_LazyModule, self).__init__(name) + self.__doc__ = self.__class__.__doc__ + + def __dir__(self): + attrs = ["__doc__", "__name__"] + attrs += [attr.name for attr in self._moved_attributes] + return attrs + + # Subclasses should override this + _moved_attributes = [] + + +class MovedAttribute(_LazyDescr): + def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): + super(MovedAttribute, self).__init__(name) + if PY3: + if new_mod is None: + new_mod = name + self.mod = new_mod + if new_attr is None: + if old_attr is None: + new_attr = name + else: + new_attr = old_attr + self.attr = new_attr + else: + self.mod = old_mod + if old_attr is None: + old_attr = name + self.attr = old_attr + + def _resolve(self): + module = _import_module(self.mod) + return getattr(module, self.attr) + + +class _SixMetaPathImporter(object): + + """ + A meta path importer to import six.moves and its submodules. + + This class implements a PEP302 finder and loader. It should be compatible + with Python 2.5 and all existing versions of Python3 + """ + + def __init__(self, six_module_name): + self.name = six_module_name + self.known_modules = {} + + def _add_module(self, mod, *fullnames): + for fullname in fullnames: + self.known_modules[self.name + "." 
+ fullname] = mod + + def _get_module(self, fullname): + return self.known_modules[self.name + "." + fullname] + + def find_module(self, fullname, path=None): + if fullname in self.known_modules: + return self + return None + + def find_spec(self, fullname, path, target=None): + if fullname in self.known_modules: + return spec_from_loader(fullname, self) + return None + + def __get_module(self, fullname): + try: + return self.known_modules[fullname] + except KeyError: + raise ImportError("This loader does not know module " + fullname) + + def load_module(self, fullname): + try: + # in case of a reload + return sys.modules[fullname] + except KeyError: + pass + mod = self.__get_module(fullname) + if isinstance(mod, MovedModule): + mod = mod._resolve() + else: + mod.__loader__ = self + sys.modules[fullname] = mod + return mod + + def is_package(self, fullname): + """ + Return true, if the named module is a package. + + We need this method to get correct spec objects with + Python 3.4 (see PEP451) + """ + return hasattr(self.__get_module(fullname), "__path__") + + def get_code(self, fullname): + """Return None + + Required, if is_package is implemented""" + self.__get_module(fullname) # eventually raises ImportError + return None + + get_source = get_code # same as get_code + + def create_module(self, spec): + return self.load_module(spec.name) + + def exec_module(self, module): + pass + + +_importer = _SixMetaPathImporter(__name__) + + +class _MovedItems(_LazyModule): + + """Lazy loading of moved objects""" + + __path__ = [] # mark as package + + +_moved_attributes = [ + MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), + MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), + MovedAttribute( + "filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse" + ), + MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), + MovedAttribute("intern", "__builtin__", "sys"), + MovedAttribute("map", "itertools", "builtins", "imap", "map"), + MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), + MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), + MovedAttribute("getoutput", "commands", "subprocess"), + MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), + MovedAttribute( + "reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload" + ), + MovedAttribute("reduce", "__builtin__", "functools"), + MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), + MovedAttribute("StringIO", "StringIO", "io"), + MovedAttribute("UserDict", "UserDict", "collections"), + MovedAttribute("UserList", "UserList", "collections"), + MovedAttribute("UserString", "UserString", "collections"), + MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), + MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), + MovedAttribute( + "zip_longest", "itertools", "itertools", "izip_longest", "zip_longest" + ), + MovedModule("builtins", "__builtin__"), + MovedModule("configparser", "ConfigParser"), + MovedModule( + "collections_abc", + "collections", + "collections.abc" if sys.version_info >= (3, 3) else "collections", + ), + MovedModule("copyreg", "copy_reg"), + MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), + MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), + MovedModule( + "_dummy_thread", + "dummy_thread", + "_dummy_thread" if sys.version_info < (3, 9) else "_thread", + ), + MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), + MovedModule("http_cookies", "Cookie", "http.cookies"), + 
MovedModule("html_entities", "htmlentitydefs", "html.entities"), + MovedModule("html_parser", "HTMLParser", "html.parser"), + MovedModule("http_client", "httplib", "http.client"), + MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), + MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), + MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), + MovedModule( + "email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart" + ), + MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), + MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), + MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), + MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), + MovedModule("cPickle", "cPickle", "pickle"), + MovedModule("queue", "Queue"), + MovedModule("reprlib", "repr"), + MovedModule("socketserver", "SocketServer"), + MovedModule("_thread", "thread", "_thread"), + MovedModule("tkinter", "Tkinter"), + MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), + MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), + MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), + MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), + MovedModule("tkinter_tix", "Tix", "tkinter.tix"), + MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), + MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), + MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), + MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), + MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), + MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), + MovedModule("tkinter_font", "tkFont", "tkinter.font"), + MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), + MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), + MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), + MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), + MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), + MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), + MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), + MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), +] +# Add windows specific modules. +if sys.platform == "win32": + _moved_attributes += [ + MovedModule("winreg", "_winreg"), + ] + +for attr in _moved_attributes: + setattr(_MovedItems, attr.name, attr) + if isinstance(attr, MovedModule): + _importer._add_module(attr, "moves." 
+ attr.name) +del attr + +_MovedItems._moved_attributes = _moved_attributes + +moves = _MovedItems(__name__ + ".moves") +_importer._add_module(moves, "moves") + + +class Module_six_moves_urllib_parse(_LazyModule): + + """Lazy loading of moved objects in six.moves.urllib_parse""" + + +_urllib_parse_moved_attributes = [ + MovedAttribute("ParseResult", "urlparse", "urllib.parse"), + MovedAttribute("SplitResult", "urlparse", "urllib.parse"), + MovedAttribute("parse_qs", "urlparse", "urllib.parse"), + MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), + MovedAttribute("urldefrag", "urlparse", "urllib.parse"), + MovedAttribute("urljoin", "urlparse", "urllib.parse"), + MovedAttribute("urlparse", "urlparse", "urllib.parse"), + MovedAttribute("urlsplit", "urlparse", "urllib.parse"), + MovedAttribute("urlunparse", "urlparse", "urllib.parse"), + MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), + MovedAttribute("quote", "urllib", "urllib.parse"), + MovedAttribute("quote_plus", "urllib", "urllib.parse"), + MovedAttribute("unquote", "urllib", "urllib.parse"), + MovedAttribute("unquote_plus", "urllib", "urllib.parse"), + MovedAttribute( + "unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes" + ), + MovedAttribute("urlencode", "urllib", "urllib.parse"), + MovedAttribute("splitquery", "urllib", "urllib.parse"), + MovedAttribute("splittag", "urllib", "urllib.parse"), + MovedAttribute("splituser", "urllib", "urllib.parse"), + MovedAttribute("splitvalue", "urllib", "urllib.parse"), + MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), + MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), + MovedAttribute("uses_params", "urlparse", "urllib.parse"), + MovedAttribute("uses_query", "urlparse", "urllib.parse"), + MovedAttribute("uses_relative", "urlparse", "urllib.parse"), +] +for attr in _urllib_parse_moved_attributes: + setattr(Module_six_moves_urllib_parse, attr.name, attr) +del attr + +Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes + +_importer._add_module( + Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), + "moves.urllib_parse", + "moves.urllib.parse", +) + + +class Module_six_moves_urllib_error(_LazyModule): + + """Lazy loading of moved objects in six.moves.urllib_error""" + + +_urllib_error_moved_attributes = [ + MovedAttribute("URLError", "urllib2", "urllib.error"), + MovedAttribute("HTTPError", "urllib2", "urllib.error"), + MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), +] +for attr in _urllib_error_moved_attributes: + setattr(Module_six_moves_urllib_error, attr.name, attr) +del attr + +Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes + +_importer._add_module( + Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), + "moves.urllib_error", + "moves.urllib.error", +) + + +class Module_six_moves_urllib_request(_LazyModule): + + """Lazy loading of moved objects in six.moves.urllib_request""" + + +_urllib_request_moved_attributes = [ + MovedAttribute("urlopen", "urllib2", "urllib.request"), + MovedAttribute("install_opener", "urllib2", "urllib.request"), + MovedAttribute("build_opener", "urllib2", "urllib.request"), + MovedAttribute("pathname2url", "urllib", "urllib.request"), + MovedAttribute("url2pathname", "urllib", "urllib.request"), + MovedAttribute("getproxies", "urllib", "urllib.request"), + MovedAttribute("Request", "urllib2", "urllib.request"), + MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), + 
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), + MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), + MovedAttribute("BaseHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), + MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), + MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), + MovedAttribute("FileHandler", "urllib2", "urllib.request"), + MovedAttribute("FTPHandler", "urllib2", "urllib.request"), + MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), + MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), + MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), + MovedAttribute("urlretrieve", "urllib", "urllib.request"), + MovedAttribute("urlcleanup", "urllib", "urllib.request"), + MovedAttribute("URLopener", "urllib", "urllib.request"), + MovedAttribute("FancyURLopener", "urllib", "urllib.request"), + MovedAttribute("proxy_bypass", "urllib", "urllib.request"), + MovedAttribute("parse_http_list", "urllib2", "urllib.request"), + MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), +] +for attr in _urllib_request_moved_attributes: + setattr(Module_six_moves_urllib_request, attr.name, attr) +del attr + +Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes + +_importer._add_module( + Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), + "moves.urllib_request", + "moves.urllib.request", +) + + +class Module_six_moves_urllib_response(_LazyModule): + + """Lazy loading of moved objects in six.moves.urllib_response""" + + +_urllib_response_moved_attributes = [ + MovedAttribute("addbase", "urllib", "urllib.response"), + MovedAttribute("addclosehook", "urllib", "urllib.response"), + MovedAttribute("addinfo", "urllib", "urllib.response"), + MovedAttribute("addinfourl", "urllib", "urllib.response"), +] +for attr in _urllib_response_moved_attributes: + setattr(Module_six_moves_urllib_response, attr.name, attr) +del attr + +Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes + +_importer._add_module( + Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), + "moves.urllib_response", + "moves.urllib.response", +) + + +class Module_six_moves_urllib_robotparser(_LazyModule): + + """Lazy loading of moved objects in six.moves.urllib_robotparser""" + + +_urllib_robotparser_moved_attributes = [ + MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), +] +for attr in _urllib_robotparser_moved_attributes: + setattr(Module_six_moves_urllib_robotparser, attr.name, attr) +del attr + +Module_six_moves_urllib_robotparser._moved_attributes = ( + _urllib_robotparser_moved_attributes +) + +_importer._add_module( + Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), + 
"moves.urllib_robotparser", + "moves.urllib.robotparser", +) + + +class Module_six_moves_urllib(types.ModuleType): + + """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" + + __path__ = [] # mark as package + parse = _importer._get_module("moves.urllib_parse") + error = _importer._get_module("moves.urllib_error") + request = _importer._get_module("moves.urllib_request") + response = _importer._get_module("moves.urllib_response") + robotparser = _importer._get_module("moves.urllib_robotparser") + + def __dir__(self): + return ["parse", "error", "request", "response", "robotparser"] + + +_importer._add_module( + Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib" +) + + +def add_move(move): + """Add an item to six.moves.""" + setattr(_MovedItems, move.name, move) + + +def remove_move(name): + """Remove item from six.moves.""" + try: + delattr(_MovedItems, name) + except AttributeError: + try: + del moves.__dict__[name] + except KeyError: + raise AttributeError("no such move, %r" % (name,)) + + +if PY3: + _meth_func = "__func__" + _meth_self = "__self__" + + _func_closure = "__closure__" + _func_code = "__code__" + _func_defaults = "__defaults__" + _func_globals = "__globals__" +else: + _meth_func = "im_func" + _meth_self = "im_self" + + _func_closure = "func_closure" + _func_code = "func_code" + _func_defaults = "func_defaults" + _func_globals = "func_globals" + + +try: + advance_iterator = next +except NameError: + + def advance_iterator(it): + return it.next() + + +next = advance_iterator + + +try: + callable = callable +except NameError: + + def callable(obj): + return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) + + +if PY3: + + def get_unbound_function(unbound): + return unbound + + create_bound_method = types.MethodType + + def create_unbound_method(func, cls): + return func + + Iterator = object +else: + + def get_unbound_function(unbound): + return unbound.im_func + + def create_bound_method(func, obj): + return types.MethodType(func, obj, obj.__class__) + + def create_unbound_method(func, cls): + return types.MethodType(func, None, cls) + + class Iterator(object): + def next(self): + return type(self).__next__(self) + + callable = callable +_add_doc( + get_unbound_function, """Get the function out of a possibly unbound function""" +) + + +get_method_function = operator.attrgetter(_meth_func) +get_method_self = operator.attrgetter(_meth_self) +get_function_closure = operator.attrgetter(_func_closure) +get_function_code = operator.attrgetter(_func_code) +get_function_defaults = operator.attrgetter(_func_defaults) +get_function_globals = operator.attrgetter(_func_globals) + + +if PY3: + + def iterkeys(d, **kw): + return iter(d.keys(**kw)) + + def itervalues(d, **kw): + return iter(d.values(**kw)) + + def iteritems(d, **kw): + return iter(d.items(**kw)) + + def iterlists(d, **kw): + return iter(d.lists(**kw)) + + viewkeys = operator.methodcaller("keys") + + viewvalues = operator.methodcaller("values") + + viewitems = operator.methodcaller("items") +else: + + def iterkeys(d, **kw): + return d.iterkeys(**kw) + + def itervalues(d, **kw): + return d.itervalues(**kw) + + def iteritems(d, **kw): + return d.iteritems(**kw) + + def iterlists(d, **kw): + return d.iterlists(**kw) + + viewkeys = operator.methodcaller("viewkeys") + + viewvalues = operator.methodcaller("viewvalues") + + viewitems = operator.methodcaller("viewitems") + +_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") +_add_doc(itervalues, "Return 
an iterator over the values of a dictionary.") +_add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") +_add_doc( + iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary." +) + + +if PY3: + + def b(s): + return s.encode("latin-1") + + def u(s): + return s + + unichr = chr + import struct + + int2byte = struct.Struct(">B").pack + del struct + byte2int = operator.itemgetter(0) + indexbytes = operator.getitem + iterbytes = iter + import io + + StringIO = io.StringIO + BytesIO = io.BytesIO + del io + _assertCountEqual = "assertCountEqual" + if sys.version_info[1] <= 1: + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" + _assertNotRegex = "assertNotRegexpMatches" + else: + _assertRaisesRegex = "assertRaisesRegex" + _assertRegex = "assertRegex" + _assertNotRegex = "assertNotRegex" +else: + + def b(s): + return s + + # Workaround for standalone backslash + + def u(s): + return unicode(s.replace(r"\\", r"\\\\"), "unicode_escape") + + unichr = unichr + int2byte = chr + + def byte2int(bs): + return ord(bs[0]) + + def indexbytes(buf, i): + return ord(buf[i]) + + iterbytes = functools.partial(itertools.imap, ord) + import StringIO + + StringIO = BytesIO = StringIO.StringIO + _assertCountEqual = "assertItemsEqual" + _assertRaisesRegex = "assertRaisesRegexp" + _assertRegex = "assertRegexpMatches" + _assertNotRegex = "assertNotRegexpMatches" +_add_doc(b, """Byte literal""") +_add_doc(u, """Text literal""") + + +def assertCountEqual(self, *args, **kwargs): + return getattr(self, _assertCountEqual)(*args, **kwargs) + + +def assertRaisesRegex(self, *args, **kwargs): + return getattr(self, _assertRaisesRegex)(*args, **kwargs) + + +def assertRegex(self, *args, **kwargs): + return getattr(self, _assertRegex)(*args, **kwargs) + + +def assertNotRegex(self, *args, **kwargs): + return getattr(self, _assertNotRegex)(*args, **kwargs) + + +if PY3: + exec_ = getattr(moves.builtins, "exec") + + def reraise(tp, value, tb=None): + try: + if value is None: + value = tp() + if value.__traceback__ is not tb: + raise value.with_traceback(tb) + raise value + finally: + value = None + tb = None + + +else: + + def exec_(_code_, _globs_=None, _locs_=None): + """Execute code in a namespace.""" + if _globs_ is None: + frame = sys._getframe(1) + _globs_ = frame.f_globals + if _locs_ is None: + _locs_ = frame.f_locals + del frame + elif _locs_ is None: + _locs_ = _globs_ + exec ("""exec _code_ in _globs_, _locs_""") + + exec_( + """def reraise(tp, value, tb=None): + try: + raise tp, value, tb + finally: + tb = None +""" + ) + + +if sys.version_info[:2] > (3,): + exec_( + """def raise_from(value, from_value): + try: + raise value from from_value + finally: + value = None +""" + ) +else: + + def raise_from(value, from_value): + raise value + + +print_ = getattr(moves.builtins, "print", None) +if print_ is None: + + def print_(*args, **kwargs): + """The new-style print function for Python 2.4 and 2.5.""" + fp = kwargs.pop("file", sys.stdout) + if fp is None: + return + + def write(data): + if not isinstance(data, basestring): + data = str(data) + # If the file has an encoding, encode unicode with it. 
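+            # (In Python 2, objects such as sys.stdout are ``file`` instances
+            # whose ``encoding`` attribute is set when attached to a terminal.)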
+ if ( + isinstance(fp, file) + and isinstance(data, unicode) + and fp.encoding is not None + ): + errors = getattr(fp, "errors", None) + if errors is None: + errors = "strict" + data = data.encode(fp.encoding, errors) + fp.write(data) + + want_unicode = False + sep = kwargs.pop("sep", None) + if sep is not None: + if isinstance(sep, unicode): + want_unicode = True + elif not isinstance(sep, str): + raise TypeError("sep must be None or a string") + end = kwargs.pop("end", None) + if end is not None: + if isinstance(end, unicode): + want_unicode = True + elif not isinstance(end, str): + raise TypeError("end must be None or a string") + if kwargs: + raise TypeError("invalid keyword arguments to print()") + if not want_unicode: + for arg in args: + if isinstance(arg, unicode): + want_unicode = True + break + if want_unicode: + newline = unicode("\n") + space = unicode(" ") + else: + newline = "\n" + space = " " + if sep is None: + sep = space + if end is None: + end = newline + for i, arg in enumerate(args): + if i: + write(sep) + write(arg) + write(end) + + +if sys.version_info[:2] < (3, 3): + _print = print_ + + def print_(*args, **kwargs): + fp = kwargs.get("file", sys.stdout) + flush = kwargs.pop("flush", False) + _print(*args, **kwargs) + if flush and fp is not None: + fp.flush() + + +_add_doc(reraise, """Reraise an exception.""") + +if sys.version_info[0:2] < (3, 4): + # This does exactly the same what the :func:`py3:functools.update_wrapper` + # function does on Python versions after 3.2. It sets the ``__wrapped__`` + # attribute on ``wrapper`` object and it doesn't raise an error if any of + # the attributes mentioned in ``assigned`` and ``updated`` are missing on + # ``wrapped`` object. + def _update_wrapper( + wrapper, + wrapped, + assigned=functools.WRAPPER_ASSIGNMENTS, + updated=functools.WRAPPER_UPDATES, + ): + for attr in assigned: + try: + value = getattr(wrapped, attr) + except AttributeError: + continue + else: + setattr(wrapper, attr, value) + for attr in updated: + getattr(wrapper, attr).update(getattr(wrapped, attr, {})) + wrapper.__wrapped__ = wrapped + return wrapper + + _update_wrapper.__doc__ = functools.update_wrapper.__doc__ + + def wraps( + wrapped, + assigned=functools.WRAPPER_ASSIGNMENTS, + updated=functools.WRAPPER_UPDATES, + ): + return functools.partial( + _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated + ) + + wraps.__doc__ = functools.wraps.__doc__ + +else: + wraps = functools.wraps + + +def with_metaclass(meta, *bases): + """Create a base class with a metaclass.""" + # This requires a bit of explanation: the basic idea is to make a dummy + # metaclass for one level of class instantiation that replaces itself with + # the actual metaclass. + class metaclass(type): + def __new__(cls, name, this_bases, d): + if sys.version_info[:2] >= (3, 7): + # This version introduced PEP 560 that requires a bit + # of extra care (we mimic what is done by __build_class__). 
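+                # types.resolve_bases() applies any __mro_entries__ hooks
+                # (PEP 560), e.g. so that typing generics can appear in bases.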
+ resolved_bases = types.resolve_bases(bases) + if resolved_bases is not bases: + d["__orig_bases__"] = bases + else: + resolved_bases = bases + return meta(name, resolved_bases, d) + + @classmethod + def __prepare__(cls, name, this_bases): + return meta.__prepare__(name, bases) + + return type.__new__(metaclass, "temporary_class", (), {}) + + +def add_metaclass(metaclass): + """Class decorator for creating a class with a metaclass.""" + + def wrapper(cls): + orig_vars = cls.__dict__.copy() + slots = orig_vars.get("__slots__") + if slots is not None: + if isinstance(slots, str): + slots = [slots] + for slots_var in slots: + orig_vars.pop(slots_var) + orig_vars.pop("__dict__", None) + orig_vars.pop("__weakref__", None) + if hasattr(cls, "__qualname__"): + orig_vars["__qualname__"] = cls.__qualname__ + return metaclass(cls.__name__, cls.__bases__, orig_vars) + + return wrapper + + +def ensure_binary(s, encoding="utf-8", errors="strict"): + """Coerce **s** to six.binary_type. + + For Python 2: + - `unicode` -> encoded to `str` + - `str` -> `str` + + For Python 3: + - `str` -> encoded to `bytes` + - `bytes` -> `bytes` + """ + if isinstance(s, binary_type): + return s + if isinstance(s, text_type): + return s.encode(encoding, errors) + raise TypeError("not expecting type '%s'" % type(s)) + + +def ensure_str(s, encoding="utf-8", errors="strict"): + """Coerce *s* to `str`. + + For Python 2: + - `unicode` -> encoded to `str` + - `str` -> `str` + + For Python 3: + - `str` -> `str` + - `bytes` -> decoded to `str` + """ + # Optimization: Fast return for the common case. + if type(s) is str: + return s + if PY2 and isinstance(s, text_type): + return s.encode(encoding, errors) + elif PY3 and isinstance(s, binary_type): + return s.decode(encoding, errors) + elif not isinstance(s, (text_type, binary_type)): + raise TypeError("not expecting type '%s'" % type(s)) + return s + + +def ensure_text(s, encoding="utf-8", errors="strict"): + """Coerce *s* to six.text_type. + + For Python 2: + - `unicode` -> `unicode` + - `str` -> `unicode` + + For Python 3: + - `str` -> `str` + - `bytes` -> decoded to `str` + """ + if isinstance(s, binary_type): + return s.decode(encoding, errors) + elif isinstance(s, text_type): + return s + else: + raise TypeError("not expecting type '%s'" % type(s)) + + +def python_2_unicode_compatible(klass): + """ + A class decorator that defines __unicode__ and __str__ methods under Python 2. + Under Python 3 it does nothing. + + To support Python 2 and 3 with a single code base, define a __str__ method + returning text and apply this decorator to the class. + """ + if PY2: + if "__str__" not in klass.__dict__: + raise ValueError( + "@python_2_unicode_compatible cannot be applied " + "to %s because it doesn't define __str__()." % klass.__name__ + ) + klass.__unicode__ = klass.__str__ + klass.__str__ = lambda self: self.__unicode__().encode("utf-8") + return klass + + +# Complete the moves implementation. +# This code is at the end of this module to speed up module loading. +# Turn this module into a package. +__path__ = [] # required for PEP 302 and PEP 451 +__package__ = __name__ # see PEP 366 @ReservedAssignment +if globals().get("__spec__") is not None: + __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable +# Remove other six meta path importers, since they cause problems. This can +# happen if six is removed from sys.modules and then reloaded. (Setuptools does +# this for some reason.) 
+if sys.meta_path: + for i, importer in enumerate(sys.meta_path): + # Here's some real nastiness: Another "instance" of the six module might + # be floating around. Therefore, we can't use isinstance() to check for + # the six meta path importer, since the other six instance will have + # inserted an importer with different class. + if ( + type(importer).__name__ == "_SixMetaPathImporter" + and importer.name == __name__ + ): + del sys.meta_path[i] + break + del i, importer +# Finally, add the importer to the meta path import hook. +sys.meta_path.append(_importer) diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/__init__.py b/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/__init__.py new file mode 100644 index 0000000000..ef3fde5206 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/__init__.py @@ -0,0 +1,24 @@ +import sys + +try: + # Our match_hostname function is the same as 3.10's, so we only want to + # import the match_hostname function if it's at least that good. + # We also fallback on Python 3.10+ because our code doesn't emit + # deprecation warnings and is the same as Python 3.10 otherwise. + if sys.version_info < (3, 5) or sys.version_info >= (3, 10): + raise ImportError("Fallback to vendored code") + + from ssl import CertificateError, match_hostname +except ImportError: + try: + # Backport of the function from a pypi module + from backports.ssl_match_hostname import ( # type: ignore + CertificateError, + match_hostname, + ) + except ImportError: + # Our vendored copy + from ._implementation import CertificateError, match_hostname # type: ignore + +# Not needed, but documenting what we provide. +__all__ = ("CertificateError", "match_hostname") diff --git a/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/_implementation.py b/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/_implementation.py new file mode 100644 index 0000000000..689208d3c6 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/packages/ssl_match_hostname/_implementation.py @@ -0,0 +1,160 @@ +"""The match_hostname() function from Python 3.3.3, essential when using SSL.""" + +# Note: This file is under the PSF license as the code comes from the python +# stdlib. http://docs.python.org/3/license.html + +import re +import sys + +# ipaddress has been backported to 2.6+ in pypi. If it is installed on the +# system, use it to handle IPAddress ServerAltnames (this was added in +# python-3.5) otherwise only do DNS matching. This allows +# backports.ssl_match_hostname to continue to be used in Python 2.7. +try: + import ipaddress +except ImportError: + ipaddress = None + +__version__ = "3.5.0.1" + + +class CertificateError(ValueError): + pass + + +def _dnsname_match(dn, hostname, max_wildcards=1): + """Matching according to RFC 6125, section 6.4.3 + + http://tools.ietf.org/html/rfc6125#section-6.4.3 + """ + pats = [] + if not dn: + return False + + # Ported from python3-syntax: + # leftmost, *remainder = dn.split(r'.') + parts = dn.split(r".") + leftmost = parts[0] + remainder = parts[1:] + + wildcards = leftmost.count("*") + if wildcards > max_wildcards: + # Issue #17980: avoid denials of service by refusing more + # than one wildcard per fragment. A survey of established + # policy among SSL implementations showed it to be a + # reasonable choice. 
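+        # (For example, ``dn = "a*b*.example.com"`` carries two wildcards in
+        # its leftmost label and is refused here instead of being matched.)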
+ raise CertificateError( + "too many wildcards in certificate DNS name: " + repr(dn) + ) + + # speed up common case w/o wildcards + if not wildcards: + return dn.lower() == hostname.lower() + + # RFC 6125, section 6.4.3, subitem 1. + # The client SHOULD NOT attempt to match a presented identifier in which + # the wildcard character comprises a label other than the left-most label. + if leftmost == "*": + # When '*' is a fragment by itself, it matches a non-empty dotless + # fragment. + pats.append("[^.]+") + elif leftmost.startswith("xn--") or hostname.startswith("xn--"): + # RFC 6125, section 6.4.3, subitem 3. + # The client SHOULD NOT attempt to match a presented identifier + # where the wildcard character is embedded within an A-label or + # U-label of an internationalized domain name. + pats.append(re.escape(leftmost)) + else: + # Otherwise, '*' matches any dotless string, e.g. www* + pats.append(re.escape(leftmost).replace(r"\*", "[^.]*")) + + # add the remaining fragments, ignore any wildcards + for frag in remainder: + pats.append(re.escape(frag)) + + pat = re.compile(r"\A" + r"\.".join(pats) + r"\Z", re.IGNORECASE) + return pat.match(hostname) + + +def _to_unicode(obj): + if isinstance(obj, str) and sys.version_info < (3,): + obj = unicode(obj, encoding="ascii", errors="strict") + return obj + + +def _ipaddress_match(ipname, host_ip): + """Exact matching of IP addresses. + + RFC 6125 explicitly doesn't define an algorithm for this + (section 1.7.2 - "Out of Scope"). + """ + # OpenSSL may add a trailing newline to a subjectAltName's IP address + # Divergence from upstream: ipaddress can't handle byte str + ip = ipaddress.ip_address(_to_unicode(ipname).rstrip()) + return ip == host_ip + + +def match_hostname(cert, hostname): + """Verify that *cert* (in decoded format as returned by + SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 + rules are followed, but IP addresses are not accepted for *hostname*. + + CertificateError is raised on failure. On success, the function + returns nothing. + """ + if not cert: + raise ValueError( + "empty or no certificate, match_hostname needs a " + "SSL socket or SSL context with either " + "CERT_OPTIONAL or CERT_REQUIRED" + ) + try: + # Divergence from upstream: ipaddress can't handle byte str + host_ip = ipaddress.ip_address(_to_unicode(hostname)) + except ValueError: + # Not an IP address (common case) + host_ip = None + except UnicodeError: + # Divergence from upstream: Have to deal with ipaddress not taking + # byte strings. addresses should be all ascii, so we consider it not + # an ipaddress in this case + host_ip = None + except AttributeError: + # Divergence from upstream: Make ipaddress library optional + if ipaddress is None: + host_ip = None + else: + raise + dnsnames = [] + san = cert.get("subjectAltName", ()) + for key, value in san: + if key == "DNS": + if host_ip is None and _dnsname_match(value, hostname): + return + dnsnames.append(value) + elif key == "IP Address": + if host_ip is not None and _ipaddress_match(value, host_ip): + return + dnsnames.append(value) + if not dnsnames: + # The subject is only checked when there is no dNSName entry + # in subjectAltName + for sub in cert.get("subject", ()): + for key, value in sub: + # XXX according to RFC 2818, the most specific Common Name + # must be used. 
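+                # (For example, a certificate whose ``subject`` is
+                # ``((("commonName", "example.com"),),)`` and which carries no
+                # subjectAltName entries is matched purely on that CN here.)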
+ if key == "commonName": + if _dnsname_match(value, hostname): + return + dnsnames.append(value) + if len(dnsnames) > 1: + raise CertificateError( + "hostname %r " + "doesn't match either of %s" % (hostname, ", ".join(map(repr, dnsnames))) + ) + elif len(dnsnames) == 1: + raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0])) + else: + raise CertificateError( + "no appropriate commonName or subjectAltName fields were found" + ) diff --git a/openpype/hosts/fusion/vendor/urllib3/poolmanager.py b/openpype/hosts/fusion/vendor/urllib3/poolmanager.py new file mode 100644 index 0000000000..3a31a285bf --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/poolmanager.py @@ -0,0 +1,536 @@ +from __future__ import absolute_import + +import collections +import functools +import logging + +from ._collections import RecentlyUsedContainer +from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, port_by_scheme +from .exceptions import ( + LocationValueError, + MaxRetryError, + ProxySchemeUnknown, + ProxySchemeUnsupported, + URLSchemeUnknown, +) +from .packages import six +from .packages.six.moves.urllib.parse import urljoin +from .request import RequestMethods +from .util.proxy import connection_requires_http_tunnel +from .util.retry import Retry +from .util.url import parse_url + +__all__ = ["PoolManager", "ProxyManager", "proxy_from_url"] + + +log = logging.getLogger(__name__) + +SSL_KEYWORDS = ( + "key_file", + "cert_file", + "cert_reqs", + "ca_certs", + "ssl_version", + "ca_cert_dir", + "ssl_context", + "key_password", +) + +# All known keyword arguments that could be provided to the pool manager, its +# pools, or the underlying connections. This is used to construct a pool key. +_key_fields = ( + "key_scheme", # str + "key_host", # str + "key_port", # int + "key_timeout", # int or float or Timeout + "key_retries", # int or Retry + "key_strict", # bool + "key_block", # bool + "key_source_address", # str + "key_key_file", # str + "key_key_password", # str + "key_cert_file", # str + "key_cert_reqs", # str + "key_ca_certs", # str + "key_ssl_version", # str + "key_ca_cert_dir", # str + "key_ssl_context", # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext + "key_maxsize", # int + "key_headers", # dict + "key__proxy", # parsed proxy url + "key__proxy_headers", # dict + "key__proxy_config", # class + "key_socket_options", # list of (level (int), optname (int), value (int or str)) tuples + "key__socks_options", # dict + "key_assert_hostname", # bool or string + "key_assert_fingerprint", # str + "key_server_hostname", # str +) + +#: The namedtuple class used to construct keys for the connection pool. +#: All custom key schemes should include the fields in this key at a minimum. +PoolKey = collections.namedtuple("PoolKey", _key_fields) + +_proxy_config_fields = ("ssl_context", "use_forwarding_for_https") +ProxyConfig = collections.namedtuple("ProxyConfig", _proxy_config_fields) + + +def _default_key_normalizer(key_class, request_context): + """ + Create a pool key out of a request context dictionary. + + According to RFC 3986, both the scheme and host are case-insensitive. + Therefore, this function normalizes both before constructing the pool + key for an HTTPS request. If you wish to change this behaviour, provide + alternate callables to ``key_fn_by_scheme``. + + :param key_class: + The class to use when constructing the key. This should be a namedtuple + with the ``scheme`` and ``host`` keys at a minimum. 
+ :type key_class: namedtuple + :param request_context: + A dictionary-like object that contain the context for a request. + :type request_context: dict + + :return: A namedtuple that can be used as a connection pool key. + :rtype: PoolKey + """ + # Since we mutate the dictionary, make a copy first + context = request_context.copy() + context["scheme"] = context["scheme"].lower() + context["host"] = context["host"].lower() + + # These are both dictionaries and need to be transformed into frozensets + for key in ("headers", "_proxy_headers", "_socks_options"): + if key in context and context[key] is not None: + context[key] = frozenset(context[key].items()) + + # The socket_options key may be a list and needs to be transformed into a + # tuple. + socket_opts = context.get("socket_options") + if socket_opts is not None: + context["socket_options"] = tuple(socket_opts) + + # Map the kwargs to the names in the namedtuple - this is necessary since + # namedtuples can't have fields starting with '_'. + for key in list(context.keys()): + context["key_" + key] = context.pop(key) + + # Default to ``None`` for keys missing from the context + for field in key_class._fields: + if field not in context: + context[field] = None + + return key_class(**context) + + +#: A dictionary that maps a scheme to a callable that creates a pool key. +#: This can be used to alter the way pool keys are constructed, if desired. +#: Each PoolManager makes a copy of this dictionary so they can be configured +#: globally here, or individually on the instance. +key_fn_by_scheme = { + "http": functools.partial(_default_key_normalizer, PoolKey), + "https": functools.partial(_default_key_normalizer, PoolKey), +} + +pool_classes_by_scheme = {"http": HTTPConnectionPool, "https": HTTPSConnectionPool} + + +class PoolManager(RequestMethods): + """ + Allows for arbitrary requests while transparently keeping track of + necessary connection pools for you. + + :param num_pools: + Number of connection pools to cache before discarding the least + recently used pool. + + :param headers: + Headers to include with all requests, unless other headers are given + explicitly. + + :param \\**connection_pool_kw: + Additional parameters are used to create fresh + :class:`urllib3.connectionpool.ConnectionPool` instances. + + Example:: + + >>> manager = PoolManager(num_pools=2) + >>> r = manager.request('GET', 'http://google.com/') + >>> r = manager.request('GET', 'http://google.com/mail') + >>> r = manager.request('GET', 'http://yahoo.com/') + >>> len(manager.pools) + 2 + + """ + + proxy = None + proxy_config = None + + def __init__(self, num_pools=10, headers=None, **connection_pool_kw): + RequestMethods.__init__(self, headers) + self.connection_pool_kw = connection_pool_kw + self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) + + # Locally set the pool classes and keys so other PoolManagers can + # override them. + self.pool_classes_by_scheme = pool_classes_by_scheme + self.key_fn_by_scheme = key_fn_by_scheme.copy() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_val, exc_tb): + self.clear() + # Return False to re-raise any potential exceptions + return False + + def _new_pool(self, scheme, host, port, request_context=None): + """ + Create a new :class:`urllib3.connectionpool.ConnectionPool` based on host, port, scheme, and + any additional pool keyword arguments. + + If ``request_context`` is provided, it is provided as keyword arguments + to the pool class used. 
This method is used to actually create the + connection pools handed out by :meth:`connection_from_url` and + companion methods. It is intended to be overridden for customization. + """ + pool_cls = self.pool_classes_by_scheme[scheme] + if request_context is None: + request_context = self.connection_pool_kw.copy() + + # Although the context has everything necessary to create the pool, + # this function has historically only used the scheme, host, and port + # in the positional args. When an API change is acceptable these can + # be removed. + for key in ("scheme", "host", "port"): + request_context.pop(key, None) + + if scheme == "http": + for kw in SSL_KEYWORDS: + request_context.pop(kw, None) + + return pool_cls(host, port, **request_context) + + def clear(self): + """ + Empty our store of pools and direct them all to close. + + This will not affect in-flight connections, but they will not be + re-used after completion. + """ + self.pools.clear() + + def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): + """ + Get a :class:`urllib3.connectionpool.ConnectionPool` based on the host, port, and scheme. + + If ``port`` isn't given, it will be derived from the ``scheme`` using + ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is + provided, it is merged with the instance's ``connection_pool_kw`` + variable and used to create the new connection pool, if one is + needed. + """ + + if not host: + raise LocationValueError("No host specified.") + + request_context = self._merge_pool_kwargs(pool_kwargs) + request_context["scheme"] = scheme or "http" + if not port: + port = port_by_scheme.get(request_context["scheme"].lower(), 80) + request_context["port"] = port + request_context["host"] = host + + return self.connection_from_context(request_context) + + def connection_from_context(self, request_context): + """ + Get a :class:`urllib3.connectionpool.ConnectionPool` based on the request context. + + ``request_context`` must at least contain the ``scheme`` key and its + value must be a key in ``key_fn_by_scheme`` instance variable. + """ + scheme = request_context["scheme"].lower() + pool_key_constructor = self.key_fn_by_scheme.get(scheme) + if not pool_key_constructor: + raise URLSchemeUnknown(scheme) + pool_key = pool_key_constructor(request_context) + + return self.connection_from_pool_key(pool_key, request_context=request_context) + + def connection_from_pool_key(self, pool_key, request_context=None): + """ + Get a :class:`urllib3.connectionpool.ConnectionPool` based on the provided pool key. + + ``pool_key`` should be a namedtuple that only contains immutable + objects. At a minimum it must have the ``scheme``, ``host``, and + ``port`` fields. + """ + with self.pools.lock: + # If the scheme, host, or port doesn't match existing open + # connections, open a new ConnectionPool. + pool = self.pools.get(pool_key) + if pool: + return pool + + # Make a fresh ConnectionPool of the desired type + scheme = request_context["scheme"] + host = request_context["host"] + port = request_context["port"] + pool = self._new_pool(scheme, host, port, request_context=request_context) + self.pools[pool_key] = pool + + return pool + + def connection_from_url(self, url, pool_kwargs=None): + """ + Similar to :func:`urllib3.connectionpool.connection_from_url`. + + If ``pool_kwargs`` is not provided and a new pool needs to be + constructed, ``self.connection_pool_kw`` is used to initialize + the :class:`urllib3.connectionpool.ConnectionPool`. 
If ``pool_kwargs`` + is provided, it is used instead. Note that if a new pool does not + need to be created for the request, the provided ``pool_kwargs`` are + not used. + """ + u = parse_url(url) + return self.connection_from_host( + u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs + ) + + def _merge_pool_kwargs(self, override): + """ + Merge a dictionary of override values for self.connection_pool_kw. + + This does not modify self.connection_pool_kw and returns a new dict. + Any keys in the override dictionary with a value of ``None`` are + removed from the merged dictionary. + """ + base_pool_kwargs = self.connection_pool_kw.copy() + if override: + for key, value in override.items(): + if value is None: + try: + del base_pool_kwargs[key] + except KeyError: + pass + else: + base_pool_kwargs[key] = value + return base_pool_kwargs + + def _proxy_requires_url_absolute_form(self, parsed_url): + """ + Indicates if the proxy requires the complete destination URL in the + request. Normally this is only needed when not using an HTTP CONNECT + tunnel. + """ + if self.proxy is None: + return False + + return not connection_requires_http_tunnel( + self.proxy, self.proxy_config, parsed_url.scheme + ) + + def _validate_proxy_scheme_url_selection(self, url_scheme): + """ + Validates that were not attempting to do TLS in TLS connections on + Python2 or with unsupported SSL implementations. + """ + if self.proxy is None or url_scheme != "https": + return + + if self.proxy.scheme != "https": + return + + if six.PY2 and not self.proxy_config.use_forwarding_for_https: + raise ProxySchemeUnsupported( + "Contacting HTTPS destinations through HTTPS proxies " + "'via CONNECT tunnels' is not supported in Python 2" + ) + + def urlopen(self, method, url, redirect=True, **kw): + """ + Same as :meth:`urllib3.HTTPConnectionPool.urlopen` + with custom cross-host redirect logic and only sends the request-uri + portion of the ``url``. + + The given ``url`` parameter must be absolute, such that an appropriate + :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. + """ + u = parse_url(url) + self._validate_proxy_scheme_url_selection(u.scheme) + + conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) + + kw["assert_same_host"] = False + kw["redirect"] = False + + if "headers" not in kw: + kw["headers"] = self.headers.copy() + + if self._proxy_requires_url_absolute_form(u): + response = conn.urlopen(method, url, **kw) + else: + response = conn.urlopen(method, u.request_uri, **kw) + + redirect_location = redirect and response.get_redirect_location() + if not redirect_location: + return response + + # Support relative URLs for redirecting. + redirect_location = urljoin(url, redirect_location) + + # RFC 7231, Section 6.4.4 + if response.status == 303: + method = "GET" + + retries = kw.get("retries") + if not isinstance(retries, Retry): + retries = Retry.from_int(retries, redirect=redirect) + + # Strip headers marked as unsafe to forward to the redirected location. + # Check remove_headers_on_redirect to avoid a potential network call within + # conn.is_same_host() which may use socket.gethostbyname() in the future. 
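+        # (For example, with the default ``Retry`` settings an
+        # "Authorization" header is stripped below whenever the redirect
+        # target is a different host, so credentials do not leak across
+        # origins.)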
+ if retries.remove_headers_on_redirect and not conn.is_same_host( + redirect_location + ): + headers = list(six.iterkeys(kw["headers"])) + for header in headers: + if header.lower() in retries.remove_headers_on_redirect: + kw["headers"].pop(header, None) + + try: + retries = retries.increment(method, url, response=response, _pool=conn) + except MaxRetryError: + if retries.raise_on_redirect: + response.drain_conn() + raise + return response + + kw["retries"] = retries + kw["redirect"] = redirect + + log.info("Redirecting %s -> %s", url, redirect_location) + + response.drain_conn() + return self.urlopen(method, redirect_location, **kw) + + +class ProxyManager(PoolManager): + """ + Behaves just like :class:`PoolManager`, but sends all requests through + the defined proxy, using the CONNECT method for HTTPS URLs. + + :param proxy_url: + The URL of the proxy to be used. + + :param proxy_headers: + A dictionary containing headers that will be sent to the proxy. In case + of HTTP they are being sent with each request, while in the + HTTPS/CONNECT case they are sent only once. Could be used for proxy + authentication. + + :param proxy_ssl_context: + The proxy SSL context is used to establish the TLS connection to the + proxy when using HTTPS proxies. + + :param use_forwarding_for_https: + (Defaults to False) If set to True will forward requests to the HTTPS + proxy to be made on behalf of the client instead of creating a TLS + tunnel via the CONNECT method. **Enabling this flag means that request + and response headers and content will be visible from the HTTPS proxy** + whereas tunneling keeps request and response headers and content + private. IP address, target hostname, SNI, and port are always visible + to an HTTPS proxy even when this flag is disabled. 
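+        For example, with ``use_forwarding_for_https=True`` a request for
+        ``https://example.com`` is sent to the proxy as an ordinary proxied
+        request instead of first opening a CONNECT tunnel.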
+ + Example: + >>> proxy = urllib3.ProxyManager('http://localhost:3128/') + >>> r1 = proxy.request('GET', 'http://google.com/') + >>> r2 = proxy.request('GET', 'http://httpbin.org/') + >>> len(proxy.pools) + 1 + >>> r3 = proxy.request('GET', 'https://httpbin.org/') + >>> r4 = proxy.request('GET', 'https://twitter.com/') + >>> len(proxy.pools) + 3 + + """ + + def __init__( + self, + proxy_url, + num_pools=10, + headers=None, + proxy_headers=None, + proxy_ssl_context=None, + use_forwarding_for_https=False, + **connection_pool_kw + ): + + if isinstance(proxy_url, HTTPConnectionPool): + proxy_url = "%s://%s:%i" % ( + proxy_url.scheme, + proxy_url.host, + proxy_url.port, + ) + proxy = parse_url(proxy_url) + + if proxy.scheme not in ("http", "https"): + raise ProxySchemeUnknown(proxy.scheme) + + if not proxy.port: + port = port_by_scheme.get(proxy.scheme, 80) + proxy = proxy._replace(port=port) + + self.proxy = proxy + self.proxy_headers = proxy_headers or {} + self.proxy_ssl_context = proxy_ssl_context + self.proxy_config = ProxyConfig(proxy_ssl_context, use_forwarding_for_https) + + connection_pool_kw["_proxy"] = self.proxy + connection_pool_kw["_proxy_headers"] = self.proxy_headers + connection_pool_kw["_proxy_config"] = self.proxy_config + + super(ProxyManager, self).__init__(num_pools, headers, **connection_pool_kw) + + def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): + if scheme == "https": + return super(ProxyManager, self).connection_from_host( + host, port, scheme, pool_kwargs=pool_kwargs + ) + + return super(ProxyManager, self).connection_from_host( + self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs + ) + + def _set_proxy_headers(self, url, headers=None): + """ + Sets headers needed by proxies: specifically, the Accept and Host + headers. Only sets headers not provided by the user. + """ + headers_ = {"Accept": "*/*"} + + netloc = parse_url(url).netloc + if netloc: + headers_["Host"] = netloc + + if headers: + headers_.update(headers) + return headers_ + + def urlopen(self, method, url, redirect=True, **kw): + "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute." + u = parse_url(url) + if not connection_requires_http_tunnel(self.proxy, self.proxy_config, u.scheme): + # For connections using HTTP CONNECT, httplib sets the necessary + # headers on the CONNECT to the proxy. If we're not using CONNECT, + # we'll definitely need to set 'Host' at the very least. + headers = kw.get("headers", self.headers) + kw["headers"] = self._set_proxy_headers(url, headers) + + return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw) + + +def proxy_from_url(url, **kw): + return ProxyManager(proxy_url=url, **kw) diff --git a/openpype/hosts/fusion/vendor/urllib3/request.py b/openpype/hosts/fusion/vendor/urllib3/request.py new file mode 100644 index 0000000000..398386a5b9 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/request.py @@ -0,0 +1,170 @@ +from __future__ import absolute_import + +from .filepost import encode_multipart_formdata +from .packages.six.moves.urllib.parse import urlencode + +__all__ = ["RequestMethods"] + + +class RequestMethods(object): + """ + Convenience mixin for classes who implement a :meth:`urlopen` method, such + as :class:`urllib3.HTTPConnectionPool` and + :class:`urllib3.PoolManager`. + + Provides behavior for making common types of HTTP request methods and + decides which type of request field encoding to use. 
+ + Specifically, + + :meth:`.request_encode_url` is for sending requests whose fields are + encoded in the URL (such as GET, HEAD, DELETE). + + :meth:`.request_encode_body` is for sending requests whose fields are + encoded in the *body* of the request using multipart or www-form-urlencoded + (such as for POST, PUT, PATCH). + + :meth:`.request` is for making any kind of request, it will look up the + appropriate encoding format and use one of the above two methods to make + the request. + + Initializer parameters: + + :param headers: + Headers to include with all requests, unless other headers are given + explicitly. + """ + + _encode_url_methods = {"DELETE", "GET", "HEAD", "OPTIONS"} + + def __init__(self, headers=None): + self.headers = headers or {} + + def urlopen( + self, + method, + url, + body=None, + headers=None, + encode_multipart=True, + multipart_boundary=None, + **kw + ): # Abstract + raise NotImplementedError( + "Classes extending RequestMethods must implement " + "their own ``urlopen`` method." + ) + + def request(self, method, url, fields=None, headers=None, **urlopen_kw): + """ + Make a request using :meth:`urlopen` with the appropriate encoding of + ``fields`` based on the ``method`` used. + + This is a convenience method that requires the least amount of manual + effort. It can be used in most situations, while still having the + option to drop down to more specific methods when necessary, such as + :meth:`request_encode_url`, :meth:`request_encode_body`, + or even the lowest level :meth:`urlopen`. + """ + method = method.upper() + + urlopen_kw["request_url"] = url + + if method in self._encode_url_methods: + return self.request_encode_url( + method, url, fields=fields, headers=headers, **urlopen_kw + ) + else: + return self.request_encode_body( + method, url, fields=fields, headers=headers, **urlopen_kw + ) + + def request_encode_url(self, method, url, fields=None, headers=None, **urlopen_kw): + """ + Make a request using :meth:`urlopen` with the ``fields`` encoded in + the url. This is useful for request methods like GET, HEAD, DELETE, etc. + """ + if headers is None: + headers = self.headers + + extra_kw = {"headers": headers} + extra_kw.update(urlopen_kw) + + if fields: + url += "?" + urlencode(fields) + + return self.urlopen(method, url, **extra_kw) + + def request_encode_body( + self, + method, + url, + fields=None, + headers=None, + encode_multipart=True, + multipart_boundary=None, + **urlopen_kw + ): + """ + Make a request using :meth:`urlopen` with the ``fields`` encoded in + the body. This is useful for request methods like POST, PUT, PATCH, etc. + + When ``encode_multipart=True`` (default), then + :func:`urllib3.encode_multipart_formdata` is used to encode + the payload with the appropriate content type. Otherwise + :func:`urllib.parse.urlencode` is used with the + 'application/x-www-form-urlencoded' content type. + + Multipart encoding must be used when posting files, and it's reasonably + safe to use it in other times too. However, it may break request + signing, such as with OAuth. + + Supports an optional ``fields`` parameter of key/value strings AND + key/filetuple. A filetuple is a (filename, data, MIME type) tuple where + the MIME type is optional. 
For example:: + + fields = { + 'foo': 'bar', + 'fakefile': ('foofile.txt', 'contents of foofile'), + 'realfile': ('barfile.txt', open('realfile').read()), + 'typedfile': ('bazfile.bin', open('bazfile').read(), + 'image/jpeg'), + 'nonamefile': 'contents of nonamefile field', + } + + When uploading a file, providing a filename (the first parameter of the + tuple) is optional but recommended to best mimic behavior of browsers. + + Note that if ``headers`` are supplied, the 'Content-Type' header will + be overwritten because it depends on the dynamic random boundary string + which is used to compose the body of the request. The random boundary + string can be explicitly set with the ``multipart_boundary`` parameter. + """ + if headers is None: + headers = self.headers + + extra_kw = {"headers": {}} + + if fields: + if "body" in urlopen_kw: + raise TypeError( + "request got values for both 'fields' and 'body', can only specify one." + ) + + if encode_multipart: + body, content_type = encode_multipart_formdata( + fields, boundary=multipart_boundary + ) + else: + body, content_type = ( + urlencode(fields), + "application/x-www-form-urlencoded", + ) + + extra_kw["body"] = body + extra_kw["headers"] = {"Content-Type": content_type} + + extra_kw["headers"].update(headers) + extra_kw.update(urlopen_kw) + + return self.urlopen(method, url, **extra_kw) diff --git a/openpype/hosts/fusion/vendor/urllib3/response.py b/openpype/hosts/fusion/vendor/urllib3/response.py new file mode 100644 index 0000000000..38693f4fc6 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/response.py @@ -0,0 +1,821 @@ +from __future__ import absolute_import + +import io +import logging +import zlib +from contextlib import contextmanager +from socket import error as SocketError +from socket import timeout as SocketTimeout + +try: + import brotli +except ImportError: + brotli = None + +from ._collections import HTTPHeaderDict +from .connection import BaseSSLError, HTTPException +from .exceptions import ( + BodyNotHttplibCompatible, + DecodeError, + HTTPError, + IncompleteRead, + InvalidChunkLength, + InvalidHeader, + ProtocolError, + ReadTimeoutError, + ResponseNotChunked, + SSLError, +) +from .packages import six +from .util.response import is_fp_closed, is_response_to_head + +log = logging.getLogger(__name__) + + +class DeflateDecoder(object): + def __init__(self): + self._first_try = True + self._data = b"" + self._obj = zlib.decompressobj() + + def __getattr__(self, name): + return getattr(self._obj, name) + + def decompress(self, data): + if not data: + return data + + if not self._first_try: + return self._obj.decompress(data) + + self._data += data + try: + decompressed = self._obj.decompress(data) + if decompressed: + self._first_try = False + self._data = None + return decompressed + except zlib.error: + self._first_try = False + self._obj = zlib.decompressobj(-zlib.MAX_WBITS) + try: + return self.decompress(self._data) + finally: + self._data = None + + +class GzipDecoderState(object): + + FIRST_MEMBER = 0 + OTHER_MEMBERS = 1 + SWALLOW_DATA = 2 + + +class GzipDecoder(object): + def __init__(self): + self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) + self._state = GzipDecoderState.FIRST_MEMBER + + def __getattr__(self, name): + return getattr(self._obj, name) + + def decompress(self, data): + ret = bytearray() + if self._state == GzipDecoderState.SWALLOW_DATA or not data: + return bytes(ret) + while True: + try: + ret += self._obj.decompress(data) + except zlib.error: + previous_state = self._state + # Ignore 
data after the first error + self._state = GzipDecoderState.SWALLOW_DATA + if previous_state == GzipDecoderState.OTHER_MEMBERS: + # Allow trailing garbage acceptable in other gzip clients + return bytes(ret) + raise + data = self._obj.unused_data + if not data: + return bytes(ret) + self._state = GzipDecoderState.OTHER_MEMBERS + self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) + + +if brotli is not None: + + class BrotliDecoder(object): + # Supports both 'brotlipy' and 'Brotli' packages + # since they share an import name. The top branches + # are for 'brotlipy' and bottom branches for 'Brotli' + def __init__(self): + self._obj = brotli.Decompressor() + if hasattr(self._obj, "decompress"): + self.decompress = self._obj.decompress + else: + self.decompress = self._obj.process + + def flush(self): + if hasattr(self._obj, "flush"): + return self._obj.flush() + return b"" + + +class MultiDecoder(object): + """ + From RFC7231: + If one or more encodings have been applied to a representation, the + sender that applied the encodings MUST generate a Content-Encoding + header field that lists the content codings in the order in which + they were applied. + """ + + def __init__(self, modes): + self._decoders = [_get_decoder(m.strip()) for m in modes.split(",")] + + def flush(self): + return self._decoders[0].flush() + + def decompress(self, data): + for d in reversed(self._decoders): + data = d.decompress(data) + return data + + +def _get_decoder(mode): + if "," in mode: + return MultiDecoder(mode) + + if mode == "gzip": + return GzipDecoder() + + if brotli is not None and mode == "br": + return BrotliDecoder() + + return DeflateDecoder() + + +class HTTPResponse(io.IOBase): + """ + HTTP Response container. + + Backwards-compatible with :class:`http.client.HTTPResponse` but the response ``body`` is + loaded and decoded on-demand when the ``data`` property is accessed. This + class is also compatible with the Python standard library's :mod:`io` + module, and can hence be treated as a readable object in the context of that + framework. + + Extra parameters for behaviour not present in :class:`http.client.HTTPResponse`: + + :param preload_content: + If True, the response's body will be preloaded during construction. + + :param decode_content: + If True, will attempt to decode the body based on the + 'content-encoding' header. + + :param original_response: + When this HTTPResponse wrapper is generated from an :class:`http.client.HTTPResponse` + object, it's convenient to include the original for debug purposes. It's + otherwise unused. + + :param retries: + The retries contains the last :class:`~urllib3.util.retry.Retry` that + was used during the request. + + :param enforce_content_length: + Enforce content length checking. Body returned by server must match + value of Content-Length header, if present. Otherwise, raise error. 
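+
+    Example (illustrative; ``r`` is a raw :class:`http.client.HTTPResponse`
+    obtained elsewhere)::
+
+        >>> resp = HTTPResponse.from_httplib(r, preload_content=False)
+        >>> resp.status
+        200
+        >>> body = resp.read()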
+ """ + + CONTENT_DECODERS = ["gzip", "deflate"] + if brotli is not None: + CONTENT_DECODERS += ["br"] + REDIRECT_STATUSES = [301, 302, 303, 307, 308] + + def __init__( + self, + body="", + headers=None, + status=0, + version=0, + reason=None, + strict=0, + preload_content=True, + decode_content=True, + original_response=None, + pool=None, + connection=None, + msg=None, + retries=None, + enforce_content_length=False, + request_method=None, + request_url=None, + auto_close=True, + ): + + if isinstance(headers, HTTPHeaderDict): + self.headers = headers + else: + self.headers = HTTPHeaderDict(headers) + self.status = status + self.version = version + self.reason = reason + self.strict = strict + self.decode_content = decode_content + self.retries = retries + self.enforce_content_length = enforce_content_length + self.auto_close = auto_close + + self._decoder = None + self._body = None + self._fp = None + self._original_response = original_response + self._fp_bytes_read = 0 + self.msg = msg + self._request_url = request_url + + if body and isinstance(body, (six.string_types, bytes)): + self._body = body + + self._pool = pool + self._connection = connection + + if hasattr(body, "read"): + self._fp = body + + # Are we using the chunked-style of transfer encoding? + self.chunked = False + self.chunk_left = None + tr_enc = self.headers.get("transfer-encoding", "").lower() + # Don't incur the penalty of creating a list and then discarding it + encodings = (enc.strip() for enc in tr_enc.split(",")) + if "chunked" in encodings: + self.chunked = True + + # Determine length of response + self.length_remaining = self._init_length(request_method) + + # If requested, preload the body. + if preload_content and not self._body: + self._body = self.read(decode_content=decode_content) + + def get_redirect_location(self): + """ + Should we redirect and where to? + + :returns: Truthy redirect location string if we got a redirect status + code and valid location. ``None`` if redirect status and no + location. ``False`` if not a redirect status code. + """ + if self.status in self.REDIRECT_STATUSES: + return self.headers.get("location") + + return False + + def release_conn(self): + if not self._pool or not self._connection: + return + + self._pool._put_conn(self._connection) + self._connection = None + + def drain_conn(self): + """ + Read and discard any remaining HTTP response data in the response connection. + + Unread data in the HTTPResponse connection blocks the connection from being released back to the pool. + """ + try: + self.read() + except (HTTPError, SocketError, BaseSSLError, HTTPException): + pass + + @property + def data(self): + # For backwards-compat with earlier urllib3 0.4 and earlier. + if self._body: + return self._body + + if self._fp: + return self.read(cache_content=True) + + @property + def connection(self): + return self._connection + + def isclosed(self): + return is_fp_closed(self._fp) + + def tell(self): + """ + Obtain the number of bytes pulled over the wire so far. May differ from + the amount of content returned by :meth:``urllib3.response.HTTPResponse.read`` + if bytes are encoded on the wire (e.g, compressed). + """ + return self._fp_bytes_read + + def _init_length(self, request_method): + """ + Set initial length value for Response content if available. + """ + length = self.headers.get("content-length") + + if length is not None: + if self.chunked: + # This Response will fail with an IncompleteRead if it can't be + # received as chunked. 
This method falls back to attempt reading + # the response before raising an exception. + log.warning( + "Received response with both Content-Length and " + "Transfer-Encoding set. This is expressly forbidden " + "by RFC 7230 sec 3.3.2. Ignoring Content-Length and " + "attempting to process response as Transfer-Encoding: " + "chunked." + ) + return None + + try: + # RFC 7230 section 3.3.2 specifies multiple content lengths can + # be sent in a single Content-Length header + # (e.g. Content-Length: 42, 42). This line ensures the values + # are all valid ints and that as long as the `set` length is 1, + # all values are the same. Otherwise, the header is invalid. + lengths = set([int(val) for val in length.split(",")]) + if len(lengths) > 1: + raise InvalidHeader( + "Content-Length contained multiple " + "unmatching values (%s)" % length + ) + length = lengths.pop() + except ValueError: + length = None + else: + if length < 0: + length = None + + # Convert status to int for comparison + # In some cases, httplib returns a status of "_UNKNOWN" + try: + status = int(self.status) + except ValueError: + status = 0 + + # Check for responses that shouldn't include a body + if status in (204, 304) or 100 <= status < 200 or request_method == "HEAD": + length = 0 + + return length + + def _init_decoder(self): + """ + Set-up the _decoder attribute if necessary. + """ + # Note: content-encoding value should be case-insensitive, per RFC 7230 + # Section 3.2 + content_encoding = self.headers.get("content-encoding", "").lower() + if self._decoder is None: + if content_encoding in self.CONTENT_DECODERS: + self._decoder = _get_decoder(content_encoding) + elif "," in content_encoding: + encodings = [ + e.strip() + for e in content_encoding.split(",") + if e.strip() in self.CONTENT_DECODERS + ] + if len(encodings): + self._decoder = _get_decoder(content_encoding) + + DECODER_ERROR_CLASSES = (IOError, zlib.error) + if brotli is not None: + DECODER_ERROR_CLASSES += (brotli.error,) + + def _decode(self, data, decode_content, flush_decoder): + """ + Decode the data passed in and potentially flush the decoder. + """ + if not decode_content: + return data + + try: + if self._decoder: + data = self._decoder.decompress(data) + except self.DECODER_ERROR_CLASSES as e: + content_encoding = self.headers.get("content-encoding", "").lower() + raise DecodeError( + "Received response with content-encoding: %s, but " + "failed to decode it." % content_encoding, + e, + ) + if flush_decoder: + data += self._flush_decoder() + + return data + + def _flush_decoder(self): + """ + Flushes the decoder. Should only be called if the decoder is actually + being used. + """ + if self._decoder: + buf = self._decoder.decompress(b"") + return buf + self._decoder.flush() + + return b"" + + @contextmanager + def _error_catcher(self): + """ + Catch low-level python exceptions, instead re-raising urllib3 + variants, so that low-level exceptions are not leaked in the + high-level api. + + On exit, release the connection back to the pool. + """ + clean_exit = False + + try: + try: + yield + + except SocketTimeout: + # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but + # there is yet no clean way to get at it from this context. + raise ReadTimeoutError(self._pool, None, "Read timed out.") + + except BaseSSLError as e: + # FIXME: Is there a better way to differentiate between SSLErrors? 
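+                # (For example, an OpenSSL "bad record mac" failure is
+                # re-raised as urllib3's SSLError below, while the
+                # timeout-flavored "The read operation timed out" message
+                # becomes a ReadTimeoutError instead.)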
+ if "read operation timed out" not in str(e): + # SSL errors related to framing/MAC get wrapped and reraised here + raise SSLError(e) + + raise ReadTimeoutError(self._pool, None, "Read timed out.") + + except (HTTPException, SocketError) as e: + # This includes IncompleteRead. + raise ProtocolError("Connection broken: %r" % e, e) + + # If no exception is thrown, we should avoid cleaning up + # unnecessarily. + clean_exit = True + finally: + # If we didn't terminate cleanly, we need to throw away our + # connection. + if not clean_exit: + # The response may not be closed but we're not going to use it + # anymore so close it now to ensure that the connection is + # released back to the pool. + if self._original_response: + self._original_response.close() + + # Closing the response may not actually be sufficient to close + # everything, so if we have a hold of the connection close that + # too. + if self._connection: + self._connection.close() + + # If we hold the original response but it's closed now, we should + # return the connection back to the pool. + if self._original_response and self._original_response.isclosed(): + self.release_conn() + + def read(self, amt=None, decode_content=None, cache_content=False): + """ + Similar to :meth:`http.client.HTTPResponse.read`, but with two additional + parameters: ``decode_content`` and ``cache_content``. + + :param amt: + How much of the content to read. If specified, caching is skipped + because it doesn't make sense to cache partial content as the full + response. + + :param decode_content: + If True, will attempt to decode the body based on the + 'content-encoding' header. + + :param cache_content: + If True, will save the returned data such that the same result is + returned despite of the state of the underlying file object. This + is useful if you want the ``.data`` property to continue working + after having ``.read()`` the file object. (Overridden if ``amt`` is + set.) + """ + self._init_decoder() + if decode_content is None: + decode_content = self.decode_content + + if self._fp is None: + return + + flush_decoder = False + fp_closed = getattr(self._fp, "closed", False) + + with self._error_catcher(): + if amt is None: + # cStringIO doesn't like amt=None + data = self._fp.read() if not fp_closed else b"" + flush_decoder = True + else: + cache_content = False + data = self._fp.read(amt) if not fp_closed else b"" + if ( + amt != 0 and not data + ): # Platform-specific: Buggy versions of Python. + # Close the connection when no data is returned + # + # This is redundant to what httplib/http.client _should_ + # already do. However, versions of python released before + # December 15, 2012 (http://bugs.python.org/issue16298) do + # not properly close the connection in all cases. There is + # no harm in redundantly calling close. + self._fp.close() + flush_decoder = True + if self.enforce_content_length and self.length_remaining not in ( + 0, + None, + ): + # This is an edge case that httplib failed to cover due + # to concerns of backward compatibility. We're + # addressing it here to make sure IncompleteRead is + # raised during streaming, so all calls with incorrect + # Content-Length are caught. 
+ raise IncompleteRead(self._fp_bytes_read, self.length_remaining) + + if data: + self._fp_bytes_read += len(data) + if self.length_remaining is not None: + self.length_remaining -= len(data) + + data = self._decode(data, decode_content, flush_decoder) + + if cache_content: + self._body = data + + return data + + def stream(self, amt=2 ** 16, decode_content=None): + """ + A generator wrapper for the read() method. A call will block until + ``amt`` bytes have been read from the connection or until the + connection is closed. + + :param amt: + How much of the content to read. The generator will return up to + much data per iteration, but may return less. This is particularly + likely when using compressed data. However, the empty string will + never be returned. + + :param decode_content: + If True, will attempt to decode the body based on the + 'content-encoding' header. + """ + if self.chunked and self.supports_chunked_reads(): + for line in self.read_chunked(amt, decode_content=decode_content): + yield line + else: + while not is_fp_closed(self._fp): + data = self.read(amt=amt, decode_content=decode_content) + + if data: + yield data + + @classmethod + def from_httplib(ResponseCls, r, **response_kw): + """ + Given an :class:`http.client.HTTPResponse` instance ``r``, return a + corresponding :class:`urllib3.response.HTTPResponse` object. + + Remaining parameters are passed to the HTTPResponse constructor, along + with ``original_response=r``. + """ + headers = r.msg + + if not isinstance(headers, HTTPHeaderDict): + if six.PY2: + # Python 2.7 + headers = HTTPHeaderDict.from_httplib(headers) + else: + headers = HTTPHeaderDict(headers.items()) + + # HTTPResponse objects in Python 3 don't have a .strict attribute + strict = getattr(r, "strict", 0) + resp = ResponseCls( + body=r, + headers=headers, + status=r.status, + version=r.version, + reason=r.reason, + strict=strict, + original_response=r, + **response_kw + ) + return resp + + # Backwards-compatibility methods for http.client.HTTPResponse + def getheaders(self): + return self.headers + + def getheader(self, name, default=None): + return self.headers.get(name, default) + + # Backwards compatibility for http.cookiejar + def info(self): + return self.headers + + # Overrides from io.IOBase + def close(self): + if not self.closed: + self._fp.close() + + if self._connection: + self._connection.close() + + if not self.auto_close: + io.IOBase.close(self) + + @property + def closed(self): + if not self.auto_close: + return io.IOBase.closed.__get__(self) + elif self._fp is None: + return True + elif hasattr(self._fp, "isclosed"): + return self._fp.isclosed() + elif hasattr(self._fp, "closed"): + return self._fp.closed + else: + return True + + def fileno(self): + if self._fp is None: + raise IOError("HTTPResponse has no file to get a fileno from") + elif hasattr(self._fp, "fileno"): + return self._fp.fileno() + else: + raise IOError( + "The file-like object this HTTPResponse is wrapped " + "around has no file descriptor" + ) + + def flush(self): + if ( + self._fp is not None + and hasattr(self._fp, "flush") + and not getattr(self._fp, "closed", False) + ): + return self._fp.flush() + + def readable(self): + # This method is required for `io` module compatibility. + return True + + def readinto(self, b): + # This method is required for `io` module compatibility. 
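+        # (For example, this lets the response be wrapped in the standard io
+        # streams, e.g. ``io.TextIOWrapper(resp, encoding="utf-8")`` for
+        # decoded text reads.)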
+ temp = self.read(len(b)) + if len(temp) == 0: + return 0 + else: + b[: len(temp)] = temp + return len(temp) + + def supports_chunked_reads(self): + """ + Checks if the underlying file-like object looks like a + :class:`http.client.HTTPResponse` object. We do this by testing for + the fp attribute. If it is present we assume it returns raw chunks as + processed by read_chunked(). + """ + return hasattr(self._fp, "fp") + + def _update_chunk_length(self): + # First, we'll figure out length of a chunk and then + # we'll try to read it from socket. + if self.chunk_left is not None: + return + line = self._fp.fp.readline() + line = line.split(b";", 1)[0] + try: + self.chunk_left = int(line, 16) + except ValueError: + # Invalid chunked protocol response, abort. + self.close() + raise InvalidChunkLength(self, line) + + def _handle_chunk(self, amt): + returned_chunk = None + if amt is None: + chunk = self._fp._safe_read(self.chunk_left) + returned_chunk = chunk + self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. + self.chunk_left = None + elif amt < self.chunk_left: + value = self._fp._safe_read(amt) + self.chunk_left = self.chunk_left - amt + returned_chunk = value + elif amt == self.chunk_left: + value = self._fp._safe_read(amt) + self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. + self.chunk_left = None + returned_chunk = value + else: # amt > self.chunk_left + returned_chunk = self._fp._safe_read(self.chunk_left) + self._fp._safe_read(2) # Toss the CRLF at the end of the chunk. + self.chunk_left = None + return returned_chunk + + def read_chunked(self, amt=None, decode_content=None): + """ + Similar to :meth:`HTTPResponse.read`, but with an additional + parameter: ``decode_content``. + + :param amt: + How much of the content to read. If specified, caching is skipped + because it doesn't make sense to cache partial content as the full + response. + + :param decode_content: + If True, will attempt to decode the body based on the + 'content-encoding' header. + """ + self._init_decoder() + # FIXME: Rewrite this method and make it a class with a better structured logic. + if not self.chunked: + raise ResponseNotChunked( + "Response is not chunked. " + "Header 'transfer-encoding: chunked' is missing." + ) + if not self.supports_chunked_reads(): + raise BodyNotHttplibCompatible( + "Body should be http.client.HTTPResponse like. " + "It should have have an fp attribute which returns raw chunks." + ) + + with self._error_catcher(): + # Don't bother reading the body of a HEAD request. + if self._original_response and is_response_to_head(self._original_response): + self._original_response.close() + return + + # If a response is already read and closed + # then return immediately. + if self._fp.fp is None: + return + + while True: + self._update_chunk_length() + if self.chunk_left == 0: + break + chunk = self._handle_chunk(amt) + decoded = self._decode( + chunk, decode_content=decode_content, flush_decoder=False + ) + if decoded: + yield decoded + + if decode_content: + # On CPython and PyPy, we should never need to flush the + # decoder. However, on Jython we *might* need to, so + # lets defensively do it anyway. + decoded = self._flush_decoder() + if decoded: # Platform-specific: Jython. + yield decoded + + # Chunk content ends with \r\n: discard it. + while True: + line = self._fp.fp.readline() + if not line: + # Some sites may not end with '\r\n'. + break + if line == b"\r\n": + break + + # We read everything; close the "file". 
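+            # (A chunked body on the wire looks like
+            # ``b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"``; the loops above
+            # parsed each hex size line and consumed each payload and its
+            # trailing CRLF.)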
+ if self._original_response: + self._original_response.close() + + def geturl(self): + """ + Returns the URL that was the source of this response. + If the request that generated this response redirected, this method + will return the final redirect location. + """ + if self.retries is not None and len(self.retries.history): + return self.retries.history[-1].redirect_location + else: + return self._request_url + + def __iter__(self): + buffer = [] + for chunk in self.stream(decode_content=True): + if b"\n" in chunk: + chunk = chunk.split(b"\n") + yield b"".join(buffer) + chunk[0] + b"\n" + for x in chunk[1:-1]: + yield x + b"\n" + if chunk[-1]: + buffer = [chunk[-1]] + else: + buffer = [] + else: + buffer.append(chunk) + if buffer: + yield b"".join(buffer) diff --git a/openpype/hosts/fusion/vendor/urllib3/util/__init__.py b/openpype/hosts/fusion/vendor/urllib3/util/__init__.py new file mode 100644 index 0000000000..4547fc522b --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/__init__.py @@ -0,0 +1,49 @@ +from __future__ import absolute_import + +# For backwards compatibility, provide imports that used to be here. +from .connection import is_connection_dropped +from .request import SKIP_HEADER, SKIPPABLE_HEADERS, make_headers +from .response import is_fp_closed +from .retry import Retry +from .ssl_ import ( + ALPN_PROTOCOLS, + HAS_SNI, + IS_PYOPENSSL, + IS_SECURETRANSPORT, + PROTOCOL_TLS, + SSLContext, + assert_fingerprint, + resolve_cert_reqs, + resolve_ssl_version, + ssl_wrap_socket, +) +from .timeout import Timeout, current_time +from .url import Url, get_host, parse_url, split_first +from .wait import wait_for_read, wait_for_write + +__all__ = ( + "HAS_SNI", + "IS_PYOPENSSL", + "IS_SECURETRANSPORT", + "SSLContext", + "PROTOCOL_TLS", + "ALPN_PROTOCOLS", + "Retry", + "Timeout", + "Url", + "assert_fingerprint", + "current_time", + "is_connection_dropped", + "is_fp_closed", + "get_host", + "parse_url", + "make_headers", + "resolve_cert_reqs", + "resolve_ssl_version", + "split_first", + "ssl_wrap_socket", + "wait_for_read", + "wait_for_write", + "SKIP_HEADER", + "SKIPPABLE_HEADERS", +) diff --git a/openpype/hosts/fusion/vendor/urllib3/util/connection.py b/openpype/hosts/fusion/vendor/urllib3/util/connection.py new file mode 100644 index 0000000000..bdc240c50c --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/connection.py @@ -0,0 +1,150 @@ +from __future__ import absolute_import + +import socket + +from urllib3.exceptions import LocationParseError + +from ..contrib import _appengine_environ +from ..packages import six +from .wait import NoWayToWaitForSocketError, wait_for_read + + +def is_connection_dropped(conn): # Platform-specific + """ + Returns True if the connection is dropped and should be closed. + + :param conn: + :class:`http.client.HTTPConnection` object. + + Note: For platforms like AppEngine, this will always return ``False`` to + let the platform handle connection recycling transparently for us. + """ + sock = getattr(conn, "sock", False) + if sock is False: # Platform-specific: AppEngine + return False + if sock is None: # Connection already closed (such as by httplib). + return True + try: + # Returns True if readable, which here means it's been dropped + return wait_for_read(sock, timeout=0.0) + except NoWayToWaitForSocketError: # Platform-specific: AppEngine + return False + + +# This function is copied from socket.py in the Python 2.7 standard +# library test suite. Added to its signature is only `socket_options`. 
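+# (For example, ``socket_options=[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]``
+# disables Nagle's algorithm on the new socket.)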
+# One additional modification is that we avoid binding to IPv6 servers +# discovered in DNS if the system doesn't have IPv6 functionality. +def create_connection( + address, + timeout=socket._GLOBAL_DEFAULT_TIMEOUT, + source_address=None, + socket_options=None, +): + """Connect to *address* and return the socket object. + + Convenience function. Connect to *address* (a 2-tuple ``(host, + port)``) and return the socket object. Passing the optional + *timeout* parameter will set the timeout on the socket instance + before attempting to connect. If no *timeout* is supplied, the + global default timeout setting returned by :func:`socket.getdefaulttimeout` + is used. If *source_address* is set it must be a tuple of (host, port) + for the socket to bind as a source address before making the connection. + An host of '' or port 0 tells the OS to use the default. + """ + + host, port = address + if host.startswith("["): + host = host.strip("[]") + err = None + + # Using the value from allowed_gai_family() in the context of getaddrinfo lets + # us select whether to work with IPv4 DNS records, IPv6 records, or both. + # The original create_connection function always returns all records. + family = allowed_gai_family() + + try: + host.encode("idna") + except UnicodeError: + return six.raise_from( + LocationParseError(u"'%s', label empty or too long" % host), None + ) + + for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): + af, socktype, proto, canonname, sa = res + sock = None + try: + sock = socket.socket(af, socktype, proto) + + # If provided, set socket level options before connecting. + _set_socket_options(sock, socket_options) + + if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: + sock.settimeout(timeout) + if source_address: + sock.bind(source_address) + sock.connect(sa) + return sock + + except socket.error as e: + err = e + if sock is not None: + sock.close() + sock = None + + if err is not None: + raise err + + raise socket.error("getaddrinfo returns an empty list") + + +def _set_socket_options(sock, options): + if options is None: + return + + for opt in options: + sock.setsockopt(*opt) + + +def allowed_gai_family(): + """This function is designed to work in the context of + getaddrinfo, where family=socket.AF_UNSPEC is the default and + will perform a DNS search for both IPv6 and IPv4 records.""" + + family = socket.AF_INET + if HAS_IPV6: + family = socket.AF_UNSPEC + return family + + +def _has_ipv6(host): + """Returns True if the system can bind an IPv6 address.""" + sock = None + has_ipv6 = False + + # App Engine doesn't support IPV6 sockets and actually has a quota on the + # number of sockets that can be used, so just early out here instead of + # creating a socket needlessly. + # See https://github.com/urllib3/urllib3/issues/1446 + if _appengine_environ.is_appengine_sandbox(): + return False + + if socket.has_ipv6: + # has_ipv6 returns true if cPython was compiled with IPv6 support. + # It does not tell us if the system has IPv6 support enabled. To + # determine that we must bind to an IPv6 address. 
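+        # (For example, a kernel booted with ``ipv6.disable=1`` still reports
+        # ``socket.has_ipv6 == True``, yet creating or binding the AF_INET6
+        # socket below fails there.)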
+ # https://github.com/urllib3/urllib3/pull/611 + # https://bugs.python.org/issue658327 + try: + sock = socket.socket(socket.AF_INET6) + sock.bind((host, 0)) + has_ipv6 = True + except Exception: + pass + + if sock: + sock.close() + return has_ipv6 + + +HAS_IPV6 = _has_ipv6("::1") diff --git a/openpype/hosts/fusion/vendor/urllib3/util/proxy.py b/openpype/hosts/fusion/vendor/urllib3/util/proxy.py new file mode 100644 index 0000000000..34f884d5b3 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/proxy.py @@ -0,0 +1,56 @@ +from .ssl_ import create_urllib3_context, resolve_cert_reqs, resolve_ssl_version + + +def connection_requires_http_tunnel( + proxy_url=None, proxy_config=None, destination_scheme=None +): + """ + Returns True if the connection requires an HTTP CONNECT through the proxy. + + :param URL proxy_url: + URL of the proxy. + :param ProxyConfig proxy_config: + Proxy configuration from poolmanager.py + :param str destination_scheme: + The scheme of the destination. (i.e https, http, etc) + """ + # If we're not using a proxy, no way to use a tunnel. + if proxy_url is None: + return False + + # HTTP destinations never require tunneling, we always forward. + if destination_scheme == "http": + return False + + # Support for forwarding with HTTPS proxies and HTTPS destinations. + if ( + proxy_url.scheme == "https" + and proxy_config + and proxy_config.use_forwarding_for_https + ): + return False + + # Otherwise always use a tunnel. + return True + + +def create_proxy_ssl_context( + ssl_version, cert_reqs, ca_certs=None, ca_cert_dir=None, ca_cert_data=None +): + """ + Generates a default proxy ssl context if one hasn't been provided by the + user. + """ + ssl_context = create_urllib3_context( + ssl_version=resolve_ssl_version(ssl_version), + cert_reqs=resolve_cert_reqs(cert_reqs), + ) + if ( + not ca_certs + and not ca_cert_dir + and not ca_cert_data + and hasattr(ssl_context, "load_default_certs") + ): + ssl_context.load_default_certs() + + return ssl_context diff --git a/openpype/hosts/fusion/vendor/urllib3/util/queue.py b/openpype/hosts/fusion/vendor/urllib3/util/queue.py new file mode 100644 index 0000000000..41784104ee --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/queue.py @@ -0,0 +1,22 @@ +import collections + +from ..packages import six +from ..packages.six.moves import queue + +if six.PY2: + # Queue is imported for side effects on MS Windows. See issue #229. + import Queue as _unused_module_Queue # noqa: F401 + + +class LifoQueue(queue.Queue): + def _init(self, _): + self.queue = collections.deque() + + def _qsize(self, len=len): + return len(self.queue) + + def _put(self, item): + self.queue.append(item) + + def _get(self): + return self.queue.pop() diff --git a/openpype/hosts/fusion/vendor/urllib3/util/request.py b/openpype/hosts/fusion/vendor/urllib3/util/request.py new file mode 100644 index 0000000000..25103383ec --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/request.py @@ -0,0 +1,143 @@ +from __future__ import absolute_import + +from base64 import b64encode + +from ..exceptions import UnrewindableBodyError +from ..packages.six import b, integer_types + +# Pass as a value within ``headers`` to skip +# emitting some HTTP headers that are added automatically. +# The only headers that are supported are ``Accept-Encoding``, +# ``Host``, and ``User-Agent``. 
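+# (Editor's note, an illustrative sketch rather than upstream documentation:
+# a caller suppresses one of these automatic headers by passing the
+# sentinel as the header value, e.g.
+#
+#     from urllib3 import PoolManager
+#     from urllib3.util.request import SKIP_HEADER
+#
+#     http = PoolManager()
+#     r = http.request("GET", "http://example.com/",
+#                      headers={"User-Agent": SKIP_HEADER})
+#
+# Within OpenPype the vendored import path would be
+# openpype.hosts.fusion.vendor.urllib3 instead.)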
+SKIP_HEADER = "@@@SKIP_HEADER@@@" +SKIPPABLE_HEADERS = frozenset(["accept-encoding", "host", "user-agent"]) + +ACCEPT_ENCODING = "gzip,deflate" +try: + import brotli as _unused_module_brotli # noqa: F401 +except ImportError: + pass +else: + ACCEPT_ENCODING += ",br" + +_FAILEDTELL = object() + + +def make_headers( + keep_alive=None, + accept_encoding=None, + user_agent=None, + basic_auth=None, + proxy_basic_auth=None, + disable_cache=None, +): + """ + Shortcuts for generating request headers. + + :param keep_alive: + If ``True``, adds 'connection: keep-alive' header. + + :param accept_encoding: + Can be a boolean, list, or string. + ``True`` translates to 'gzip,deflate'. + List will get joined by comma. + String will be used as provided. + + :param user_agent: + String representing the user-agent you want, such as + "python-urllib3/0.6" + + :param basic_auth: + Colon-separated username:password string for 'authorization: basic ...' + auth header. + + :param proxy_basic_auth: + Colon-separated username:password string for 'proxy-authorization: basic ...' + auth header. + + :param disable_cache: + If ``True``, adds 'cache-control: no-cache' header. + + Example:: + + >>> make_headers(keep_alive=True, user_agent="Batman/1.0") + {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'} + >>> make_headers(accept_encoding=True) + {'accept-encoding': 'gzip,deflate'} + """ + headers = {} + if accept_encoding: + if isinstance(accept_encoding, str): + pass + elif isinstance(accept_encoding, list): + accept_encoding = ",".join(accept_encoding) + else: + accept_encoding = ACCEPT_ENCODING + headers["accept-encoding"] = accept_encoding + + if user_agent: + headers["user-agent"] = user_agent + + if keep_alive: + headers["connection"] = "keep-alive" + + if basic_auth: + headers["authorization"] = "Basic " + b64encode(b(basic_auth)).decode("utf-8") + + if proxy_basic_auth: + headers["proxy-authorization"] = "Basic " + b64encode( + b(proxy_basic_auth) + ).decode("utf-8") + + if disable_cache: + headers["cache-control"] = "no-cache" + + return headers + + +def set_file_position(body, pos): + """ + If a position is provided, move file to that point. + Otherwise, we'll attempt to record a position for future use. + """ + if pos is not None: + rewind_body(body, pos) + elif getattr(body, "tell", None) is not None: + try: + pos = body.tell() + except (IOError, OSError): + # This differentiates from None, allowing us to catch + # a failed `tell()` later when trying to rewind the body. + pos = _FAILEDTELL + + return pos + + +def rewind_body(body, body_pos): + """ + Attempt to rewind body to a certain position. + Primarily used for request redirects and retries. + + :param body: + File-like object that supports seek. + + :param int pos: + Position to seek to in file. + """ + body_seek = getattr(body, "seek", None) + if body_seek is not None and isinstance(body_pos, integer_types): + try: + body_seek(body_pos) + except (IOError, OSError): + raise UnrewindableBodyError( + "An error occurred when rewinding request body for redirect/retry." + ) + elif body_pos is _FAILEDTELL: + raise UnrewindableBodyError( + "Unable to record file position for rewinding " + "request body during a redirect/retry." + ) + else: + raise ValueError( + "body_pos must be of type integer, instead it was %s." 
% type(body_pos) + ) diff --git a/openpype/hosts/fusion/vendor/urllib3/util/response.py b/openpype/hosts/fusion/vendor/urllib3/util/response.py new file mode 100644 index 0000000000..5ea609cced --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/response.py @@ -0,0 +1,107 @@ +from __future__ import absolute_import + +from email.errors import MultipartInvariantViolationDefect, StartBoundaryNotFoundDefect + +from ..exceptions import HeaderParsingError +from ..packages.six.moves import http_client as httplib + + +def is_fp_closed(obj): + """ + Checks whether a given file-like object is closed. + + :param obj: + The file-like object to check. + """ + + try: + # Check `isclosed()` first, in case Python3 doesn't set `closed`. + # GH Issue #928 + return obj.isclosed() + except AttributeError: + pass + + try: + # Check via the official file-like-object way. + return obj.closed + except AttributeError: + pass + + try: + # Check if the object is a container for another file-like object that + # gets released on exhaustion (e.g. HTTPResponse). + return obj.fp is None + except AttributeError: + pass + + raise ValueError("Unable to determine whether fp is closed.") + + +def assert_header_parsing(headers): + """ + Asserts whether all headers have been successfully parsed. + Extracts encountered errors from the result of parsing headers. + + Only works on Python 3. + + :param http.client.HTTPMessage headers: Headers to verify. + + :raises urllib3.exceptions.HeaderParsingError: + If parsing errors are found. + """ + + # This will fail silently if we pass in the wrong kind of parameter. + # To make debugging easier add an explicit check. + if not isinstance(headers, httplib.HTTPMessage): + raise TypeError("expected httplib.Message, got {0}.".format(type(headers))) + + defects = getattr(headers, "defects", None) + get_payload = getattr(headers, "get_payload", None) + + unparsed_data = None + if get_payload: + # get_payload is actually email.message.Message.get_payload; + # we're only interested in the result if it's not a multipart message + if not headers.is_multipart(): + payload = get_payload() + + if isinstance(payload, (bytes, str)): + unparsed_data = payload + if defects: + # httplib is assuming a response body is available + # when parsing headers even when httplib only sends + # header data to parse_headers() This results in + # defects on multipart responses in particular. + # See: https://github.com/urllib3/urllib3/issues/800 + + # So we ignore the following defects: + # - StartBoundaryNotFoundDefect: + # The claimed start boundary was never found. + # - MultipartInvariantViolationDefect: + # A message claimed to be a multipart but no subparts were found. + defects = [ + defect + for defect in defects + if not isinstance( + defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect) + ) + ] + + if defects or unparsed_data: + raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data) + + +def is_response_to_head(response): + """ + Checks whether the request of a response has been a HEAD-request. + Handles the quirks of AppEngine. + + :param http.client.HTTPResponse response: + Response to check if the originating request + used 'HEAD' as a method. + """ + # FIXME: Can we do this somehow without accessing private httplib _method? 
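+ # (Editor's aside: outside App Engine `_method` is the verb string,
+ # e.g. "HEAD"; on App Engine it can be an int, and the literal 3
+ # below is assumed to match urlfetch's HEAD method constant.)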
+ method = response._method + if isinstance(method, int): # Platform-specific: Appengine + return method == 3 + return method.upper() == "HEAD" diff --git a/openpype/hosts/fusion/vendor/urllib3/util/retry.py b/openpype/hosts/fusion/vendor/urllib3/util/retry.py new file mode 100644 index 0000000000..c7dc42f1d6 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/retry.py @@ -0,0 +1,602 @@ +from __future__ import absolute_import + +import email +import logging +import re +import time +import warnings +from collections import namedtuple +from itertools import takewhile + +from ..exceptions import ( + ConnectTimeoutError, + InvalidHeader, + MaxRetryError, + ProtocolError, + ProxyError, + ReadTimeoutError, + ResponseError, +) +from ..packages import six + +log = logging.getLogger(__name__) + + +# Data structure for representing the metadata of requests that result in a retry. +RequestHistory = namedtuple( + "RequestHistory", ["method", "url", "error", "status", "redirect_location"] +) + + +# TODO: In v2 we can remove this sentinel and metaclass with deprecated options. +_Default = object() + + +class _RetryMeta(type): + @property + def DEFAULT_METHOD_WHITELIST(cls): + warnings.warn( + "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " + "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", + DeprecationWarning, + ) + return cls.DEFAULT_ALLOWED_METHODS + + @DEFAULT_METHOD_WHITELIST.setter + def DEFAULT_METHOD_WHITELIST(cls, value): + warnings.warn( + "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " + "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", + DeprecationWarning, + ) + cls.DEFAULT_ALLOWED_METHODS = value + + @property + def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls): + warnings.warn( + "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " + "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", + DeprecationWarning, + ) + return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT + + @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter + def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): + warnings.warn( + "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " + "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", + DeprecationWarning, + ) + cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value + + +@six.add_metaclass(_RetryMeta) +class Retry(object): + """Retry configuration. + + Each retry attempt will create a new Retry object with updated values, so + they can be safely reused. + + Retries can be defined as a default for a pool:: + + retries = Retry(connect=5, read=2, redirect=5) + http = PoolManager(retries=retries) + response = http.request('GET', 'http://example.com/') + + Or per-request (which overrides the default for the pool):: + + response = http.request('GET', 'http://example.com/', retries=Retry(10)) + + Retries can be disabled by passing ``False``:: + + response = http.request('GET', 'http://example.com/', retries=False) + + Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless + retries are disabled, in which case the causing exception will be raised. + + :param int total: + Total number of retries to allow. Takes precedence over other counts. + + Set to ``None`` to remove this constraint and fall back on other + counts. + + Set to ``0`` to fail on the first retry. + + Set to ``False`` to disable and imply ``raise_on_redirect=False``. + + :param int connect: + How many connection-related errors to retry on. 
+ + These are errors raised before the request is sent to the remote server, + which we assume has not triggered the server to process the request. + + Set to ``0`` to fail on the first retry of this type. + + :param int read: + How many times to retry on read errors. + + These errors are raised after the request was sent to the server, so the + request may have side-effects. + + Set to ``0`` to fail on the first retry of this type. + + :param int redirect: + How many redirects to perform. Limit this to avoid infinite redirect + loops. + + A redirect is a HTTP response with a status code 301, 302, 303, 307 or + 308. + + Set to ``0`` to fail on the first retry of this type. + + Set to ``False`` to disable and imply ``raise_on_redirect=False``. + + :param int status: + How many times to retry on bad status codes. + + These are retries made on responses, where status code matches + ``status_forcelist``. + + Set to ``0`` to fail on the first retry of this type. + + :param int other: + How many times to retry on other errors. + + Other errors are errors that are not connect, read, redirect or status errors. + These errors might be raised after the request was sent to the server, so the + request might have side-effects. + + Set to ``0`` to fail on the first retry of this type. + + If ``total`` is not set, it's a good idea to set this to 0 to account + for unexpected edge cases and avoid infinite retry loops. + + :param iterable allowed_methods: + Set of uppercased HTTP method verbs that we should retry on. + + By default, we only retry on methods which are considered to be + idempotent (multiple requests with the same parameters end with the + same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`. + + Set to a ``False`` value to retry on any verb. + + .. warning:: + + Previously this parameter was named ``method_whitelist``, that + usage is deprecated in v1.26.0 and will be removed in v2.0. + + :param iterable status_forcelist: + A set of integer HTTP status codes that we should force a retry on. + A retry is initiated if the request method is in ``allowed_methods`` + and the response status code is in ``status_forcelist``. + + By default, this is disabled with ``None``. + + :param float backoff_factor: + A backoff factor to apply between attempts after the second try + (most errors are resolved immediately by a second try without a + delay). urllib3 will sleep for:: + + {backoff factor} * (2 ** ({number of total retries} - 1)) + + seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep + for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer + than :attr:`Retry.BACKOFF_MAX`. + + By default, backoff is disabled (set to 0). + + :param bool raise_on_redirect: Whether, if the number of redirects is + exhausted, to raise a MaxRetryError, or to return a response with a + response code in the 3xx range. + + :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: + whether we should raise an exception, or return a response, + if status falls in ``status_forcelist`` range and retries have + been exhausted. + + :param tuple history: The history of the request encountered during + each call to :meth:`~Retry.increment`. The list is in the order + the requests occurred. Each list item is of class :class:`RequestHistory`. + + :param bool respect_retry_after_header: + Whether to respect Retry-After header on status codes defined as + :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. 
+ + :param iterable remove_headers_on_redirect: + Sequence of headers to remove from the request when a response + indicating a redirect is returned before firing off the redirected + request. + """ + + #: Default methods to be used for ``allowed_methods`` + DEFAULT_ALLOWED_METHODS = frozenset( + ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"] + ) + + #: Default status codes to be used for ``status_forcelist`` + RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) + + #: Default headers to be used for ``remove_headers_on_redirect`` + DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) + + #: Maximum backoff time. + BACKOFF_MAX = 120 + + def __init__( + self, + total=10, + connect=None, + read=None, + redirect=None, + status=None, + other=None, + allowed_methods=_Default, + status_forcelist=None, + backoff_factor=0, + raise_on_redirect=True, + raise_on_status=True, + history=None, + respect_retry_after_header=True, + remove_headers_on_redirect=_Default, + # TODO: Deprecated, remove in v2.0 + method_whitelist=_Default, + ): + + if method_whitelist is not _Default: + if allowed_methods is not _Default: + raise ValueError( + "Using both 'allowed_methods' and " + "'method_whitelist' together is not allowed. " + "Instead only use 'allowed_methods'" + ) + warnings.warn( + "Using 'method_whitelist' with Retry is deprecated and " + "will be removed in v2.0. Use 'allowed_methods' instead", + DeprecationWarning, + stacklevel=2, + ) + allowed_methods = method_whitelist + if allowed_methods is _Default: + allowed_methods = self.DEFAULT_ALLOWED_METHODS + if remove_headers_on_redirect is _Default: + remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT + + self.total = total + self.connect = connect + self.read = read + self.status = status + self.other = other + + if redirect is False or total is False: + redirect = 0 + raise_on_redirect = False + + self.redirect = redirect + self.status_forcelist = status_forcelist or set() + self.allowed_methods = allowed_methods + self.backoff_factor = backoff_factor + self.raise_on_redirect = raise_on_redirect + self.raise_on_status = raise_on_status + self.history = history or tuple() + self.respect_retry_after_header = respect_retry_after_header + self.remove_headers_on_redirect = frozenset( + [h.lower() for h in remove_headers_on_redirect] + ) + + def new(self, **kw): + params = dict( + total=self.total, + connect=self.connect, + read=self.read, + redirect=self.redirect, + status=self.status, + other=self.other, + status_forcelist=self.status_forcelist, + backoff_factor=self.backoff_factor, + raise_on_redirect=self.raise_on_redirect, + raise_on_status=self.raise_on_status, + history=self.history, + remove_headers_on_redirect=self.remove_headers_on_redirect, + respect_retry_after_header=self.respect_retry_after_header, + ) + + # TODO: If already given in **kw we use what's given to us + # If not given we need to figure out what to pass. We decide + # based on whether our class has the 'method_whitelist' property + # and if so we pass the deprecated 'method_whitelist' otherwise + # we use 'allowed_methods'. Remove in v2.0 + if "method_whitelist" not in kw and "allowed_methods" not in kw: + if "method_whitelist" in self.__dict__: + warnings.warn( + "Using 'method_whitelist' with Retry is deprecated and " + "will be removed in v2.0. 
Use 'allowed_methods' instead", + DeprecationWarning, + ) + params["method_whitelist"] = self.allowed_methods + else: + params["allowed_methods"] = self.allowed_methods + + params.update(kw) + return type(self)(**params) + + @classmethod + def from_int(cls, retries, redirect=True, default=None): + """Backwards-compatibility for the old retries format.""" + if retries is None: + retries = default if default is not None else cls.DEFAULT + + if isinstance(retries, Retry): + return retries + + redirect = bool(redirect) and None + new_retries = cls(retries, redirect=redirect) + log.debug("Converted retries value: %r -> %r", retries, new_retries) + return new_retries + + def get_backoff_time(self): + """Formula for computing the current backoff + + :rtype: float + """ + # We want to consider only the last consecutive errors sequence (Ignore redirects). + consecutive_errors_len = len( + list( + takewhile(lambda x: x.redirect_location is None, reversed(self.history)) + ) + ) + if consecutive_errors_len <= 1: + return 0 + + backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) + return min(self.BACKOFF_MAX, backoff_value) + + def parse_retry_after(self, retry_after): + # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 + if re.match(r"^\s*[0-9]+\s*$", retry_after): + seconds = int(retry_after) + else: + retry_date_tuple = email.utils.parsedate_tz(retry_after) + if retry_date_tuple is None: + raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) + if retry_date_tuple[9] is None: # Python 2 + # Assume UTC if no timezone was specified + # On Python2.7, parsedate_tz returns None for a timezone offset + # instead of 0 if no timezone is given, where mktime_tz treats + # a None timezone offset as local time. + retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] + + retry_date = email.utils.mktime_tz(retry_date_tuple) + seconds = retry_date - time.time() + + if seconds < 0: + seconds = 0 + + return seconds + + def get_retry_after(self, response): + """Get the value of Retry-After in seconds.""" + + retry_after = response.getheader("Retry-After") + + if retry_after is None: + return None + + return self.parse_retry_after(retry_after) + + def sleep_for_retry(self, response=None): + retry_after = self.get_retry_after(response) + if retry_after: + time.sleep(retry_after) + return True + + return False + + def _sleep_backoff(self): + backoff = self.get_backoff_time() + if backoff <= 0: + return + time.sleep(backoff) + + def sleep(self, response=None): + """Sleep between retry attempts. + + This method will respect a server's ``Retry-After`` response header + and sleep the duration of the time requested. If that is not present, it + will use an exponential backoff. By default, the backoff factor is 0 and + this method will return immediately. + """ + + if self.respect_retry_after_header and response: + slept = self.sleep_for_retry(response) + if slept: + return + + self._sleep_backoff() + + def _is_connection_error(self, err): + """Errors when we're fairly sure that the server did not receive the + request, so it should be safe to retry. + """ + if isinstance(err, ProxyError): + err = err.original_error + return isinstance(err, ConnectTimeoutError) + + def _is_read_error(self, err): + """Errors that occur after the request has been started, so we should + assume that the server began processing it. 
+ """ + return isinstance(err, (ReadTimeoutError, ProtocolError)) + + def _is_method_retryable(self, method): + """Checks if a given HTTP method should be retried upon, depending if + it is included in the allowed_methods + """ + # TODO: For now favor if the Retry implementation sets its own method_whitelist + # property outside of our constructor to avoid breaking custom implementations. + if "method_whitelist" in self.__dict__: + warnings.warn( + "Using 'method_whitelist' with Retry is deprecated and " + "will be removed in v2.0. Use 'allowed_methods' instead", + DeprecationWarning, + ) + allowed_methods = self.method_whitelist + else: + allowed_methods = self.allowed_methods + + if allowed_methods and method.upper() not in allowed_methods: + return False + return True + + def is_retry(self, method, status_code, has_retry_after=False): + """Is this method/status code retryable? (Based on allowlists and control + variables such as the number of total retries to allow, whether to + respect the Retry-After header, whether this header is present, and + whether the returned status code is on the list of status codes to + be retried upon on the presence of the aforementioned header) + """ + if not self._is_method_retryable(method): + return False + + if self.status_forcelist and status_code in self.status_forcelist: + return True + + return ( + self.total + and self.respect_retry_after_header + and has_retry_after + and (status_code in self.RETRY_AFTER_STATUS_CODES) + ) + + def is_exhausted(self): + """Are we out of retries?""" + retry_counts = ( + self.total, + self.connect, + self.read, + self.redirect, + self.status, + self.other, + ) + retry_counts = list(filter(None, retry_counts)) + if not retry_counts: + return False + + return min(retry_counts) < 0 + + def increment( + self, + method=None, + url=None, + response=None, + error=None, + _pool=None, + _stacktrace=None, + ): + """Return a new Retry object with incremented retry counters. + + :param response: A response object, or None, if the server did not + return a response. + :type response: :class:`~urllib3.response.HTTPResponse` + :param Exception error: An error encountered during the request, or + None if the response was received successfully. + + :return: A new ``Retry`` object. + """ + if self.total is False and error: + # Disabled, indicate to re-raise the error. + raise six.reraise(type(error), error, _stacktrace) + + total = self.total + if total is not None: + total -= 1 + + connect = self.connect + read = self.read + redirect = self.redirect + status_count = self.status + other = self.other + cause = "unknown" + status = None + redirect_location = None + + if error and self._is_connection_error(error): + # Connect retry? + if connect is False: + raise six.reraise(type(error), error, _stacktrace) + elif connect is not None: + connect -= 1 + + elif error and self._is_read_error(error): + # Read retry? + if read is False or not self._is_method_retryable(method): + raise six.reraise(type(error), error, _stacktrace) + elif read is not None: + read -= 1 + + elif error: + # Other retry? + if other is not None: + other -= 1 + + elif response and response.get_redirect_location(): + # Redirect retry? 
+ if redirect is not None: + redirect -= 1 + cause = "too many redirects" + redirect_location = response.get_redirect_location() + status = response.status + + else: + # Incrementing because of a server error like a 500 in + # status_forcelist and the given method is in the allowed_methods + cause = ResponseError.GENERIC_ERROR + if response and response.status: + if status_count is not None: + status_count -= 1 + cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) + status = response.status + + history = self.history + ( + RequestHistory(method, url, error, status, redirect_location), + ) + + new_retry = self.new( + total=total, + connect=connect, + read=read, + redirect=redirect, + status=status_count, + other=other, + history=history, + ) + + if new_retry.is_exhausted(): + raise MaxRetryError(_pool, url, error or ResponseError(cause)) + + log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) + + return new_retry + + def __repr__(self): + return ( + "{cls.__name__}(total={self.total}, connect={self.connect}, " + "read={self.read}, redirect={self.redirect}, status={self.status})" + ).format(cls=type(self), self=self) + + def __getattr__(self, item): + if item == "method_whitelist": + # TODO: Remove this deprecated alias in v2.0 + warnings.warn( + "Using 'method_whitelist' with Retry is deprecated and " + "will be removed in v2.0. Use 'allowed_methods' instead", + DeprecationWarning, + ) + return self.allowed_methods + try: + return getattr(super(Retry, self), item) + except AttributeError: + return getattr(Retry, item) + + +# For backwards compatibility (equivalent to pre-v1.9): +Retry.DEFAULT = Retry(3) diff --git a/openpype/hosts/fusion/vendor/urllib3/util/ssl_.py b/openpype/hosts/fusion/vendor/urllib3/util/ssl_.py new file mode 100644 index 0000000000..8f867812a5 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/ssl_.py @@ -0,0 +1,495 @@ +from __future__ import absolute_import + +import hmac +import os +import sys +import warnings +from binascii import hexlify, unhexlify +from hashlib import md5, sha1, sha256 + +from ..exceptions import ( + InsecurePlatformWarning, + ProxySchemeUnsupported, + SNIMissingWarning, + SSLError, +) +from ..packages import six +from .url import BRACELESS_IPV6_ADDRZ_RE, IPV4_RE + +SSLContext = None +SSLTransport = None +HAS_SNI = False +IS_PYOPENSSL = False +IS_SECURETRANSPORT = False +ALPN_PROTOCOLS = ["http/1.1"] + +# Maps the length of a digest to a possible hash function producing this digest +HASHFUNC_MAP = {32: md5, 40: sha1, 64: sha256} + + +def _const_compare_digest_backport(a, b): + """ + Compare two digests of equal length in constant time. + + The digests must be of type str/bytes. + Returns True if the digests match, and False otherwise. + """ + result = abs(len(a) - len(b)) + for left, right in zip(bytearray(a), bytearray(b)): + result |= left ^ right + return result == 0 + + +_const_compare_digest = getattr(hmac, "compare_digest", _const_compare_digest_backport) + +try: # Test for SSL features + import ssl + from ssl import CERT_REQUIRED, wrap_socket +except ImportError: + pass + +try: + from ssl import HAS_SNI # Has SNI? 
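+ # (Editor's note: if this import fails, the module-level
+ # HAS_SNI = False default stands, and ssl_wrap_socket() later warns
+ # with SNIMissingWarning for hostname-based HTTPS connections.)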
+except ImportError: + pass + +try: + from .ssltransport import SSLTransport +except ImportError: + pass + + +try: # Platform-specific: Python 3.6 + from ssl import PROTOCOL_TLS + + PROTOCOL_SSLv23 = PROTOCOL_TLS +except ImportError: + try: + from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS + + PROTOCOL_SSLv23 = PROTOCOL_TLS + except ImportError: + PROTOCOL_SSLv23 = PROTOCOL_TLS = 2 + +try: + from ssl import PROTOCOL_TLS_CLIENT +except ImportError: + PROTOCOL_TLS_CLIENT = PROTOCOL_TLS + + +try: + from ssl import OP_NO_COMPRESSION, OP_NO_SSLv2, OP_NO_SSLv3 +except ImportError: + OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000 + OP_NO_COMPRESSION = 0x20000 + + +try: # OP_NO_TICKET was added in Python 3.6 + from ssl import OP_NO_TICKET +except ImportError: + OP_NO_TICKET = 0x4000 + + +# A secure default. +# Sources for more information on TLS ciphers: +# +# - https://wiki.mozilla.org/Security/Server_Side_TLS +# - https://www.ssllabs.com/projects/best-practices/index.html +# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ +# +# The general intent is: +# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE), +# - prefer ECDHE over DHE for better performance, +# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance and +# security, +# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common, +# - disable NULL authentication, MD5 MACs, DSS, and other +# insecure ciphers for security reasons. +# - NOTE: TLS 1.3 cipher suites are managed through a different interface +# not exposed by CPython (yet!) and are enabled by default if they're available. +DEFAULT_CIPHERS = ":".join( + [ + "ECDHE+AESGCM", + "ECDHE+CHACHA20", + "DHE+AESGCM", + "DHE+CHACHA20", + "ECDH+AESGCM", + "DH+AESGCM", + "ECDH+AES", + "DH+AES", + "RSA+AESGCM", + "RSA+AES", + "!aNULL", + "!eNULL", + "!MD5", + "!DSS", + ] +) + +try: + from ssl import SSLContext # Modern SSL? +except ImportError: + + class SSLContext(object): # Platform-specific: Python 2 + def __init__(self, protocol_version): + self.protocol = protocol_version + # Use default values from a real SSLContext + self.check_hostname = False + self.verify_mode = ssl.CERT_NONE + self.ca_certs = None + self.options = 0 + self.certfile = None + self.keyfile = None + self.ciphers = None + + def load_cert_chain(self, certfile, keyfile): + self.certfile = certfile + self.keyfile = keyfile + + def load_verify_locations(self, cafile=None, capath=None, cadata=None): + self.ca_certs = cafile + + if capath is not None: + raise SSLError("CA directories not supported in older Pythons") + + if cadata is not None: + raise SSLError("CA data not supported in older Pythons") + + def set_ciphers(self, cipher_suite): + self.ciphers = cipher_suite + + def wrap_socket(self, socket, server_hostname=None, server_side=False): + warnings.warn( + "A true SSLContext object is not available. This prevents " + "urllib3 from configuring SSL appropriately and may cause " + "certain SSL connections to fail. You can upgrade to a newer " + "version of Python to solve this. 
For more information, see " + "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings", + InsecurePlatformWarning, + ) + kwargs = { + "keyfile": self.keyfile, + "certfile": self.certfile, + "ca_certs": self.ca_certs, + "cert_reqs": self.verify_mode, + "ssl_version": self.protocol, + "server_side": server_side, + } + return wrap_socket(socket, ciphers=self.ciphers, **kwargs) + + +def assert_fingerprint(cert, fingerprint): + """ + Checks if given fingerprint matches the supplied certificate. + + :param cert: + Certificate as bytes object. + :param fingerprint: + Fingerprint as string of hexdigits, can be interspersed by colons. + """ + + fingerprint = fingerprint.replace(":", "").lower() + digest_length = len(fingerprint) + hashfunc = HASHFUNC_MAP.get(digest_length) + if not hashfunc: + raise SSLError("Fingerprint of invalid length: {0}".format(fingerprint)) + + # We need encode() here for py32; works on py2 and p33. + fingerprint_bytes = unhexlify(fingerprint.encode()) + + cert_digest = hashfunc(cert).digest() + + if not _const_compare_digest(cert_digest, fingerprint_bytes): + raise SSLError( + 'Fingerprints did not match. Expected "{0}", got "{1}".'.format( + fingerprint, hexlify(cert_digest) + ) + ) + + +def resolve_cert_reqs(candidate): + """ + Resolves the argument to a numeric constant, which can be passed to + the wrap_socket function/method from the ssl module. + Defaults to :data:`ssl.CERT_REQUIRED`. + If given a string it is assumed to be the name of the constant in the + :mod:`ssl` module or its abbreviation. + (So you can specify `REQUIRED` instead of `CERT_REQUIRED`. + If it's neither `None` nor a string we assume it is already the numeric + constant which can directly be passed to wrap_socket. + """ + if candidate is None: + return CERT_REQUIRED + + if isinstance(candidate, str): + res = getattr(ssl, candidate, None) + if res is None: + res = getattr(ssl, "CERT_" + candidate) + return res + + return candidate + + +def resolve_ssl_version(candidate): + """ + like resolve_cert_reqs + """ + if candidate is None: + return PROTOCOL_TLS + + if isinstance(candidate, str): + res = getattr(ssl, candidate, None) + if res is None: + res = getattr(ssl, "PROTOCOL_" + candidate) + return res + + return candidate + + +def create_urllib3_context( + ssl_version=None, cert_reqs=None, options=None, ciphers=None +): + """All arguments have the same meaning as ``ssl_wrap_socket``. + + By default, this function does a lot of the same work that + ``ssl.create_default_context`` does on Python 3.4+. It: + + - Disables SSLv2, SSLv3, and compression + - Sets a restricted set of server ciphers + + If you wish to enable SSLv3, you can do:: + + from urllib3.util import ssl_ + context = ssl_.create_urllib3_context() + context.options &= ~ssl_.OP_NO_SSLv3 + + You can do the same to enable compression (substituting ``COMPRESSION`` + for ``SSLv3`` in the last line above). + + :param ssl_version: + The desired protocol version to use. This will default to + PROTOCOL_SSLv23 which will negotiate the highest protocol that both + the server and your installation of OpenSSL support. + :param cert_reqs: + Whether to require the certificate verification. This defaults to + ``ssl.CERT_REQUIRED``. + :param options: + Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``, + ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``, and ``ssl.OP_NO_TICKET``. + :param ciphers: + Which cipher suites to allow the server to select. 
+ :returns: + Constructed SSLContext object with specified options + :rtype: SSLContext + """ + # PROTOCOL_TLS is deprecated in Python 3.10 + if not ssl_version or ssl_version == PROTOCOL_TLS: + ssl_version = PROTOCOL_TLS_CLIENT + + context = SSLContext(ssl_version) + + context.set_ciphers(ciphers or DEFAULT_CIPHERS) + + # Setting the default here, as we may have no ssl module on import + cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs + + if options is None: + options = 0 + # SSLv2 is easily broken and is considered harmful and dangerous + options |= OP_NO_SSLv2 + # SSLv3 has several problems and is now dangerous + options |= OP_NO_SSLv3 + # Disable compression to prevent CRIME attacks for OpenSSL 1.0+ + # (issue #309) + options |= OP_NO_COMPRESSION + # TLSv1.2 only. Unless set explicitly, do not request tickets. + # This may save some bandwidth on wire, and although the ticket is encrypted, + # there is a risk associated with it being on wire, + # if the server is not rotating its ticketing keys properly. + options |= OP_NO_TICKET + + context.options |= options + + # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is + # necessary for conditional client cert authentication with TLS 1.3. + # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older + # versions of Python. We only enable on Python 3.7.4+ or if certificate + # verification is enabled to work around Python issue #37428 + # See: https://bugs.python.org/issue37428 + if (cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4)) and getattr( + context, "post_handshake_auth", None + ) is not None: + context.post_handshake_auth = True + + def disable_check_hostname(): + if ( + getattr(context, "check_hostname", None) is not None + ): # Platform-specific: Python 3.2 + # We do our own verification, including fingerprints and alternative + # hostnames. So disable it here + context.check_hostname = False + + # The order of the below lines setting verify_mode and check_hostname + # matter due to safe-guards SSLContext has to prevent an SSLContext with + # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more + # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used + # or not so we don't know the initial state of the freshly created SSLContext. + if cert_reqs == ssl.CERT_REQUIRED: + context.verify_mode = cert_reqs + disable_check_hostname() + else: + disable_check_hostname() + context.verify_mode = cert_reqs + + # Enable logging of TLS session keys via defacto standard environment variable + # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values. + if hasattr(context, "keylog_filename"): + sslkeylogfile = os.environ.get("SSLKEYLOGFILE") + if sslkeylogfile: + context.keylog_filename = sslkeylogfile + + return context + + +def ssl_wrap_socket( + sock, + keyfile=None, + certfile=None, + cert_reqs=None, + ca_certs=None, + server_hostname=None, + ssl_version=None, + ciphers=None, + ssl_context=None, + ca_cert_dir=None, + key_password=None, + ca_cert_data=None, + tls_in_tls=False, +): + """ + All arguments except for server_hostname, ssl_context, and ca_cert_dir have + the same meaning as they do when using :func:`ssl.wrap_socket`. + + :param server_hostname: + When SNI is supported, the expected hostname of the certificate + :param ssl_context: + A pre-made :class:`SSLContext` object. If none is provided, one will + be created using :func:`create_urllib3_context`. 
+ :param ciphers: + A string of ciphers we wish the client to support. + :param ca_cert_dir: + A directory containing CA certificates in multiple separate files, as + supported by OpenSSL's -CApath flag or the capath argument to + SSLContext.load_verify_locations(). + :param key_password: + Optional password if the keyfile is encrypted. + :param ca_cert_data: + Optional string containing CA certificates in PEM format suitable for + passing as the cadata parameter to SSLContext.load_verify_locations() + :param tls_in_tls: + Use SSLTransport to wrap the existing socket. + """ + context = ssl_context + if context is None: + # Note: This branch of code and all the variables in it are no longer + # used by urllib3 itself. We should consider deprecating and removing + # this code. + context = create_urllib3_context(ssl_version, cert_reqs, ciphers=ciphers) + + if ca_certs or ca_cert_dir or ca_cert_data: + try: + context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data) + except (IOError, OSError) as e: + raise SSLError(e) + + elif ssl_context is None and hasattr(context, "load_default_certs"): + # try to load OS default certs; works well on Windows (require Python3.4+) + context.load_default_certs() + + # Attempt to detect if we get the goofy behavior of the + # keyfile being encrypted and OpenSSL asking for the + # passphrase via the terminal and instead error out. + if keyfile and key_password is None and _is_key_file_encrypted(keyfile): + raise SSLError("Client private key is encrypted, password is required") + + if certfile: + if key_password is None: + context.load_cert_chain(certfile, keyfile) + else: + context.load_cert_chain(certfile, keyfile, key_password) + + try: + if hasattr(context, "set_alpn_protocols"): + context.set_alpn_protocols(ALPN_PROTOCOLS) + except NotImplementedError: # Defensive: in CI, we always have set_alpn_protocols + pass + + # If we detect server_hostname is an IP address then the SNI + # extension should not be used according to RFC3546 Section 3.1 + use_sni_hostname = server_hostname and not is_ipaddress(server_hostname) + # SecureTransport uses server_hostname in certificate verification. + send_sni = (use_sni_hostname and HAS_SNI) or ( + IS_SECURETRANSPORT and server_hostname + ) + # Do not warn the user if server_hostname is an invalid SNI hostname. + if not HAS_SNI and use_sni_hostname: + warnings.warn( + "An HTTPS request has been made, but the SNI (Server Name " + "Indication) extension to TLS is not available on this platform. " + "This may cause the server to present an incorrect TLS " + "certificate, which can cause validation failures. You can upgrade to " + "a newer version of Python to solve this. For more information, see " + "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" + "#ssl-warnings", + SNIMissingWarning, + ) + + if send_sni: + ssl_sock = _ssl_wrap_socket_impl( + sock, context, tls_in_tls, server_hostname=server_hostname + ) + else: + ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls) + return ssl_sock + + +def is_ipaddress(hostname): + """Detects whether the hostname given is an IPv4 or IPv6 address. + Also detects IPv6 addresses with Zone IDs. + + :param str hostname: Hostname to examine. + :return: True if the hostname is an IP address, False otherwise. + """ + if not six.PY2 and isinstance(hostname, bytes): + # IDN A-label bytes are ASCII compatible. 
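+ # (Editor's aside, illustrative: is_ipaddress("10.0.0.1") and
+ # is_ipaddress("::1") return True, while is_ipaddress("example.com")
+ # returns False, so SNI is still sent for ordinary hostnames.)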
+ hostname = hostname.decode("ascii") + return bool(IPV4_RE.match(hostname) or BRACELESS_IPV6_ADDRZ_RE.match(hostname)) + + +def _is_key_file_encrypted(key_file): + """Detects if a key file is encrypted or not.""" + with open(key_file, "r") as f: + for line in f: + # Look for Proc-Type: 4,ENCRYPTED + if "ENCRYPTED" in line: + return True + + return False + + +def _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname=None): + if tls_in_tls: + if not SSLTransport: + # Import error, ssl is not available. + raise ProxySchemeUnsupported( + "TLS in TLS requires support for the 'ssl' module" + ) + + SSLTransport._validate_ssl_context_for_tls_in_tls(ssl_context) + return SSLTransport(sock, ssl_context, server_hostname) + + if server_hostname: + return ssl_context.wrap_socket(sock, server_hostname=server_hostname) + else: + return ssl_context.wrap_socket(sock) diff --git a/openpype/hosts/fusion/vendor/urllib3/util/ssltransport.py b/openpype/hosts/fusion/vendor/urllib3/util/ssltransport.py new file mode 100644 index 0000000000..c2186bced9 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/ssltransport.py @@ -0,0 +1,221 @@ +import io +import socket +import ssl + +from urllib3.exceptions import ProxySchemeUnsupported +from urllib3.packages import six + +SSL_BLOCKSIZE = 16384 + + +class SSLTransport: + """ + The SSLTransport wraps an existing socket and establishes an SSL connection. + + Contrary to Python's implementation of SSLSocket, it allows you to chain + multiple TLS connections together. It's particularly useful if you need to + implement TLS within TLS. + + The class supports most of the socket API operations. + """ + + @staticmethod + def _validate_ssl_context_for_tls_in_tls(ssl_context): + """ + Raises a ProxySchemeUnsupported if the provided ssl_context can't be used + for TLS in TLS. + + The only requirement is that the ssl_context provides the 'wrap_bio' + methods. + """ + + if not hasattr(ssl_context, "wrap_bio"): + if six.PY2: + raise ProxySchemeUnsupported( + "TLS in TLS requires SSLContext.wrap_bio() which isn't " + "supported on Python 2" + ) + else: + raise ProxySchemeUnsupported( + "TLS in TLS requires SSLContext.wrap_bio() which isn't " + "available on non-native SSLContext" + ) + + def __init__( + self, socket, ssl_context, server_hostname=None, suppress_ragged_eofs=True + ): + """ + Create an SSLTransport around socket using the provided ssl_context. + """ + self.incoming = ssl.MemoryBIO() + self.outgoing = ssl.MemoryBIO() + + self.suppress_ragged_eofs = suppress_ragged_eofs + self.socket = socket + + self.sslobj = ssl_context.wrap_bio( + self.incoming, self.outgoing, server_hostname=server_hostname + ) + + # Perform initial handshake. 
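+ # (Editor's note: _ssl_io_loop() below pumps handshake bytes between
+ # the two MemoryBIOs and the wrapped socket until do_handshake()
+ # stops raising SSL_ERROR_WANT_READ / SSL_ERROR_WANT_WRITE.)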
+ self._ssl_io_loop(self.sslobj.do_handshake) + + def __enter__(self): + return self + + def __exit__(self, *_): + self.close() + + def fileno(self): + return self.socket.fileno() + + def read(self, len=1024, buffer=None): + return self._wrap_ssl_read(len, buffer) + + def recv(self, len=1024, flags=0): + if flags != 0: + raise ValueError("non-zero flags not allowed in calls to recv") + return self._wrap_ssl_read(len) + + def recv_into(self, buffer, nbytes=None, flags=0): + if flags != 0: + raise ValueError("non-zero flags not allowed in calls to recv_into") + if buffer and (nbytes is None): + nbytes = len(buffer) + elif nbytes is None: + nbytes = 1024 + return self.read(nbytes, buffer) + + def sendall(self, data, flags=0): + if flags != 0: + raise ValueError("non-zero flags not allowed in calls to sendall") + count = 0 + with memoryview(data) as view, view.cast("B") as byte_view: + amount = len(byte_view) + while count < amount: + v = self.send(byte_view[count:]) + count += v + + def send(self, data, flags=0): + if flags != 0: + raise ValueError("non-zero flags not allowed in calls to send") + response = self._ssl_io_loop(self.sslobj.write, data) + return response + + def makefile( + self, mode="r", buffering=None, encoding=None, errors=None, newline=None + ): + """ + Python's httpclient uses makefile and buffered io when reading HTTP + messages and we need to support it. + + This is unfortunately a copy and paste of socket.py makefile with small + changes to point to the socket directly. + """ + if not set(mode) <= {"r", "w", "b"}: + raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,)) + + writing = "w" in mode + reading = "r" in mode or not writing + assert reading or writing + binary = "b" in mode + rawmode = "" + if reading: + rawmode += "r" + if writing: + rawmode += "w" + raw = socket.SocketIO(self, rawmode) + self.socket._io_refs += 1 + if buffering is None: + buffering = -1 + if buffering < 0: + buffering = io.DEFAULT_BUFFER_SIZE + if buffering == 0: + if not binary: + raise ValueError("unbuffered streams must be binary") + return raw + if reading and writing: + buffer = io.BufferedRWPair(raw, raw, buffering) + elif reading: + buffer = io.BufferedReader(raw, buffering) + else: + assert writing + buffer = io.BufferedWriter(raw, buffering) + if binary: + return buffer + text = io.TextIOWrapper(buffer, encoding, errors, newline) + text.mode = mode + return text + + def unwrap(self): + self._ssl_io_loop(self.sslobj.unwrap) + + def close(self): + self.socket.close() + + def getpeercert(self, binary_form=False): + return self.sslobj.getpeercert(binary_form) + + def version(self): + return self.sslobj.version() + + def cipher(self): + return self.sslobj.cipher() + + def selected_alpn_protocol(self): + return self.sslobj.selected_alpn_protocol() + + def selected_npn_protocol(self): + return self.sslobj.selected_npn_protocol() + + def shared_ciphers(self): + return self.sslobj.shared_ciphers() + + def compression(self): + return self.sslobj.compression() + + def settimeout(self, value): + self.socket.settimeout(value) + + def gettimeout(self): + return self.socket.gettimeout() + + def _decref_socketios(self): + self.socket._decref_socketios() + + def _wrap_ssl_read(self, len, buffer=None): + try: + return self._ssl_io_loop(self.sslobj.read, len, buffer) + except ssl.SSLError as e: + if e.errno == ssl.SSL_ERROR_EOF and self.suppress_ragged_eofs: + return 0 # eof, return 0. 
+ else: + raise + + def _ssl_io_loop(self, func, *args): + """Performs an I/O loop between incoming/outgoing and the socket.""" + should_loop = True + ret = None + + while should_loop: + errno = None + try: + ret = func(*args) + except ssl.SSLError as e: + if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): + # WANT_READ, and WANT_WRITE are expected, others are not. + raise e + errno = e.errno + + buf = self.outgoing.read() + self.socket.sendall(buf) + + if errno is None: + should_loop = False + elif errno == ssl.SSL_ERROR_WANT_READ: + buf = self.socket.recv(SSL_BLOCKSIZE) + if buf: + self.incoming.write(buf) + else: + self.incoming.write_eof() + return ret diff --git a/openpype/hosts/fusion/vendor/urllib3/util/timeout.py b/openpype/hosts/fusion/vendor/urllib3/util/timeout.py new file mode 100644 index 0000000000..ff69593b05 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/timeout.py @@ -0,0 +1,268 @@ +from __future__ import absolute_import + +import time + +# The default socket timeout, used by httplib to indicate that no timeout was +# specified by the user +from socket import _GLOBAL_DEFAULT_TIMEOUT + +from ..exceptions import TimeoutStateError + +# A sentinel value to indicate that no timeout was specified by the user in +# urllib3 +_Default = object() + + +# Use time.monotonic if available. +current_time = getattr(time, "monotonic", time.time) + + +class Timeout(object): + """Timeout configuration. + + Timeouts can be defined as a default for a pool: + + .. code-block:: python + + timeout = Timeout(connect=2.0, read=7.0) + http = PoolManager(timeout=timeout) + response = http.request('GET', 'http://example.com/') + + Or per-request (which overrides the default for the pool): + + .. code-block:: python + + response = http.request('GET', 'http://example.com/', timeout=Timeout(10)) + + Timeouts can be disabled by setting all the parameters to ``None``: + + .. code-block:: python + + no_timeout = Timeout(connect=None, read=None) + response = http.request('GET', 'http://example.com/, timeout=no_timeout) + + + :param total: + This combines the connect and read timeouts into one; the read timeout + will be set to the time leftover from the connect attempt. In the + event that both a connect timeout and a total are specified, or a read + timeout and a total are specified, the shorter timeout will be applied. + + Defaults to None. + + :type total: int, float, or None + + :param connect: + The maximum amount of time (in seconds) to wait for a connection + attempt to a server to succeed. Omitting the parameter will default the + connect timeout to the system default, probably `the global default + timeout in socket.py + `_. + None will set an infinite timeout for connection attempts. + + :type connect: int, float, or None + + :param read: + The maximum amount of time (in seconds) to wait between consecutive + read operations for a response from the server. Omitting the parameter + will default the read timeout to the system default, probably `the + global default timeout in socket.py + `_. + None will set an infinite timeout. + + :type read: int, float, or None + + .. note:: + + Many factors can affect the total amount of time for urllib3 to return + an HTTP response. + + For example, Python's DNS resolver does not obey the timeout specified + on the socket. Other factors that can affect total request time include + high CPU load, high swap, the program running at a low priority level, + or other behaviors. 
+ + In addition, the read and total timeouts only measure the time between + read operations on the socket connecting the client and the server, + not the total amount of time for the request to return a complete + response. For most requests, the timeout is raised because the server + has not sent the first byte in the specified time. This is not always + the case; if a server streams one byte every fifteen seconds, a timeout + of 20 seconds will not trigger, even though the request will take + several minutes to complete. + + If your goal is to cut off any request after a set amount of wall clock + time, consider having a second "watcher" thread to cut off a slow + request. + """ + + #: A sentinel object representing the default timeout value + DEFAULT_TIMEOUT = _GLOBAL_DEFAULT_TIMEOUT + + def __init__(self, total=None, connect=_Default, read=_Default): + self._connect = self._validate_timeout(connect, "connect") + self._read = self._validate_timeout(read, "read") + self.total = self._validate_timeout(total, "total") + self._start_connect = None + + def __repr__(self): + return "%s(connect=%r, read=%r, total=%r)" % ( + type(self).__name__, + self._connect, + self._read, + self.total, + ) + + # __str__ provided for backwards compatibility + __str__ = __repr__ + + @classmethod + def _validate_timeout(cls, value, name): + """Check that a timeout attribute is valid. + + :param value: The timeout value to validate + :param name: The name of the timeout attribute to validate. This is + used to specify in error messages. + :return: The validated and casted version of the given value. + :raises ValueError: If it is a numeric value less than or equal to + zero, or the type is not an integer, float, or None. + """ + if value is _Default: + return cls.DEFAULT_TIMEOUT + + if value is None or value is cls.DEFAULT_TIMEOUT: + return value + + if isinstance(value, bool): + raise ValueError( + "Timeout cannot be a boolean value. It must " + "be an int, float or None." + ) + try: + float(value) + except (TypeError, ValueError): + raise ValueError( + "Timeout value %s was %s, but it must be an " + "int, float or None." % (name, value) + ) + + try: + if value <= 0: + raise ValueError( + "Attempted to set %s timeout to %s, but the " + "timeout cannot be set to a value less " + "than or equal to 0." % (name, value) + ) + except TypeError: + # Python 3 + raise ValueError( + "Timeout value %s was %s, but it must be an " + "int, float or None." % (name, value) + ) + + return value + + @classmethod + def from_float(cls, timeout): + """Create a new Timeout from a legacy timeout value. + + The timeout value used by httplib.py sets the same timeout on the + connect(), and recv() socket requests. This creates a :class:`Timeout` + object that sets the individual timeouts to the ``timeout`` value + passed to this function. + + :param timeout: The legacy timeout value. + :type timeout: integer, float, sentinel default object, or None + :return: Timeout object + :rtype: :class:`Timeout` + """ + return Timeout(read=timeout, connect=timeout) + + def clone(self): + """Create a copy of the timeout object + + Timeout properties are stored per-pool but each request needs a fresh + Timeout object to ensure each one has its own start/stop configured. + + :return: a copy of the timeout object + :rtype: :class:`Timeout` + """ + # We can't use copy.deepcopy because that will also create a new object + # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to + # detect the user default. 
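+ # (Editor's aside, worked example: Timeout(total=5.0, connect=2.0)
+ # gives connect_timeout == 2.0, the smaller of connect and total;
+ # if the connect phase then takes 1.5s and `read` was left unset,
+ # read_timeout evaluates to 5.0 - 1.5 == 3.5.)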
+ return Timeout(connect=self._connect, read=self._read, total=self.total) + + def start_connect(self): + """Start the timeout clock, used during a connect() attempt + + :raises urllib3.exceptions.TimeoutStateError: if you attempt + to start a timer that has been started already. + """ + if self._start_connect is not None: + raise TimeoutStateError("Timeout timer has already been started.") + self._start_connect = current_time() + return self._start_connect + + def get_connect_duration(self): + """Gets the time elapsed since the call to :meth:`start_connect`. + + :return: Elapsed time in seconds. + :rtype: float + :raises urllib3.exceptions.TimeoutStateError: if you attempt + to get duration for a timer that hasn't been started. + """ + if self._start_connect is None: + raise TimeoutStateError( + "Can't get connect duration for timer that has not started." + ) + return current_time() - self._start_connect + + @property + def connect_timeout(self): + """Get the value to use when setting a connection timeout. + + This will be a positive float or integer, the value None + (never timeout), or the default system timeout. + + :return: Connect timeout. + :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None + """ + if self.total is None: + return self._connect + + if self._connect is None or self._connect is self.DEFAULT_TIMEOUT: + return self.total + + return min(self._connect, self.total) + + @property + def read_timeout(self): + """Get the value for the read timeout. + + This assumes some time has elapsed in the connection timeout and + computes the read timeout appropriately. + + If self.total is set, the read timeout is dependent on the amount of + time taken by the connect timeout. If the connection time has not been + established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be + raised. + + :return: Value to use for the read timeout. + :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None + :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect` + has not yet been called on this object. + """ + if ( + self.total is not None + and self.total is not self.DEFAULT_TIMEOUT + and self._read is not None + and self._read is not self.DEFAULT_TIMEOUT + ): + # In case the connect timeout has not yet been established. + if self._start_connect is None: + return self._read + return max(0, min(self.total - self.get_connect_duration(), self._read)) + elif self.total is not None and self.total is not self.DEFAULT_TIMEOUT: + return max(0, self.total - self.get_connect_duration()) + else: + return self._read diff --git a/openpype/hosts/fusion/vendor/urllib3/util/url.py b/openpype/hosts/fusion/vendor/urllib3/util/url.py new file mode 100644 index 0000000000..81a03da9e3 --- /dev/null +++ b/openpype/hosts/fusion/vendor/urllib3/util/url.py @@ -0,0 +1,432 @@ +from __future__ import absolute_import + +import re +from collections import namedtuple + +from ..exceptions import LocationParseError +from ..packages import six + +url_attrs = ["scheme", "auth", "host", "port", "path", "query", "fragment"] + +# We only want to normalize urls with an HTTP(S) scheme. +# urllib3 infers URLs without a scheme (None) to be http. +NORMALIZABLE_SCHEMES = ("http", "https", None) + +# Almost all of these patterns were derived from the +# 'rfc3986' module: https://github.com/python-hyper/rfc3986 +PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}") +SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)") +URI_RE = re.compile( + r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" + r"(?://([^\\/?#]*))?" 
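+ # (Editor's note: the groups capture scheme, authority, path, query
+ # and fragment, in the spirit of the RFC 3986 Appendix B pattern.)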
+ r"([^?#]*)" + r"(?:\?([^#]*))?" + r"(?:#(.*))?$", + re.UNICODE | re.DOTALL, +) + +IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}" +HEX_PAT = "[0-9A-Fa-f]{1,4}" +LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT) +_subs = {"hex": HEX_PAT, "ls32": LS32_PAT} +_variations = [ + # 6( h16 ":" ) ls32 + "(?:%(hex)s:){6}%(ls32)s", + # "::" 5( h16 ":" ) ls32 + "::(?:%(hex)s:){5}%(ls32)s", + # [ h16 ] "::" 4( h16 ":" ) ls32 + "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s", + # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 + "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s", + # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 + "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s", + # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 + "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s", + # [ *4( h16 ":" ) h16 ] "::" ls32 + "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s", + # [ *5( h16 ":" ) h16 ] "::" h16 + "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s", + # [ *6( h16 ":" ) h16 ] "::" + "(?:(?:%(hex)s:){0,6}%(hex)s)?::", +] + +UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._!\-~" +IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" +ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" +IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" +REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*" +TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$") + +IPV4_RE = re.compile("^" + IPV4_PAT + "$") +IPV6_RE = re.compile("^" + IPV6_PAT + "$") +IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$") +BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$") +ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$") + +_HOST_PORT_PAT = ("^(%s|%s|%s)(?::([0-9]{0,5}))?$") % ( + REG_NAME_PAT, + IPV4_PAT, + IPV6_ADDRZ_PAT, +) +_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL) + +UNRESERVED_CHARS = set( + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~" +) +SUB_DELIM_CHARS = set("!$&'()*+,;=") +USERINFO_CHARS = UNRESERVED_CHARS | SUB_DELIM_CHARS | {":"} +PATH_CHARS = USERINFO_CHARS | {"@", "/"} +QUERY_CHARS = FRAGMENT_CHARS = PATH_CHARS | {"?"} + + +class Url(namedtuple("Url", url_attrs)): + """ + Data structure for representing an HTTP URL. Used as a return value for + :func:`parse_url`. Both the scheme and host are normalized as they are + both case-insensitive according to RFC 3986. + """ + + __slots__ = () + + def __new__( + cls, + scheme=None, + auth=None, + host=None, + port=None, + path=None, + query=None, + fragment=None, + ): + if path and not path.startswith("/"): + path = "/" + path + if scheme is not None: + scheme = scheme.lower() + return super(Url, cls).__new__( + cls, scheme, auth, host, port, path, query, fragment + ) + + @property + def hostname(self): + """For backwards-compatibility with urlparse. We're nice like that.""" + return self.host + + @property + def request_uri(self): + """Absolute path including the query string.""" + uri = self.path or "/" + + if self.query is not None: + uri += "?" + self.query + + return uri + + @property + def netloc(self): + """Network location including host and port""" + if self.port: + return "%s:%d" % (self.host, self.port) + return self.host + + @property + def url(self): + """ + Convert self into a url + + This function should more or less round-trip with :func:`.parse_url`. 
The
+        returned url may not be exactly the same as the url inputted to
+        :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls
+        with a blank port will have : removed).
+
+        Example: ::
+
+            >>> U = parse_url('http://google.com/mail/')
+            >>> U.url
+            'http://google.com/mail/'
+            >>> Url('http', 'username:password', 'host.com', 80,
+            ...     '/path', 'query', 'fragment').url
+            'http://username:password@host.com:80/path?query#fragment'
+        """
+        scheme, auth, host, port, path, query, fragment = self
+        url = u""
+
+        # We use "is not None" because we want things to happen with empty strings (or 0 port)
+        if scheme is not None:
+            url += scheme + u"://"
+        if auth is not None:
+            url += auth + u"@"
+        if host is not None:
+            url += host
+        if port is not None:
+            url += u":" + str(port)
+        if path is not None:
+            url += path
+        if query is not None:
+            url += u"?" + query
+        if fragment is not None:
+            url += u"#" + fragment
+
+        return url
+
+    def __str__(self):
+        return self.url
+
+
+def split_first(s, delims):
+    """
+    .. deprecated:: 1.25
+
+    Given a string and an iterable of delimiters, split on the first found
+    delimiter. Return two split parts and the matched delimiter.
+
+    If not found, then the first part is the full input string.
+
+    Example::
+
+        >>> split_first('foo/bar?baz', '?/=')
+        ('foo', 'bar?baz', '/')
+        >>> split_first('foo/bar?baz', '123')
+        ('foo/bar?baz', '', None)
+
+    Scales linearly with the number of delims. Not ideal for a large number of delims.
+    """
+    min_idx = None
+    min_delim = None
+    for d in delims:
+        idx = s.find(d)
+        if idx < 0:
+            continue
+
+        if min_idx is None or idx < min_idx:
+            min_idx = idx
+            min_delim = d
+
+    if min_idx is None or min_idx < 0:
+        return s, "", None
+
+    return s[:min_idx], s[min_idx + 1 :], min_delim
+
+
+def _encode_invalid_chars(component, allowed_chars, encoding="utf-8"):
+    """Percent-encodes a URI component without reapplying
+    onto an already percent-encoded component.
+    """
+    if component is None:
+        return component
+
+    component = six.ensure_text(component)
+
+    # Normalize existing percent-encoded bytes.
+    # Try to see if the component we're encoding is already percent-encoded
+    # so we can skip all '%' characters but still encode all others.
+    component, percent_encodings = PERCENT_RE.subn(
+        lambda match: match.group(0).upper(), component
+    )
+
+    uri_bytes = component.encode("utf-8", "surrogatepass")
+    is_percent_encoded = percent_encodings == uri_bytes.count(b"%")
+    encoded_component = bytearray()
+
+    for i in range(0, len(uri_bytes)):
+        # Will return a single character bytestring on both Python 2 & 3
+        byte = uri_bytes[i : i + 1]
+        byte_ord = ord(byte)
+        if (is_percent_encoded and byte == b"%") or (
+            byte_ord < 128 and byte.decode() in allowed_chars
+        ):
+            encoded_component += byte
+            continue
+        encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper()))
+
+    return encoded_component.decode(encoding)
+
+
+def _remove_path_dot_segments(path):
+    # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code
+    segments = path.split("/")  # Turn the path into a list of segments
+    output = []  # Initialize the variable to use to store output
+
+    for segment in segments:
+        # '.'
is the current directory, so ignore it, it is superfluous + if segment == ".": + continue + # Anything other than '..', should be appended to the output + elif segment != "..": + output.append(segment) + # In this case segment == '..', if we can, we should pop the last + # element + elif output: + output.pop() + + # If the path starts with '/' and the output is empty or the first string + # is non-empty + if path.startswith("/") and (not output or output[0]): + output.insert(0, "") + + # If the path starts with '/.' or '/..' ensure we add one more empty + # string to add a trailing '/' + if path.endswith(("/.", "/..")): + output.append("") + + return "/".join(output) + + +def _normalize_host(host, scheme): + if host: + if isinstance(host, six.binary_type): + host = six.ensure_str(host) + + if scheme in NORMALIZABLE_SCHEMES: + is_ipv6 = IPV6_ADDRZ_RE.match(host) + if is_ipv6: + match = ZONE_ID_RE.search(host) + if match: + start, end = match.span(1) + zone_id = host[start:end] + + if zone_id.startswith("%25") and zone_id != "%25": + zone_id = zone_id[3:] + else: + zone_id = zone_id[1:] + zone_id = "%" + _encode_invalid_chars(zone_id, UNRESERVED_CHARS) + return host[:start].lower() + zone_id + host[end:] + else: + return host.lower() + elif not IPV4_RE.match(host): + return six.ensure_str( + b".".join([_idna_encode(label) for label in host.split(".")]) + ) + return host + + +def _idna_encode(name): + if name and any([ord(x) > 128 for x in name]): + try: + import idna + except ImportError: + six.raise_from( + LocationParseError("Unable to parse URL without the 'idna' module"), + None, + ) + try: + return idna.encode(name.lower(), strict=True, std3_rules=True) + except idna.IDNAError: + six.raise_from( + LocationParseError(u"Name '%s' is not a valid IDNA label" % name), None + ) + return name.lower().encode("ascii") + + +def _encode_target(target): + """Percent-encodes a request target so that there are no invalid characters""" + path, query = TARGET_RE.match(target).groups() + target = _encode_invalid_chars(path, PATH_CHARS) + query = _encode_invalid_chars(query, QUERY_CHARS) + if query is not None: + target += "?" + query + return target + + +def parse_url(url): + """ + Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is + performed to parse incomplete urls. Fields not provided will be None. + This parser is RFC 3986 compliant. + + The parser logic and helper functions are based heavily on + work done in the ``rfc3986`` module. + + :param str url: URL to parse into a :class:`.Url` namedtuple. + + Partly backwards-compatible with :mod:`urlparse`. + + Example:: + + >>> parse_url('http://google.com/mail/') + Url(scheme='http', host='google.com', port=None, path='/mail/', ...) + >>> parse_url('google.com:80') + Url(scheme=None, host='google.com', port=80, path=None, ...) + >>> parse_url('/foo?bar') + Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...) 
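Beyond the doctest above, a short sketch of the normalization `parse_url` applies to HTTP(S) URLs; the import path assumes the vendored location this diff adds the module to:

```python
from openpype.hosts.fusion.vendor.urllib3.util.url import parse_url

url = parse_url("HTTP://Example.COM:8080/a/./b/../c?q=a b#frag")
print(url.scheme)  # 'http'        - scheme is lowercased
print(url.host)    # 'example.com' - host is lowercased as well
print(url.port)    # 8080          - parsed to an int and range-checked
print(url.path)    # '/a/c'        - dot segments removed (RFC 3986)
print(url.query)   # 'q=a%20b'     - invalid characters percent-encoded
print(url.url)     # the normalized string, rebuilt via Url.url
```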
+    """
+    if not url:
+        # Empty
+        return Url()
+
+    source_url = url
+    if not SCHEME_RE.search(url):
+        url = "//" + url
+
+    try:
+        scheme, authority, path, query, fragment = URI_RE.match(url).groups()
+        normalize_uri = scheme is None or scheme.lower() in NORMALIZABLE_SCHEMES
+
+        if scheme:
+            scheme = scheme.lower()
+
+        if authority:
+            auth, _, host_port = authority.rpartition("@")
+            auth = auth or None
+            host, port = _HOST_PORT_RE.match(host_port).groups()
+            if auth and normalize_uri:
+                auth = _encode_invalid_chars(auth, USERINFO_CHARS)
+            if port == "":
+                port = None
+        else:
+            auth, host, port = None, None, None
+
+        if port is not None:
+            port = int(port)
+            if not (0 <= port <= 65535):
+                raise LocationParseError(url)
+
+        host = _normalize_host(host, scheme)
+
+        if normalize_uri and path:
+            path = _remove_path_dot_segments(path)
+            path = _encode_invalid_chars(path, PATH_CHARS)
+        if normalize_uri and query:
+            query = _encode_invalid_chars(query, QUERY_CHARS)
+        if normalize_uri and fragment:
+            fragment = _encode_invalid_chars(fragment, FRAGMENT_CHARS)
+
+    except (ValueError, AttributeError):
+        return six.raise_from(LocationParseError(source_url), None)
+
+    # For the sake of backwards compatibility we put empty
+    # string values for path if there are any defined values
+    # beyond the path in the URL.
+    # TODO: Remove this when we break backwards compatibility.
+    if not path:
+        if query is not None or fragment is not None:
+            path = ""
+        else:
+            path = None
+
+    # Ensure that each part of the URL is a `str` for
+    # backwards compatibility.
+    if isinstance(url, six.text_type):
+        ensure_func = six.ensure_text
+    else:
+        ensure_func = six.ensure_str
+
+    def ensure_type(x):
+        return x if x is None else ensure_func(x)
+
+    return Url(
+        scheme=ensure_type(scheme),
+        auth=ensure_type(auth),
+        host=ensure_type(host),
+        port=port,
+        path=ensure_type(path),
+        query=ensure_type(query),
+        fragment=ensure_type(fragment),
+    )
+
+
+def get_host(url):
+    """
+    Deprecated. Use :func:`parse_url` instead.
+    """
+    p = parse_url(url)
+    return p.scheme or "http", p.hostname, p.port
diff --git a/openpype/hosts/fusion/vendor/urllib3/util/wait.py b/openpype/hosts/fusion/vendor/urllib3/util/wait.py
new file mode 100644
index 0000000000..c280646c7b
--- /dev/null
+++ b/openpype/hosts/fusion/vendor/urllib3/util/wait.py
@@ -0,0 +1,153 @@
+import errno
+import select
+import sys
+from functools import partial
+
+try:
+    from time import monotonic
+except ImportError:
+    from time import time as monotonic
+
+__all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"]
+
+
+class NoWayToWaitForSocketError(Exception):
+    pass
+
+
+# How should we wait on sockets?
+#
+# There are two types of APIs you can use for waiting on sockets: the fancy
+# modern stateful APIs like epoll/kqueue, and the older stateless APIs like
+# select/poll. The stateful APIs are more efficient when you have a lot of
+# sockets to keep track of, because you can set them up once and then use them
+# lots of times. But we only ever want to wait on a single socket at a time
+# and don't want to keep track of state, so the stateless APIs are actually
+# more efficient. So we want to use select() or poll().
+#
+# Now, how do we choose between select() and poll()? On traditional Unixes,
+# select() has a strange calling convention that makes it slow, or fail
+# altogether, for high-numbered file descriptors. The point of poll() is to fix
+# that, so on Unixes, we prefer poll().
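Before the platform-specific details continue below, a brief usage sketch of the `wait_for_read`/`wait_for_write` helpers this module ends with; the import path assumes the vendored location this diff adds the file to:

```python
import socket

from openpype.hosts.fusion.vendor.urllib3.util.wait import (
    wait_for_read,
    wait_for_write,
)

a, b = socket.socketpair()
b.sendall(b"ping")

print(wait_for_read(a, timeout=1.0))   # True: data is waiting on `a`
print(wait_for_write(a, timeout=1.0))  # True: `a` has buffer space to write
print(wait_for_read(b, timeout=0.1))   # False: nothing to read on `b`

a.close()
b.close()
```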
+#
+# On Windows, there is no poll() (or at least Python doesn't provide a wrapper
+# for it), but that's OK, because on Windows, select() doesn't have this
+# strange calling convention; plain select() works fine.
+#
+# So: on Windows we use select(), and everywhere else we use poll(). We also
+# fall back to select() in case poll() is somehow broken or missing.
+
+if sys.version_info >= (3, 5):
+    # Modern Python, which retries syscalls by default
+    def _retry_on_intr(fn, timeout):
+        return fn(timeout)
+
+
+else:
+    # Old and broken Pythons.
+    def _retry_on_intr(fn, timeout):
+        if timeout is None:
+            deadline = float("inf")
+        else:
+            deadline = monotonic() + timeout
+
+        while True:
+            try:
+                return fn(timeout)
+            # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7
+            except (OSError, select.error) as e:
+                # 'e.args[0]' incantation works for both OSError and select.error
+                if e.args[0] != errno.EINTR:
+                    raise
+                else:
+                    timeout = deadline - monotonic()
+                    if timeout < 0:
+                        timeout = 0
+                    if timeout == float("inf"):
+                        timeout = None
+                    continue
+
+
+def select_wait_for_socket(sock, read=False, write=False, timeout=None):
+    if not read and not write:
+        raise RuntimeError("must specify at least one of read=True, write=True")
+    rcheck = []
+    wcheck = []
+    if read:
+        rcheck.append(sock)
+    if write:
+        wcheck.append(sock)
+    # When doing a non-blocking connect, most systems signal success by
+    # marking the socket writable. Windows, though, signals success by marking
+    # it as "exceptional". We paper over the difference by checking the write
+    # sockets for both conditions. (The stdlib selectors module does the same
+    # thing.)
+    fn = partial(select.select, rcheck, wcheck, wcheck)
+    rready, wready, xready = _retry_on_intr(fn, timeout)
+    return bool(rready or wready or xready)
+
+
+def poll_wait_for_socket(sock, read=False, write=False, timeout=None):
+    if not read and not write:
+        raise RuntimeError("must specify at least one of read=True, write=True")
+    mask = 0
+    if read:
+        mask |= select.POLLIN
+    if write:
+        mask |= select.POLLOUT
+    poll_obj = select.poll()
+    poll_obj.register(sock, mask)
+
+    # For some reason, poll() takes timeout in milliseconds
+    def do_poll(t):
+        if t is not None:
+            t *= 1000
+        return poll_obj.poll(t)
+
+    return bool(_retry_on_intr(do_poll, timeout))
+
+
+def null_wait_for_socket(*args, **kwargs):
+    raise NoWayToWaitForSocketError("no select-equivalent available")
+
+
+def _have_working_poll():
+    # Apparently some systems have a select.poll that fails as soon as you try
+    # to use it, either due to strange configuration or broken monkeypatching
+    # from libraries like eventlet/greenlet.
+    try:
+        poll_obj = select.poll()
+        _retry_on_intr(poll_obj.poll, 0)
+    except (AttributeError, OSError):
+        return False
+    else:
+        return True
+
+
+def wait_for_socket(*args, **kwargs):
+    # We delay choosing which implementation to use until the first time we're
+    # called. We could do it at import time, but then we might make the wrong
+    # decision if someone goes wild with monkeypatching select.poll after
+    # we're imported.
+    global wait_for_socket
+    if _have_working_poll():
+        wait_for_socket = poll_wait_for_socket
+    elif hasattr(select, "select"):
+        wait_for_socket = select_wait_for_socket
+    else:  # Platform-specific: Appengine.
+        wait_for_socket = null_wait_for_socket
+    return wait_for_socket(*args, **kwargs)
+
+
+def wait_for_read(sock, timeout=None):
+    """Waits for reading to be available on a given socket.
+    Returns True if the socket is readable, or False if the timeout expired.
+    """
+    return wait_for_socket(sock, read=True, timeout=timeout)
+
+
+def wait_for_write(sock, timeout=None):
+    """Waits for writing to be available on a given socket.
+    Returns True if the socket is writable, or False if the timeout expired.
+    """
+    return wait_for_socket(sock, write=True, timeout=timeout)
diff --git a/openpype/hosts/harmony/api/README.md b/openpype/hosts/harmony/api/README.md
index 12f21f551a..be3920fe29 100644
--- a/openpype/hosts/harmony/api/README.md
+++ b/openpype/hosts/harmony/api/README.md
@@ -610,7 +610,7 @@ class ImageSequenceLoader(load.LoaderPlugin):
 
     def update(self, container, representation):
         node = container.pop("node")
-        project_name = legacy_io.active_project()
+        project_name = get_current_project_name()
         version = get_version_by_id(project_name, representation["parent"])
         files = []
         for f in version["data"]["files"]:
diff --git a/openpype/hosts/harmony/plugins/load/load_background.py b/openpype/hosts/harmony/plugins/load/load_background.py
index c28a87791e..853d347c2e 100644
--- a/openpype/hosts/harmony/plugins/load/load_background.py
+++ b/openpype/hosts/harmony/plugins/load/load_background.py
@@ -238,7 +238,8 @@ class BackgroundLoader(load.LoaderPlugin):
 
     def load(self, context, name=None, namespace=None, data=None):
 
-        with open(self.fname) as json_file:
+        path = self.filepath_from_context(context)
+        with open(path) as json_file:
             data = json.load(json_file)
 
         layers = list()
@@ -251,7 +252,7 @@ class BackgroundLoader(load.LoaderPlugin):
             if layer.get("filename"):
                 layers.append(layer["filename"])
 
-        bg_folder = os.path.dirname(self.fname)
+        bg_folder = os.path.dirname(path)
 
         subset_name = context["subset"]["name"]
         # read_node_name += "_{}".format(uuid.uuid4())
diff --git a/openpype/hosts/harmony/plugins/load/load_imagesequence.py b/openpype/hosts/harmony/plugins/load/load_imagesequence.py
index b95d25f507..754f82e5d5 100644
--- a/openpype/hosts/harmony/plugins/load/load_imagesequence.py
+++ b/openpype/hosts/harmony/plugins/load/load_imagesequence.py
@@ -34,7 +34,7 @@ class ImageSequenceLoader(load.LoaderPlugin):
             data (dict, optional): Additional data passed into loader.
""" - fname = Path(self.fname) + fname = Path(self.filepath_from_context(context)) self_name = self.__class__.__name__ collections, remainder = clique.assemble( os.listdir(fname.parent.as_posix()) diff --git a/openpype/hosts/harmony/plugins/load/load_template.py b/openpype/hosts/harmony/plugins/load/load_template.py index f3c69a9104..a78a1bf1ec 100644 --- a/openpype/hosts/harmony/plugins/load/load_template.py +++ b/openpype/hosts/harmony/plugins/load/load_template.py @@ -82,7 +82,6 @@ class TemplateLoader(load.LoaderPlugin): node = harmony.find_node_by_name(node_name, "GROUP") self_name = self.__class__.__name__ - update_and_replace = False if is_representation_from_latest(representation): self._set_green(node) else: diff --git a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py index f6b26eb3e8..af825c052a 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py +++ b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py @@ -5,7 +5,6 @@ from pathlib import Path import attr from openpype.lib import get_formatted_current_time -from openpype.pipeline import legacy_io from openpype.pipeline import publish from openpype.pipeline.publish import RenderInstance import openpype.hosts.harmony.api as harmony @@ -99,6 +98,8 @@ class CollectFarmRender(publish.AbstractCollectRender): self_name = self.__class__.__name__ + asset_name = context.data["asset"] + for node in context.data["allNodes"]: data = harmony.read(node) @@ -141,18 +142,18 @@ class CollectFarmRender(publish.AbstractCollectRender): source=context.data["currentFile"], label=node.split("/")[1], subset=subset_name, - asset=legacy_io.Session["AVALON_ASSET"], + asset=asset_name, task=task_name, attachTo=False, setMembers=[node], publish=info[4], - review=False, renderer=None, priority=50, name=node.split("/")[1], family="render.farm", families=["render.farm"], + farm=True, resolutionWidth=context.data["resolutionWidth"], resolutionHeight=context.data["resolutionHeight"], @@ -173,7 +174,6 @@ class CollectFarmRender(publish.AbstractCollectRender): outputFormat=info[1], outputStartFrame=info[3], leadingZeros=info[2], - toBeRenderedOn='deadline', ignoreFrameHandleCheck=True ) diff --git a/openpype/hosts/harmony/plugins/publish/collect_palettes.py b/openpype/hosts/harmony/plugins/publish/collect_palettes.py index bbd60d1c55..e19057e302 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_palettes.py +++ b/openpype/hosts/harmony/plugins/publish/collect_palettes.py @@ -1,6 +1,5 @@ # -*- coding: utf-8 -*- """Collect palettes from Harmony.""" -import os import json import re @@ -32,6 +31,7 @@ class CollectPalettes(pyblish.api.ContextPlugin): if (not any([re.search(pattern, task_name) for pattern in self.allowed_tasks])): return + asset_name = context.data["asset"] for name, id in palettes.items(): instance = context.create_instance(name) @@ -39,7 +39,7 @@ class CollectPalettes(pyblish.api.ContextPlugin): "id": id, "family": "harmony.palette", 'families': [], - "asset": os.environ["AVALON_ASSET"], + "asset": asset_name, "subset": "{}{}".format("palette", name) }) self.log.info( diff --git a/openpype/hosts/harmony/plugins/publish/collect_workfile.py b/openpype/hosts/harmony/plugins/publish/collect_workfile.py index 3624147435..4492ab37a5 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_workfile.py +++ b/openpype/hosts/harmony/plugins/publish/collect_workfile.py @@ -36,5 +36,5 @@ class CollectWorkfile(pyblish.api.ContextPlugin): 
"family": family, "families": [family], "representations": [], - "asset": os.environ["AVALON_ASSET"] + "asset": context.data["asset"] }) diff --git a/openpype/hosts/harmony/plugins/publish/extract_render.py b/openpype/hosts/harmony/plugins/publish/extract_render.py index 38b09902c1..5825d95a4a 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_render.py +++ b/openpype/hosts/harmony/plugins/publish/extract_render.py @@ -94,15 +94,14 @@ class ExtractRender(pyblish.api.InstancePlugin): # Generate thumbnail. thumbnail_path = os.path.join(path, "thumbnail.png") - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") - args = [ - ffmpeg_path, + args = openpype.lib.get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", os.path.join(path, list(collections[0])[0]), "-vf", "scale=300:-1", "-vframes", "1", thumbnail_path - ] + ) process = subprocess.Popen( args, stdout=subprocess.PIPE, diff --git a/openpype/hosts/harmony/plugins/publish/extract_template.py b/openpype/hosts/harmony/plugins/publish/extract_template.py index 458bf25a3c..e75459fe1e 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_template.py +++ b/openpype/hosts/harmony/plugins/publish/extract_template.py @@ -75,7 +75,7 @@ class ExtractTemplate(publish.Extractor): instance.data["representations"] = [representation] instance.data["version_name"] = "{}_{}".format( - instance.data["subset"], os.environ["AVALON_TASK"]) + instance.data["subset"], instance.context.data["task"]) def get_backdrops(self, node: str) -> list: """Get backdrops for the node. diff --git a/openpype/hosts/harmony/plugins/publish/validate_instances.py b/openpype/hosts/harmony/plugins/publish/validate_instances.py index ac367082ef..7183de6048 100644 --- a/openpype/hosts/harmony/plugins/publish/validate_instances.py +++ b/openpype/hosts/harmony/plugins/publish/validate_instances.py @@ -1,8 +1,7 @@ -import os - import pyblish.api import openpype.hosts.harmony.api as harmony +from openpype.pipeline import get_current_asset_name from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -30,7 +29,7 @@ class ValidateInstanceRepair(pyblish.api.Action): for instance in instances: data = harmony.read(instance.data["setMembers"][0]) - data["asset"] = os.environ["AVALON_ASSET"] + data["asset"] = get_current_asset_name() harmony.imprint(instance.data["setMembers"][0], data) @@ -44,7 +43,7 @@ class ValidateInstance(pyblish.api.InstancePlugin): def process(self, instance): instance_asset = instance.data["asset"] - current_asset = os.environ["AVALON_ASSET"] + current_asset = get_current_asset_name() msg = ( "Instance asset is not the same as current asset:" f"\nInstance: {instance_asset}\nCurrent: {current_asset}" diff --git a/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py b/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py index 6e4c6955e4..866f12076a 100644 --- a/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py +++ b/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py @@ -67,7 +67,9 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin): expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\ expected_settings["handleEnd"] - if (any(re.search(pattern, os.getenv('AVALON_TASK')) + task_name = instance.context.data["task"] + + if (any(re.search(pattern, task_name) for pattern in self.skip_resolution_check)): self.log.info("Skipping resolution check because of " "task name and pattern {}".format( diff --git a/openpype/hosts/hiero/api/lib.py 
b/openpype/hosts/hiero/api/lib.py
index 09d73f5cc2..bf719160d1 100644
--- a/openpype/hosts/hiero/api/lib.py
+++ b/openpype/hosts/hiero/api/lib.py
@@ -22,9 +22,7 @@ except ImportError:
 
 from openpype.client import get_project
 from openpype.settings import get_project_settings
-from openpype.pipeline import (
-    get_current_project_name, legacy_io, Anatomy
-)
+from openpype.pipeline import Anatomy, get_current_project_name
 from openpype.pipeline.load import filter_containers
 from openpype.lib import Logger
 from . import tags
@@ -626,7 +624,7 @@ def get_publish_attribute(tag):
 
 def sync_avalon_data_to_workfile():
     # import session to get project dir
-    project_name = legacy_io.Session["AVALON_PROJECT"]
+    project_name = get_current_project_name()
     anatomy = Anatomy(project_name)
 
     work_template = anatomy.templates["work"]["path"]
@@ -821,7 +819,7 @@ class PublishAction(QtWidgets.QAction):
 #     # create root node and save all metadata
 #     root_node = hiero.core.nuke.RootNode()
 #
-#     anatomy = Anatomy(os.environ["AVALON_PROJECT"])
+#     anatomy = Anatomy(get_current_project_name())
 #     work_template = anatomy.templates["work"]["path"]
 #     root_path = anatomy.root_value_for_template(work_template)
 #
@@ -1041,7 +1039,7 @@ def _set_hrox_project_knobs(doc, **knobs):
 
 def apply_colorspace_project():
-    project_name = os.getenv("AVALON_PROJECT")
+    project_name = get_current_project_name()
     # get path to the active project
     project = get_current_project(remove_untitled=True)
     current_file = project.path()
@@ -1110,7 +1108,7 @@ def apply_colorspace_project():
 
 def apply_colorspace_clips():
-    project_name = os.getenv("AVALON_PROJECT")
+    project_name = get_current_project_name()
     project = get_current_project(remove_untitled=True)
     clips = project.clips()
@@ -1264,7 +1262,7 @@ def check_inventory_versions(track_items=None):
     if not containers:
         return
 
-    project_name = legacy_io.active_project()
+    project_name = get_current_project_name()
     filter_result = filter_containers(containers, project_name)
     for container in filter_result.latest:
         set_track_color(container["_item"], clip_color_last)
diff --git a/openpype/hosts/hiero/api/menu.py b/openpype/hosts/hiero/api/menu.py
index 6baeb38cc0..9967e9c875 100644
--- a/openpype/hosts/hiero/api/menu.py
+++ b/openpype/hosts/hiero/api/menu.py
@@ -4,12 +4,18 @@ import sys
 import hiero.core
 from hiero.ui import findMenuAction
 
+from qtpy import QtGui
+
 from openpype.lib import Logger
-from openpype.pipeline import legacy_io
 from openpype.tools.utils import host_tools
+from openpype.settings import get_project_settings
+from openpype.pipeline import (
+    get_current_project_name,
+    get_current_asset_name,
+    get_current_task_name
+)
 from . import tags
-from openpype.settings import get_project_settings
 
 log = Logger.get_logger(__name__)
 
@@ -17,6 +23,13 @@ self = sys.modules[__name__]
 self._change_context_menu = None
 
 
+def get_context_label():
+    return "{}, {}".format(
+        get_current_asset_name(),
+        get_current_task_name()
+    )
+
+
 def update_menu_task_label():
     """Update the task label in Avalon menu to current session"""
 
@@ -27,10 +40,7 @@ def update_menu_task_label():
         log.warning("Can't find menuItem: {}".format(object_name))
         return
 
-    label = "{}, {}".format(
-        legacy_io.Session["AVALON_ASSET"],
-        legacy_io.Session["AVALON_TASK"]
-    )
+    label = get_context_label()
 
     menu = found_menu.menu()
     self._change_context_menu = label
@@ -43,7 +53,6 @@ def menu_install():
 
     """
 
-    from qtpy import QtGui
    from . 
import ( publish, launch_workfiles_app, reload_config, apply_colorspace_project, apply_colorspace_clips @@ -56,10 +65,7 @@ def menu_install(): menu_name = os.environ['AVALON_LABEL'] - context_label = "{0}, {1}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) + context_label = get_context_label() self._change_context_menu = context_label @@ -154,7 +160,7 @@ def add_scripts_menu(): return # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_settings = get_project_settings(get_current_project_name()) config = project_settings["hiero"]["scriptsmenu"]["definition"] _menu = project_settings["hiero"]["scriptsmenu"]["name"] diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py index a3f8a6c524..52f96261b2 100644 --- a/openpype/hosts/hiero/api/plugin.py +++ b/openpype/hosts/hiero/api/plugin.py @@ -12,6 +12,7 @@ from openpype.settings import get_current_project_settings from openpype.lib import Logger from openpype.pipeline import LoaderPlugin, LegacyCreator from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.load import get_representation_path_from_context from . import lib log = Logger.get_logger(__name__) @@ -316,20 +317,6 @@ class Spacer(QtWidgets.QWidget): self.setLayout(layout) -def get_reference_node_parents(ref): - """Return all parent reference nodes of reference node - - Args: - ref (str): reference node. - - Returns: - list: The upstream parent reference nodes. - - """ - parents = [] - return parents - - class SequenceLoader(LoaderPlugin): """A basic SequenceLoader for Resolve @@ -393,7 +380,7 @@ class ClipLoader: active_bin = None data = dict() - def __init__(self, cls, context, **options): + def __init__(self, cls, context, path, **options): """ Initialize object Arguments: @@ -406,6 +393,7 @@ class ClipLoader: self.__dict__.update(cls.__dict__) self.context = context self.active_project = lib.get_current_project() + self.fname = path # try to get value from options or evaluate key value for `handles` self.with_handles = options.get("handles") or bool( @@ -467,7 +455,7 @@ class ClipLoader: self.data["track_name"] = "_".join([subset, representation]) self.data["versionData"] = self.context["version"]["data"] # gets file path - file = self.fname + file = get_representation_path_from_context(self.context) if not file: repr_id = repr["_id"] log.warning( diff --git a/openpype/hosts/hiero/api/tags.py b/openpype/hosts/hiero/api/tags.py index cb7bc14edb..02d8205414 100644 --- a/openpype/hosts/hiero/api/tags.py +++ b/openpype/hosts/hiero/api/tags.py @@ -5,7 +5,7 @@ import hiero from openpype.client import get_project, get_assets from openpype.lib import Logger -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name log = Logger.get_logger(__name__) @@ -142,7 +142,7 @@ def add_tags_to_workfile(): nks_pres_tags = tag_data() # Get project task types. 
- project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) tasks = project_doc["config"]["tasks"] nks_pres_tags["[Tasks]"] = {} diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py index c9bebfa8b2..05bd12d185 100644 --- a/openpype/hosts/hiero/plugins/load/load_clip.py +++ b/openpype/hosts/hiero/plugins/load/load_clip.py @@ -3,8 +3,8 @@ from openpype.client import ( get_last_version_by_subset_id ) from openpype.pipeline import ( - legacy_io, get_representation_path, + get_current_project_name, ) from openpype.lib.transcoding import ( VIDEO_EXTENSIONS, @@ -87,7 +87,8 @@ class LoadClip(phiero.SequenceLoader): }) # load clip to timeline and get main variables - track_item = phiero.ClipLoader(self, context, **options).load() + path = self.filepath_from_context(context) + track_item = phiero.ClipLoader(self, context, path, **options).load() namespace = namespace or track_item.name() version = context['version'] version_data = version.get("data", {}) @@ -147,7 +148,7 @@ class LoadClip(phiero.SequenceLoader): track_item = phiero.get_track_items( track_item_name=namespace).pop() - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc.get("data", {}) @@ -210,7 +211,7 @@ class LoadClip(phiero.SequenceLoader): @classmethod def set_item_color(cls, track_item, version_doc): - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] ) diff --git a/openpype/hosts/hiero/plugins/load/load_effects.py b/openpype/hosts/hiero/plugins/load/load_effects.py index b61cca9731..31147d013f 100644 --- a/openpype/hosts/hiero/plugins/load/load_effects.py +++ b/openpype/hosts/hiero/plugins/load/load_effects.py @@ -9,8 +9,8 @@ from openpype.client import ( from openpype.pipeline import ( AVALON_CONTAINER_ID, load, - legacy_io, - get_representation_path + get_representation_path, + get_current_project_name ) from openpype.hosts.hiero import api as phiero from openpype.lib import Logger @@ -59,7 +59,8 @@ class LoadEffects(load.LoaderPlugin): } # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context) + file = file.replace("\\", "/") if self._shared_loading( file, @@ -167,7 +168,7 @@ class LoadEffects(load.LoaderPlugin): namespace = container['namespace'] # get timeline in out data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc["data"] clip_in = version_data["clipIn"] diff --git a/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py b/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py index d455ad4a4e..fcb1ab27a0 100644 --- a/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py +++ b/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py @@ -43,7 +43,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin): if review and review_track_index == _track_index: continue for sitem in sub_track_items: - effect = None # make sure this subtrack item is relative of track item if ((track_item not in sitem.linkedItems()) and (len(sitem.linkedItems()) > 0)): @@ -53,7 +52,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin): continue 
effect = self.add_effect(_track_index, sitem)
 
-            if effect:
                 effects.update(effect)
diff --git a/openpype/hosts/hiero/plugins/publish/extract_frames.py b/openpype/hosts/hiero/plugins/publish/extract_frames.py
index f865d2fb39..803c338766 100644
--- a/openpype/hosts/hiero/plugins/publish/extract_frames.py
+++ b/openpype/hosts/hiero/plugins/publish/extract_frames.py
@@ -2,7 +2,7 @@ import os
 
 import pyblish.api
 from openpype.lib import (
-    get_oiio_tools_path,
+    get_oiio_tool_args,
     run_subprocess,
 )
 from openpype.pipeline import publish
@@ -18,7 +18,7 @@ class ExtractFrames(publish.Extractor):
     movie_extensions = ["mov", "mp4"]
 
     def process(self, instance):
-        oiio_tool_path = get_oiio_tools_path()
+        oiio_tool_args = get_oiio_tool_args("oiiotool")
         staging_dir = self.staging_dir(instance)
         output_template = os.path.join(staging_dir, instance.data["name"])
         sequence = instance.context.data["activeTimeline"]
@@ -36,7 +36,7 @@ class ExtractFrames(publish.Extractor):
             output_path = output_template
             output_path += ".{:04d}.{}".format(int(frame), output_ext)
 
-            args = [oiio_tool_path]
+            args = list(oiio_tool_args)
             ext = os.path.splitext(input_path)[1][1:]
 
             if ext in self.movie_extensions:
diff --git a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
index 1f477c1639..5a66581531 100644
--- a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
+++ b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
@@ -7,7 +7,6 @@ from qtpy.QtGui import QPixmap
 
 import hiero.ui
 
-from openpype.pipeline import legacy_io
 from openpype.hosts.hiero.api.otio import hiero_export
 
 
@@ -19,7 +18,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
 
     def process(self, context):
 
-        asset = legacy_io.Session["AVALON_ASSET"]
+        asset = context.data["asset"]
         subset = "workfile"
         active_timeline = hiero.ui.activeSequence()
         project = active_timeline.project()
diff --git a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py
index 5f96533052..767f7c30f7 100644
--- a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py
@@ -1,6 +1,5 @@ from pyblish import api
 from openpype.client import get_assets
-from openpype.pipeline import legacy_io
 
 
 class CollectAssetBuilds(api.ContextPlugin):
@@ -18,7 +17,7 @@ class CollectAssetBuilds(api.ContextPlugin):
     hosts = ["hiero"]
 
     def process(self, context):
-        project_name = legacy_io.active_project()
+        project_name = context.data["projectName"]
        asset_builds = {}
         for asset in get_assets(project_name):
             if asset["data"]["entityType"] == "AssetBuild":
diff --git a/openpype/hosts/houdini/api/action.py b/openpype/hosts/houdini/api/action.py
index b1519ddd1d..77966d6d5c 100644
--- a/openpype/hosts/houdini/api/action.py
+++ b/openpype/hosts/houdini/api/action.py
@@ -42,3 +42,42 @@ class SelectInvalidAction(pyblish.api.Action):
                 node.setCurrent(True)
         else:
             self.log.info("No invalid nodes found.")
+
+
+class SelectROPAction(pyblish.api.Action):
+    """Select ROP.
+
+    It's used to select the ROP nodes associated with the errored instances.
+    """
+
+    label = "Select ROP"
+    on = "failed"  # This action is only available on a failed plug-in
+    icon = "mdi.cursor-default-click"
+
+    def process(self, context, plugin):
+        errored_instances = get_errored_instances_from_context(context, plugin)
+
+        # Get the ROP nodes associated with the errored instances
+        self.log.info("Finding ROP nodes..")
+        rop_nodes = list()
+        for instance in errored_instances:
+            node_path = instance.data.get("instance_node")
+            if not node_path:
+                continue
+
+            node = hou.node(node_path)
+            if not node:
+                continue
+
+            rop_nodes.append(node)
+
+        hou.clearAllSelected()
+        if rop_nodes:
+            self.log.info("Selecting ROP nodes: {}".format(
+                ", ".join(node.path() for node in rop_nodes)
+            ))
+            for node in rop_nodes:
+                node.setSelected(True)
+                node.setCurrent(True)
+        else:
+            self.log.info("No ROP nodes found.")
diff --git a/openpype/hosts/houdini/api/colorspace.py b/openpype/hosts/houdini/api/colorspace.py
index 7047644225..cc40b9df1c 100644
--- a/openpype/hosts/houdini/api/colorspace.py
+++ b/openpype/hosts/houdini/api/colorspace.py
@@ -1,7 +1,7 @@ import attr
 import hou
 from openpype.hosts.houdini.api.lib import get_color_management_preferences
-
+from openpype.pipeline.colorspace import get_display_view_colorspace_name
 
 @attr.s
 class LayerMetadata(object):
@@ -54,3 +54,16 @@ class ARenderProduct(object):
             )
         ]
         return colorspace_data
+
+
+def get_default_display_view_colorspace():
+    """Returns the colorspace attribute of the default (display, view) pair.
+
+    It's used for the 'ociocolorspace' parm in the OpenGL node."""
+
+    prefs = get_color_management_preferences()
+    return get_display_view_colorspace_name(
+        config_path=prefs["config"],
+        display=prefs["display"],
+        view=prefs["view"]
+    )
diff --git a/openpype/hosts/houdini/api/creator_node_shelves.py b/openpype/hosts/houdini/api/creator_node_shelves.py
index 7c6122cffe..1f9fef7417 100644
--- a/openpype/hosts/houdini/api/creator_node_shelves.py
+++ b/openpype/hosts/houdini/api/creator_node_shelves.py
@@ -57,28 +57,31 @@ def create_interactive(creator_identifier, **kwargs):
         list: The created instances.
""" - - # TODO Use Qt instead - result, variant = hou.ui.readInput('Define variant name', - buttons=("Ok", "Cancel"), - initial_contents='Main', - title="Define variant", - help="Set the variant for the " - "publish instance", - close_choice=1) - if result == 1: - # User interrupted - return - variant = variant.strip() - if not variant: - raise RuntimeError("Empty variant value entered.") - host = registered_host() context = CreateContext(host) creator = context.manual_creators.get(creator_identifier) if not creator: - raise RuntimeError("Invalid creator identifier: " - "{}".format(creator_identifier)) + raise RuntimeError("Invalid creator identifier: {}".format( + creator_identifier) + ) + + # TODO Use Qt instead + result, variant = hou.ui.readInput( + "Define variant name", + buttons=("Ok", "Cancel"), + initial_contents=creator.get_default_variant(), + title="Define variant", + help="Set the variant for the publish instance", + close_choice=1 + ) + + if result == 1: + # User interrupted + return + + variant = variant.strip() + if not variant: + raise RuntimeError("Empty variant value entered.") # TODO: Once more elaborate unique create behavior should exist per Creator # instead of per network editor area then we should move this from here diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index a32e9d8d61..3db18ca69a 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -1,6 +1,7 @@ # -*- coding: utf-8 -*- import sys import os +import errno import re import uuid import logging @@ -9,10 +10,15 @@ import json import six +from openpype.lib import StringTemplate from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io -from openpype.pipeline.context_tools import get_current_project_asset - +from openpype.settings import get_current_project_settings +from openpype.pipeline import get_current_project_name, get_current_asset_name +from openpype.pipeline.context_tools import ( + get_current_context_template_data, + get_current_project_asset +) +from openpype.widgets import popup import hou @@ -22,9 +28,12 @@ log = logging.getLogger(__name__) JSON_PREFIX = "JSON:::" -def get_asset_fps(): +def get_asset_fps(asset_doc=None): """Return current asset fps.""" - return get_current_project_asset()["data"].get("fps") + + if asset_doc is None: + asset_doc = get_current_project_asset(fields=["data.fps"]) + return asset_doc["data"]["fps"] def set_id(node, unique_id, overwrite=False): @@ -78,8 +87,8 @@ def generate_ids(nodes, asset_id=None): """ if asset_id is None: - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + asset_name = get_current_asset_name() # Get the asset ID from the database for the asset of current context asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) @@ -157,8 +166,6 @@ def validate_fps(): if current_fps != fps: - from openpype.widgets import popup - # Find main window parent = hou.ui.mainQtWindow() if parent is None: @@ -472,14 +479,19 @@ def maintained_selection(): def reset_framerange(): - """Set frame range to current asset""" + """Set frame range and FPS to current asset""" - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + # Get asset data + project_name = get_current_project_name() + asset_name = get_current_asset_name() # Get the asset ID from the database for the asset of current context asset_doc = get_asset_by_name(project_name, 
asset_name)
     asset_data = asset_doc["data"]
 
+    # Get FPS
+    fps = get_asset_fps(asset_doc)
+
+    # Get Start and End Frames
     frame_start = asset_data.get("frameStart")
     frame_end = asset_data.get("frameEnd")
 
@@ -493,6 +505,9 @@ def reset_framerange():
     frame_start -= int(handle_start)
     frame_end += int(handle_end)
 
+    # Set frame range and FPS
+    print("Setting scene FPS to {}".format(int(fps)))
+    set_scene_fps(fps)
     hou.playbar.setFrameRange(frame_start, frame_end)
     hou.playbar.setPlaybackRange(frame_start, frame_end)
     hou.setFrame(frame_start)
@@ -638,3 +653,197 @@ def get_color_management_preferences():
         "display": hou.Color.ocio_defaultDisplay(),
         "view": hou.Color.ocio_defaultView()
     }
+
+
+def get_obj_node_output(obj_node):
+    """Find output node.
+
+    If the node has any output nodes, return the
+    output node with the minimum `outputidx`.
+    When no output is present, return the node
+    with the display flag set. If no output node is
+    detected, None is returned.
+
+    Arguments:
+        node (hou.Node): The node to retrieve a single
+            output node for.
+
+    Returns:
+        Optional[hou.Node]: The child output node.
+
+    """
+
+    outputs = obj_node.subnetOutputs()
+    if not outputs:
+        return
+
+    elif len(outputs) == 1:
+        return outputs[0]
+
+    else:
+        return min(outputs,
+                   key=lambda node: node.evalParm('outputidx'))
+
+
+def get_output_children(output_node, include_sops=True):
+    """Recursively return a list of all output nodes
+    contained in this node including this node.
+
+    It works in a similar manner to output_node.allNodes().
+    """
+    out_list = [output_node]
+
+    if output_node.childTypeCategory() == hou.objNodeTypeCategory():
+        for child in output_node.children():
+            out_list += get_output_children(child, include_sops=include_sops)
+
+    elif include_sops and \
+            output_node.childTypeCategory() == hou.sopNodeTypeCategory():
+        out = get_obj_node_output(output_node)
+        if out:
+            out_list += [out]
+
+    return out_list
+
+
+def get_resolution_from_doc(doc):
+    """Get resolution from the given asset document. """
+
+    if not doc or "data" not in doc:
+        print("Entered document is not valid. \"{}\"".format(str(doc)))
+        return None
+
+    resolution_width = doc["data"].get("resolutionWidth")
+    resolution_height = doc["data"].get("resolutionHeight")
+
+    # Make sure both width and height are set
+    if resolution_width is None or resolution_height is None:
+        print("No resolution information found for \"{}\"".format(doc["name"]))
+        return None
+
+    return int(resolution_width), int(resolution_height)
+
+
+def set_camera_resolution(camera, asset_doc=None):
+    """Apply resolution to camera from asset document of the publish"""
+
+    if not asset_doc:
+        asset_doc = get_current_project_asset()
+
+    resolution = get_resolution_from_doc(asset_doc)
+
+    if resolution:
+        print("Setting camera resolution: {} -> {}x{}".format(
+            camera.name(), resolution[0], resolution[1]
+        ))
+        camera.parm("resx").set(resolution[0])
+        camera.parm("resy").set(resolution[1])
+
+
+def get_camera_from_container(container):
+    """Get camera from container node.
""" + + cameras = container.recursiveGlob( + "*", + filter=hou.nodeTypeFilter.ObjCamera, + include_subnets=False + ) + + assert len(cameras) == 1, "Camera instance must have only one camera" + return cameras[0] + + +def get_context_var_changes(): + """get context var changes.""" + + houdini_vars_to_update = {} + + project_settings = get_current_project_settings() + houdini_vars_settings = \ + project_settings["houdini"]["general"]["update_houdini_var_context"] + + if not houdini_vars_settings["enabled"]: + return houdini_vars_to_update + + houdini_vars = houdini_vars_settings["houdini_vars"] + + # No vars specified - nothing to do + if not houdini_vars: + return houdini_vars_to_update + + # Get Template data + template_data = get_current_context_template_data() + + # Set Houdini Vars + for item in houdini_vars: + # For consistency reasons we always force all vars to be uppercase + # Also remove any leading, and trailing whitespaces. + var = item["var"].strip().upper() + + # get and resolve template in value + item_value = StringTemplate.format_template( + item["value"], + template_data + ) + + if var == "JOB" and item_value == "": + # sync $JOB to $HIP if $JOB is empty + item_value = os.environ["HIP"] + + if item["is_directory"]: + item_value = item_value.replace("\\", "/") + + current_value = hou.hscript("echo -n `${}`".format(var))[0] + + if current_value != item_value: + houdini_vars_to_update[var] = ( + current_value, item_value, item["is_directory"] + ) + + return houdini_vars_to_update + + +def update_houdini_vars_context(): + """Update asset context variables""" + + for var, (_old, new, is_directory) in get_context_var_changes().items(): + if is_directory: + try: + os.makedirs(new) + except OSError as e: + if e.errno != errno.EEXIST: + print( + "Failed to create ${} dir. Maybe due to " + "insufficient permissions.".format(var) + ) + + hou.hscript("set {}={}".format(var, new)) + os.environ[var] = new + print("Updated ${} to {}".format(var, new)) + + +def update_houdini_vars_context_dialog(): + """Show pop-up to update asset context variables""" + update_vars = get_context_var_changes() + if not update_vars: + # Nothing to change + print("Nothing to change, Houdini vars are already up to date.") + return + + message = "\n".join( + "${}: {} -> {}".format(var, old or "None", new or "None") + for var, (old, new, _is_directory) in update_vars.items() + ) + + # TODO: Use better UI! 
+    parent = hou.ui.mainQtWindow()
+    dialog = popup.Popup(parent=parent)
+    dialog.setModal(True)
+    dialog.setWindowTitle("Houdini scene has outdated asset variables")
+    dialog.setMessage(message)
+    dialog.setButtonText("Fix")
+
+    # on_clicked is emitted when the Fix button is clicked
+    dialog.on_clicked.connect(update_houdini_vars_context)
+
+    dialog.show()
diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py
index 8a26bbb504..f8db45c56b 100644
--- a/openpype/hosts/houdini/api/pipeline.py
+++ b/openpype/hosts/houdini/api/pipeline.py
@@ -14,6 +14,7 @@ import pyblish.api
 from openpype.pipeline import (
     register_creator_plugin_path,
     register_loader_plugin_path,
+    register_inventory_action_path,
     AVALON_CONTAINER_ID,
 )
 from openpype.pipeline.load import any_outdated_containers
@@ -25,7 +26,6 @@ from openpype.lib import (
     emit_event,
 )
 
-from .lib import get_asset_fps
 
 log = logging.getLogger("openpype.hosts.houdini")
 
@@ -56,6 +56,7 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         pyblish.api.register_plugin_path(PUBLISH_PATH)
         register_loader_plugin_path(LOAD_PATH)
         register_creator_plugin_path(CREATE_PATH)
+        register_inventory_action_path(INVENTORY_PATH)
 
         log.info("Installing callbacks ... ")
         # register_event_callback("init", on_init)
@@ -299,11 +300,36 @@ def on_save():
 
     log.info("Running callback on save..")
 
+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     nodes = lib.get_id_required_nodes()
     for node, new_id in lib.generate_ids(nodes):
         lib.set_id(node, new_id, overwrite=False)
 
 
+def _show_outdated_content_popup():
+    # Get main window
+    parent = lib.get_main_window()
+    if parent is None:
+        log.info("Skipping outdated content pop-up "
+                 "because Houdini window can't be found.")
+    else:
+        from openpype.widgets import popup
+
+        # Show outdated pop-up
+        def _on_show_inventory():
+            from openpype.tools.utils import host_tools
+            host_tools.show_scene_inventory(parent=parent)
+
+        dialog = popup.Popup(parent=parent)
+        dialog.setWindowTitle("Houdini scene has outdated content")
+        dialog.setMessage("There are outdated containers in "
+                          "your Houdini scene.")
+        dialog.on_clicked.connect(_on_show_inventory)
+        dialog.show()
+
+
 def on_open():
 
     if not hou.isUIAvailable():
@@ -312,33 +338,26 @@ def on_open():
 
     log.info("Running callback on open..")
 
+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     # Validate FPS after update_task_from_path to
     # ensure it is using correct FPS for the asset
     lib.validate_fps()
 
     if any_outdated_containers():
-        from openpype.widgets import popup
-
-        log.warning("Scene has outdated content.")
-
-        # Get main window
         parent = lib.get_main_window()
         if parent is None:
-            log.info("Skipping outdated content pop-up "
-                     "because Houdini window can't be found.")
+            # When opening Houdini with last workfile on launch the UI hasn't
+            # initialized yet completely when the `on_open` callback triggers.
+            # We defer the dialog popup to wait for the UI to become available.
+            # We assume it will open because `hou.isUIAvailable()` returns True
+ # We assume it will open because `hou.isUIAvailable()` returns True + import hdefereval + hdefereval.executeDeferred(_show_outdated_content_popup) else: + _show_outdated_content_popup() - # Show outdated pop-up - def _on_show_inventory(): - from openpype.tools.utils import host_tools - host_tools.show_scene_inventory(parent=parent) - - dialog = popup.Popup(parent=parent) - dialog.setWindowTitle("Houdini scene has outdated content") - dialog.setMessage("There are outdated containers in " - "your Houdini scene.") - dialog.on_clicked.connect(_on_show_inventory) - dialog.show() + log.warning("Scene has outdated content.") def on_new(): @@ -385,12 +404,8 @@ def _set_context_settings(): None """ - # Set new scene fps - fps = get_asset_fps() - print("Setting scene FPS to %i" % fps) - lib.set_scene_fps(fps) - lib.reset_framerange() + lib.update_houdini_vars_context() def on_pyblish_instance_toggled(instance, new_value, old_value): diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 1e7eaa7e22..a0a7dcc2e4 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -167,9 +167,12 @@ class HoudiniCreatorBase(object): class HoudiniCreator(NewCreator, HoudiniCreatorBase): """Base class for most of the Houdini creator plugins.""" selected_nodes = [] + settings_name = None def create(self, subset_name, instance_data, pre_create_data): try: + self.selected_nodes = [] + if pre_create_data.get("use_selection"): self.selected_nodes = hou.selectedNodes() @@ -184,13 +187,14 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): self.customize_node_look(instance_node) instance_data["instance_node"] = instance_node.path() + instance_data["instance_id"] = instance_node.path() instance = CreatedInstance( self.family, subset_name, instance_data, self) self._add_instance_to_context(instance) - imprint(instance_node, instance.data_to_store()) + self.imprint(instance_node, instance.data_to_store()) return instance except hou.Error as er: @@ -219,25 +223,41 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): self.cache_subsets(self.collection_shared_data) for instance in self.collection_shared_data[ "houdini_cached_subsets"].get(self.identifier, []): + + node_data = read(instance) + + # Node paths are always the full node path since that is unique + # Because it's the node's path it's not written into attributes + # but explicitly collected + node_path = instance.path() + node_data["instance_id"] = node_path + node_data["instance_node"] = node_path + created_instance = CreatedInstance.from_existing( - read(instance), self + node_data, self ) self._add_instance_to_context(created_instance) def update_instances(self, update_list): for created_inst, changes in update_list: instance_node = hou.node(created_inst.get("instance_node")) - new_values = { key: changes[key].new_value for key in changes.changed_keys } - imprint( + self.imprint( instance_node, new_values, update=True ) + def imprint(self, node, values, update=False): + # Never store instance node and instance id since that data comes + # from the node's path + values.pop("instance_node", None) + values.pop("instance_id", None) + imprint(node, values, update=update) + def remove_instances(self, instances): """Remove specified instance from the scene. 
@@ -292,3 +312,21 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): """ return [hou.ropNodeTypeCategory()] + + def apply_settings(self, project_settings): + """Method called on initialization of plugin to apply settings.""" + + settings_name = self.settings_name + if settings_name is None: + settings_name = self.__class__.__name__ + + settings = project_settings["houdini"]["create"] + settings = settings.get(settings_name) + if settings is None: + self.log.debug( + "No settings found for {}".format(self.__class__.__name__) + ) + return + + for key, value in settings.items(): + setattr(self, key, value) diff --git a/openpype/hosts/houdini/api/shelves.py b/openpype/hosts/houdini/api/shelves.py index 6e0f367f62..21e44e494a 100644 --- a/openpype/hosts/houdini/api/shelves.py +++ b/openpype/hosts/houdini/api/shelves.py @@ -4,6 +4,7 @@ import logging import platform from openpype.settings import get_project_settings +from openpype.pipeline import get_current_project_name import hou @@ -17,7 +18,8 @@ def generate_shelves(): current_os = platform.system().lower() # load configuration of houdini shelves - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) shelves_set_config = project_settings["houdini"]["shelves"] if not shelves_set_config: diff --git a/openpype/hosts/houdini/hooks/set_paths.py b/openpype/hosts/houdini/hooks/set_paths.py index 04a33b1643..b23659e23b 100644 --- a/openpype/hosts/houdini/hooks/set_paths.py +++ b/openpype/hosts/houdini/hooks/set_paths.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class SetPath(PreLaunchHook): @@ -6,7 +6,8 @@ class SetPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. 
""" - app_groups = ["houdini"] + app_groups = {"houdini"} + launch_types = {LaunchTypes.local} def execute(self): workdir = self.launch_context.env.get("AVALON_WORKDIR", "") diff --git a/openpype/hosts/houdini/plugins/create/convert_legacy.py b/openpype/hosts/houdini/plugins/create/convert_legacy.py index e549c9dc26..86103e3369 100644 --- a/openpype/hosts/houdini/plugins/create/convert_legacy.py +++ b/openpype/hosts/houdini/plugins/create/convert_legacy.py @@ -69,6 +69,8 @@ class HoudiniLegacyConvertor(SubsetConvertorPlugin): "creator_identifier": self.family_to_id[family], "instance_node": subset.path() } + if family == "pointcache": + data["families"] = ["abc"] self.log.info("Converting {} to {}".format( subset.path(), self.family_to_id[family])) imprint(subset, data) diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py index 8b310753d0..12d08f7d83 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py @@ -10,9 +10,10 @@ class CreateArnoldAss(plugin.HoudiniCreator): label = "Arnold ASS" family = "ass" icon = "magic" - defaults = ["Main"] # Default extension: `.ass` or `.ass.gz` + # however calling HoudiniCreator.create() + # will override it by the value in the project settings ext = ".ass" def create(self, subset_name, instance_data, pre_create_data): diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py index bddf26dbd5..b58c377a20 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py @@ -1,5 +1,5 @@ from openpype.hosts.houdini.api import plugin -from openpype.lib import EnumDef +from openpype.lib import EnumDef, BoolDef class CreateArnoldRop(plugin.HoudiniCreator): @@ -9,7 +9,6 @@ class CreateArnoldRop(plugin.HoudiniCreator): label = "Arnold ROP" family = "arnold_rop" icon = "magic" - defaults = ["master"] # Default extension ext = "exr" @@ -24,7 +23,7 @@ class CreateArnoldRop(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 1 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateArnoldRop, self).create( subset_name, @@ -64,6 +63,9 @@ class CreateArnoldRop(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_bgeo.py b/openpype/hosts/houdini/plugins/create/create_bgeo.py new file mode 100644 index 0000000000..a3f31e7e94 --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_bgeo.py @@ -0,0 +1,92 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating pointcache bgeo files.""" +from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance, CreatorError +from openpype.lib import EnumDef + + +class CreateBGEO(plugin.HoudiniCreator): + """BGEO pointcache creator.""" + identifier = "io.openpype.creators.houdini.bgeo" + label = "PointCache (Bgeo)" + family = "pointcache" + icon = "gears" + + def create(self, subset_name, instance_data, pre_create_data): + import hou + + instance_data.pop("active", None) + + instance_data.update({"node_type": "geometry"}) + + instance = super(CreateBGEO, self).create( + subset_name, + instance_data, + 
pre_create_data) # type: CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + file_path = "{}{}".format( + hou.text.expandString("$HIP/pyblish/"), + "{}.$F4.{}".format( + subset_name, + pre_create_data.get("bgeo_type") or "bgeo.sc") + ) + parms = { + "sopoutput": file_path + } + + instance_node.parm("trange").set(1) + if self.selected_nodes: + # if selection is on SOP level, use it + if isinstance(self.selected_nodes[0], hou.SopNode): + parms["soppath"] = self.selected_nodes[0].path() + else: + # try to find output node with the lowest index + outputs = [ + child for child in self.selected_nodes[0].children() + if child.type().name() == "output" + ] + if not outputs: + instance_node.setParms(parms) + raise CreatorError(( + "Missing output node in SOP level for the selection. " + "Please select correct SOP path in created instance." + )) + outputs.sort(key=lambda output: output.evalParm("outputidx")) + parms["soppath"] = outputs[0].path() + + instance_node.setParms(parms) + + def get_pre_create_attr_defs(self): + attrs = super().get_pre_create_attr_defs() + bgeo_enum = [ + { + "value": "bgeo", + "label": "uncompressed bgeo (.bgeo)" + }, + { + "value": "bgeosc", + "label": "BLOSC compressed bgeo (.bgeosc)" + }, + { + "value": "bgeo.sc", + "label": "BLOSC compressed bgeo (.bgeo.sc)" + }, + { + "value": "bgeo.gz", + "label": "GZ compressed bgeo (.bgeo.gz)" + }, + { + "value": "bgeo.lzma", + "label": "LZMA compressed bgeo (.bgeo.lzma)" + }, + { + "value": "bgeo.bz2", + "label": "BZip2 compressed bgeo (.bgeo.bz2)" + } + ] + + return attrs + [ + EnumDef("bgeo_type", bgeo_enum, label="BGEO Options"), + ] diff --git a/openpype/hosts/houdini/plugins/create/create_hda.py b/openpype/hosts/houdini/plugins/create/create_hda.py index 5f95b2efb4..c4093bfbc6 100644 --- a/openpype/hosts/houdini/plugins/create/create_hda.py +++ b/openpype/hosts/houdini/plugins/create/create_hda.py @@ -4,7 +4,6 @@ from openpype.client import ( get_asset_by_name, get_subsets, ) -from openpype.pipeline import legacy_io from openpype.hosts.houdini.api import plugin @@ -21,7 +20,7 @@ class CreateHDA(plugin.HoudiniCreator): # type: (str) -> bool """Check if existing subset name versions already exists.""" # Get all subsets of the current asset - project_name = legacy_io.active_project() + project_name = self.project_name asset_doc = get_asset_by_name( project_name, self.data["asset"], fields=["_id"] ) diff --git a/openpype/hosts/houdini/plugins/create/create_karma_rop.py b/openpype/hosts/houdini/plugins/create/create_karma_rop.py index edfb992e1a..4e1360ca45 100644 --- a/openpype/hosts/houdini/plugins/create/create_karma_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_karma_rop.py @@ -11,7 +11,6 @@ class CreateKarmaROP(plugin.HoudiniCreator): label = "Karma ROP" family = "karma_rop" icon = "magic" - defaults = ["master"] def create(self, subset_name, instance_data, pre_create_data): import hou # noqa @@ -21,7 +20,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateKarmaROP, self).create( subset_name, @@ -67,6 +66,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): camera = None for node in self.selected_nodes: if node.type().name() == "cam": + camera = node.path() has_camera = pre_create_data.get("cam_res") if has_camera: res_x = node.evalParm("resx") @@ -96,6 +96,9 @@ class 
CreateKarmaROP(plugin.HoudiniCreator):
         ]
 
         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default="exr",
diff --git a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py
index 5ca53e96de..d2f0e735a8 100644
--- a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py
@@ -11,7 +11,6 @@
     label = "Mantra ROP"
     family = "mantra_rop"
     icon = "magic"
-    defaults = ["master"]
 
     def create(self, subset_name, instance_data, pre_create_data):
         import hou  # noqa
@@ -21,7 +20,7 @@
         # Add chunk size attribute
         instance_data["chunkSize"] = 10
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")
 
         instance = super(CreateMantraROP, self).create(
             subset_name,
@@ -76,6 +75,9 @@ class CreateMantraROP(plugin.HoudiniCreator):
         ]
 
         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default="exr",
diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py
index df74070fee..7eaf2aff2b 100644
--- a/openpype/hosts/houdini/plugins/create/create_pointcache.py
+++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py
@@ -1,7 +1,6 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating pointcache alembics."""
 from openpype.hosts.houdini.api import plugin
-from openpype.pipeline import CreatedInstance
 
 import hou
 
@@ -9,20 +8,18 @@ import hou
 class CreatePointCache(plugin.HoudiniCreator):
     """Alembic ROP to pointcache"""
     identifier = "io.openpype.creators.houdini.pointcache"
-    label = "Point Cache"
+    label = "PointCache (Abc)"
     family = "pointcache"
     icon = "gears"
 
     def create(self, subset_name, instance_data, pre_create_data):
-        import hou
-
         instance_data.pop("active", None)
         instance_data.update({"node_type": "alembic"})
 
         instance = super(CreatePointCache, self).create(
             subset_name,
             instance_data,
-            pre_create_data)  # type: CreatedInstance
+            pre_create_data)
 
         instance_node = hou.node(instance.get("instance_node"))
         parms = {
@@ -37,13 +34,44 @@ class CreatePointCache(plugin.HoudiniCreator):
         }
 
         if self.selected_nodes:
-            parms["sop_path"] = self.selected_nodes[0].path()
+            selected_node = self.selected_nodes[0]
 
-            # try to find output node
-            for child in self.selected_nodes[0].children():
-                if child.type().name() == "output":
-                    parms["sop_path"] = child.path()
-                    break
+            # Although Houdini allows an ObjNode path on `sop_path` for
+            # the ROP node, we prefer it set to the SopNode path explicitly
+
+            # Allow sop level paths (e.g. /obj/geo1/box1)
+            if isinstance(selected_node, hou.SopNode):
+                parms["sop_path"] = selected_node.path()
+                self.log.debug(
+                    "Valid SopNode selection, 'SOP Path' in ROP will be set to '%s'."
+                    % selected_node.path()
+                )
+
+            # Allow object level paths to Geometry nodes (e.g. /obj/geo1)
+            # but do not allow other object level node types like cameras, etc.
+            elif isinstance(selected_node, hou.ObjNode) and \
+                    selected_node.type().name() in ["geo"]:
+
+                # get the output node with the minimum
+                # 'outputidx' or the node with display flag
+                sop_path = self.get_obj_output(selected_node)
+
+                if sop_path:
+                    parms["sop_path"] = sop_path.path()
+                    self.log.debug(
+                        "Valid ObjNode selection, 'SOP Path' in ROP will be set to "
+                        "the child path '%s'."
+                        % sop_path.path()
+                    )
+
+            if not parms.get("sop_path", None):
+                self.log.debug(
+                    "Selection isn't valid. 'SOP Path' in ROP will be empty."
+                )
+        else:
+            self.log.debug(
+                "No Selection. 'SOP Path' in ROP will be empty."
+            )
 
         instance_node.setParms(parms)
         instance_node.parm("trange").set(1)
@@ -57,3 +85,23 @@
             hou.ropNodeTypeCategory(),
             hou.sopNodeTypeCategory()
         ]
+
+    def get_obj_output(self, obj_node):
+        """Find output node with the smallest 'outputidx'."""
+
+        outputs = obj_node.subnetOutputs()
+
+        # if obj_node is empty
+        if not outputs:
+            return
+
+        # if obj_node has one output child, whether it's a
+        # sop output node or a node with the render flag
+        elif len(outputs) == 1:
+            return outputs[0]
+
+        # if there are more than one, then it has multiple output nodes;
+        # return the one with the minimum 'outputidx'
+        else:
+            return min(outputs,
+                       key=lambda node: node.evalParm('outputidx'))
diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py
index 8b6a68437b..b814dd9d57 100644
--- a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py
+++ b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py
@@ -33,7 +33,7 @@ class CreateRedshiftProxy(plugin.HoudiniCreator):
         instance_node = hou.node(instance.get("instance_node"))
 
         parms = {
-            "RS_archive_file": '$HIP/pyblish/`{}.$F4.rs'.format(subset_name),
+            "RS_archive_file": '$HIP/pyblish/{}.$F4.rs'.format(subset_name),
         }
 
         if self.selected_nodes:
diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
index 4576e9a721..1b8826a932 100644
--- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
@@ -3,7 +3,7 @@
 import hou  # noqa
 
 from openpype.hosts.houdini.api import plugin
-from openpype.lib import EnumDef
+from openpype.lib import EnumDef, BoolDef
 
 
 class CreateRedshiftROP(plugin.HoudiniCreator):
@@ -13,7 +13,6 @@
     label = "Redshift ROP"
     family = "redshift_rop"
     icon = "magic"
-    defaults = ["master"]
     ext = "exr"
 
     def create(self, subset_name, instance_data, pre_create_data):
@@ -23,7 +22,7 @@
         # Add chunk size attribute
         instance_data["chunkSize"] = 10
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")
 
         instance = super(CreateRedshiftROP, self).create(
             subset_name,
@@ -100,6 +99,9 @@
         ]
 
         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
diff --git a/openpype/hosts/houdini/plugins/create/create_review.py b/openpype/hosts/houdini/plugins/create/create_review.py
index ab06b30c35..60c34a358b 100644
--- a/openpype/hosts/houdini/plugins/create/create_review.py
+++ b/openpype/hosts/houdini/plugins/create/create_review.py
@@ -3,6 +3,9 @@
 from openpype.hosts.houdini.api import plugin
 from 
openpype.lib import EnumDef, BoolDef, NumberDef +import os +import hou + class CreateReview(plugin.HoudiniCreator): """Review with OpenGL ROP""" @@ -13,7 +16,6 @@ class CreateReview(plugin.HoudiniCreator): icon = "video-camera" def create(self, subset_name, instance_data, pre_create_data): - import hou instance_data.pop("active", None) instance_data.update({"node_type": "opengl"}) @@ -82,6 +84,11 @@ class CreateReview(plugin.HoudiniCreator): instance_node.setParms(parms) + # Set OCIO Colorspace to the default output colorspace + # if there's OCIO + if os.getenv("OCIO"): + self.set_colorcorrect_to_default_view_space(instance_node) + to_lock = ["id", "family"] self.lock_parameters(instance_node, to_lock) @@ -123,3 +130,23 @@ class CreateReview(plugin.HoudiniCreator): minimum=0.0001, decimals=3) ] + + def set_colorcorrect_to_default_view_space(self, + instance_node): + """Set ociocolorspace to the default output space.""" + from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace # noqa + + # set Color Correction parameter to OpenColorIO + instance_node.setParms({"colorcorrect": 2}) + + # Get default view space for ociocolorspace parm. + default_view_space = get_default_display_view_colorspace() + instance_node.setParms( + {"ociocolorspace": default_view_space} + ) + + self.log.debug( + "'OCIO Colorspace' parm on '{}' has been set to " + "the default view color space '{}'" + .format(instance_node, default_view_space) + ) diff --git a/openpype/hosts/houdini/plugins/create/create_staticmesh.py b/openpype/hosts/houdini/plugins/create/create_staticmesh.py new file mode 100644 index 0000000000..ea0b36f03f --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_staticmesh.py @@ -0,0 +1,143 @@ +# -*- coding: utf-8 -*- +"""Creator for Unreal Static Meshes.""" +from openpype.hosts.houdini.api import plugin +from openpype.lib import BoolDef, EnumDef + +import hou + + +class CreateStaticMesh(plugin.HoudiniCreator): + """Static Meshes as FBX. """ + + identifier = "io.openpype.creators.houdini.staticmesh.fbx" + label = "Static Mesh (FBX)" + family = "staticMesh" + icon = "fa5s.cubes" + + default_variants = ["Main"] + + def create(self, subset_name, instance_data, pre_create_data): + + instance_data.update({"node_type": "filmboxfbx"}) + + instance = super(CreateStaticMesh, self).create( + subset_name, + instance_data, + pre_create_data) + + # get the created rop node + instance_node = hou.node(instance.get("instance_node")) + + # prepare parms + output_path = hou.text.expandString( + "$HIP/pyblish/{}.fbx".format(subset_name) + ) + + parms = { + "startnode": self.get_selection(), + "sopoutput": output_path, + # vertex cache format + "vcformat": pre_create_data.get("vcformat"), + "convertunits": pre_create_data.get("convertunits"), + # set render range to use frame range start-end frame + "trange": 1, + "createsubnetroot": pre_create_data.get("createsubnetroot") + } + + # set parms + instance_node.setParms(parms) + + # Lock any parameters in this list + to_lock = ["family", "id"] + self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.sopNodeTypeCategory() + ] + + def get_pre_create_attr_defs(self): + """Add settings for users. """ + + attrs = super(CreateStaticMesh, self).get_pre_create_attr_defs() + createsubnetroot = BoolDef("createsubnetroot", + tooltip="Create an extra root for the " + "Export node when it's a " + "subnetwork. 
This causes the "
+                                           "exporting subnetwork node to be "
+                                           "represented in the FBX file.",
+                                   default=False,
+                                   label="Create Root for Subnet")
+        vcformat = EnumDef("vcformat",
+                           items={
+                               0: "Maya Compatible (MC)",
+                               1: "3DS MAX Compatible (PC2)"
+                           },
+                           default=0,
+                           label="Vertex Cache Format")
+        convert_units = BoolDef("convertunits",
+                                tooltip="When on, the FBX is converted "
+                                        "from the current Houdini "
+                                        "system units to the native "
+                                        "FBX unit of centimeters.",
+                                default=False,
+                                label="Convert Units")
+
+        return attrs + [createsubnetroot, vcformat, convert_units]
+
+    def get_dynamic_data(
+        self, variant, task_name, asset_doc, project_name, host_name, instance
+    ):
+        """
+        The default subset name templates for Unreal include {asset} and thus
+        we should pass that along as dynamic data.
+        """
+        dynamic_data = super(CreateStaticMesh, self).get_dynamic_data(
+            variant, task_name, asset_doc, project_name, host_name, instance
+        )
+        dynamic_data["asset"] = asset_doc["name"]
+        return dynamic_data
+
+    def get_selection(self):
+        """Selection Logic.
+
+        How self.selected_nodes should be processed to get
+        the desired node from the selection.
+
+        Returns:
+            str : node path
+        """
+
+        selection = ""
+
+        if self.selected_nodes:
+            selected_node = self.selected_nodes[0]
+
+            # Accept sop level nodes (e.g. /obj/geo1/box1)
+            if isinstance(selected_node, hou.SopNode):
+                selection = selected_node.path()
+                self.log.debug(
+                    "Valid SopNode selection, 'Export' in filmboxfbx"
+                    " will be set to '%s'.", selected_node
+                )
+
+            # Accept object level nodes (e.g. /obj/geo1)
+            elif isinstance(selected_node, hou.ObjNode):
+                selection = selected_node.path()
+                self.log.debug(
+                    "Valid ObjNode selection, 'Export' in filmboxfbx "
+                    "will be set to the child path '%s'.", selection
+                )
+
+            else:
+                self.log.debug(
+                    "Selection isn't valid. 'Export' in "
+                    "filmboxfbx will be empty."
+                )
+        else:
+            self.log.debug(
+                "No Selection. 'Export' in filmboxfbx will be empty."
+            )
+
+        return selection
diff --git a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
index c015cebd49..9c96e48e3a 100644
--- a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
+++ b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
@@ -33,7 +33,7 @@ class CreateVDBCache(plugin.HoudiniCreator):
         }
 
         if self.selected_nodes:
-            parms["soppath"] = self.selected_nodes[0].path()
+            parms["soppath"] = self.get_sop_node_path(self.selected_nodes[0])
 
         instance_node.setParms(parms)
 
@@ -42,3 +42,63 @@
             hou.ropNodeTypeCategory(),
             hou.sopNodeTypeCategory()
         ]
+
+    def get_sop_node_path(self, selected_node):
+        """Get Sop Path of the selected node.
+
+        Although Houdini allows an ObjNode path on `sop_path` for
+        the ROP node, we prefer it set to the SopNode path explicitly.
+        """
+
+        # Allow sop level paths (e.g. /obj/geo1/box1)
+        if isinstance(selected_node, hou.SopNode):
+            self.log.debug(
+                "Valid SopNode selection, 'SOP Path' in ROP will"
+                " be set to '%s'.", selected_node.path()
+            )
+            return selected_node.path()
+
+        # Allow object level paths to Geometry nodes (e.g. /obj/geo1)
+        # but do not allow other object level node types like cameras, etc.
+        elif isinstance(selected_node, hou.ObjNode) and \
+                selected_node.type().name() == "geo":
+
+            # Try to find output node.
+            sop_node = self.get_obj_output(selected_node)
+            if sop_node:
+                self.log.debug(
+                    "Valid ObjNode selection, 'SOP Path' in ROP will "
+                    "be set to the child path '%s'.", sop_node.path()
+                )
+                return sop_node.path()
+
+        self.log.debug(
+            "Selection isn't valid. 'SOP Path' in ROP will be empty."
+        )
+        return ""
+
+    def get_obj_output(self, obj_node):
+        """Try to find output node.
+
+        If any output nodes are present, return the output node with
+        the minimum 'outputidx'.
+        If no output nodes are present, return the node with the display flag.
+        If no nodes are present at all, return None.
+        """
+
+        outputs = obj_node.subnetOutputs()
+
+        # if obj_node is empty
+        if not outputs:
+            return
+
+        # if obj_node has one output child, whether it's a
+        # sop output node or a node with the render flag
+        elif len(outputs) == 1:
+            return outputs[0]
+
+        # if there are more than one, then it has multiple output nodes;
+        # return the one with the minimum 'outputidx'
+        else:
+            return min(outputs,
+                       key=lambda node: node.evalParm('outputidx'))
diff --git a/openpype/hosts/houdini/plugins/create/create_vray_rop.py b/openpype/hosts/houdini/plugins/create/create_vray_rop.py
index 1de9be4ed6..793a544fdf 100644
--- a/openpype/hosts/houdini/plugins/create/create_vray_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_vray_rop.py
@@ -14,8 +14,6 @@ class CreateVrayROP(plugin.HoudiniCreator):
     label = "VRay ROP"
     family = "vray_rop"
     icon = "magic"
-    defaults = ["master"]
-
     ext = "exr"
 
     def create(self, subset_name, instance_data, pre_create_data):
@@ -25,7 +23,7 @@
         # Add chunk size attribute
         instance_data["chunkSize"] = 10
         # Submit for job publishing
-        instance_data["farm"] = True
+        instance_data["farm"] = pre_create_data.get("farm")
 
         instance = super(CreateVrayROP, self).create(
             subset_name,
@@ -139,6 +137,9 @@ class CreateVrayROP(plugin.HoudiniCreator):
         ]
 
         return attrs + [
+            BoolDef("farm",
+                    label="Submitting to Farm",
+                    default=True),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
diff --git a/openpype/hosts/houdini/plugins/create/create_workfile.py b/openpype/hosts/houdini/plugins/create/create_workfile.py
index 1a8537adcd..cc45a6c2a8 100644
--- a/openpype/hosts/houdini/plugins/create/create_workfile.py
+++ b/openpype/hosts/houdini/plugins/create/create_workfile.py
@@ -4,7 +4,6 @@ from openpype.hosts.houdini.api import plugin
 from openpype.hosts.houdini.api.lib import read, imprint
 from openpype.hosts.houdini.api.pipeline import CONTEXT_CONTAINER
 from openpype.pipeline import CreatedInstance, AutoCreator
-from openpype.pipeline import legacy_io
 from openpype.client import get_asset_by_name
 
 import hou
 
@@ -27,9 +26,9 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator):
         ), None)
 
         project_name = self.project_name
-        asset_name = legacy_io.Session["AVALON_ASSET"]
-        task_name = legacy_io.Session["AVALON_TASK"]
-        host_name = legacy_io.Session["AVALON_APP"]
+        asset_name = self.create_context.get_current_asset_name()
+        task_name = self.create_context.get_current_task_name()
+        host_name = self.host_name
 
         if current_instance is None:
             asset_doc = get_asset_by_name(project_name, asset_name)
diff --git a/openpype/hosts/houdini/plugins/inventory/set_camera_resolution.py b/openpype/hosts/houdini/plugins/inventory/set_camera_resolution.py
new file mode 100644
index 0000000000..18ececb019
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/inventory/set_camera_resolution.py
@@ -0,0 +1,26 @@
+from openpype.pipeline import InventoryAction
+from openpype.hosts.houdini.api.lib 
import ( + get_camera_from_container, + set_camera_resolution +) +from openpype.pipeline.context_tools import get_current_project_asset + + +class SetCameraResolution(InventoryAction): + + label = "Set Camera Resolution" + icon = "desktop" + color = "orange" + + @staticmethod + def is_compatible(container): + return ( + container.get("loader") == "CameraLoader" + ) + + def process(self, containers): + asset_doc = get_current_project_asset() + for container in containers: + node = container["node"] + camera = get_camera_from_container(node) + set_camera_resolution(camera, asset_doc) diff --git a/openpype/hosts/houdini/plugins/load/load_alembic.py b/openpype/hosts/houdini/plugins/load/load_alembic.py index c6f0ebf2f9..48bd730ebe 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic.py @@ -20,7 +20,8 @@ class AbcLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py index 47d2e1b896..3a577f72b4 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py @@ -21,7 +21,8 @@ class AbcArchiveLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_bgeo.py b/openpype/hosts/houdini/plugins/load/load_bgeo.py index 86e8675c02..489bf944ed 100644 --- a/openpype/hosts/houdini/plugins/load/load_bgeo.py +++ b/openpype/hosts/houdini/plugins/load/load_bgeo.py @@ -34,7 +34,6 @@ class BgeoLoader(load.LoaderPlugin): # Create a new geo node container = obj.createNode("geo", node_name=node_name) - is_sequence = bool(context["representation"]["context"].get("frame")) # Remove the file node, it only loads static meshes # Houdini 17 has removed the file node from the geo node @@ -43,9 +42,10 @@ class BgeoLoader(load.LoaderPlugin): file_node.destroy() # Explicitly create a file node + path = self.filepath_from_context(context) file_node = container.createNode("file", node_name=node_name) file_node.setParms( - {"file": self.format_path(self.fname, context["representation"])}) + {"file": self.format_path(path, context["representation"])}) # Set display on last node file_node.setDisplayFlag(True) diff --git a/openpype/hosts/houdini/plugins/load/load_camera.py b/openpype/hosts/houdini/plugins/load/load_camera.py index 6365508f4e..e16146a267 100644 --- a/openpype/hosts/houdini/plugins/load/load_camera.py +++ b/openpype/hosts/houdini/plugins/load/load_camera.py @@ -4,6 +4,13 @@ from openpype.pipeline import ( ) from openpype.hosts.houdini.api import pipeline +from openpype.hosts.houdini.api.lib import ( + set_camera_resolution, + get_camera_from_container +) + +import hou + ARCHIVE_EXPRESSION = ('__import__("_alembic_hom_extensions")' '.alembicGetCameraDict') @@ -25,7 +32,15 @@ def transfer_non_default_values(src, dest, ignore=None): channel expression and ignore certain Parm types. 
""" - import hou + + ignore_types = { + hou.parmTemplateType.Toggle, + hou.parmTemplateType.Menu, + hou.parmTemplateType.Button, + hou.parmTemplateType.FolderSet, + hou.parmTemplateType.Separator, + hou.parmTemplateType.Label, + } src.updateParmStates() @@ -62,14 +77,6 @@ def transfer_non_default_values(src, dest, ignore=None): continue # Ignore folders, separators, etc. - ignore_types = { - hou.parmTemplateType.Toggle, - hou.parmTemplateType.Menu, - hou.parmTemplateType.Button, - hou.parmTemplateType.FolderSet, - hou.parmTemplateType.Separator, - hou.parmTemplateType.Label, - } if parm.parmTemplate().type() in ignore_types: continue @@ -90,12 +97,8 @@ class CameraLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): - import os - import hou - # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) - file_path = file_path.replace("\\", "/") + file_path = self.filepath_from_context(context).replace("\\", "/") # Get the root node obj = hou.node("/obj") @@ -105,19 +108,21 @@ class CameraLoader(load.LoaderPlugin): node_name = "{}_{}".format(namespace, name) if namespace else name # Create a archive node - container = self.create_and_connect(obj, "alembicarchive", node_name) + node = self.create_and_connect(obj, "alembicarchive", node_name) # TODO: add FPS of project / asset - container.setParms({"fileName": file_path, - "channelRef": True}) + node.setParms({"fileName": file_path, "channelRef": True}) # Apply some magic - container.parm("buildHierarchy").pressButton() - container.moveToGoodPosition() + node.parm("buildHierarchy").pressButton() + node.moveToGoodPosition() # Create an alembic xform node - nodes = [container] + nodes = [node] + camera = get_camera_from_container(node) + self._match_maya_render_mask(camera) + set_camera_resolution(camera, asset_doc=context["asset"]) self[:] = nodes return pipeline.containerise(node_name, @@ -142,14 +147,14 @@ class CameraLoader(load.LoaderPlugin): # Store the cam temporarily next to the Alembic Archive # so that we can preserve parm values the user set on it # after build hierarchy was triggered. 
- old_camera = self._get_camera(node) + old_camera = get_camera_from_container(node) temp_camera = old_camera.copyTo(node.parent()) # Rebuild node.parm("buildHierarchy").pressButton() # Apply values to the new camera - new_camera = self._get_camera(node) + new_camera = get_camera_from_container(node) transfer_non_default_values(temp_camera, new_camera, # The hidden uniform scale attribute @@ -157,6 +162,9 @@ class CameraLoader(load.LoaderPlugin): # "icon_scale" just skip that completely ignore={"scale"}) + self._match_maya_render_mask(new_camera) + set_camera_resolution(new_camera) + temp_camera.destroy() def remove(self, container): @@ -164,15 +172,6 @@ class CameraLoader(load.LoaderPlugin): node = container["node"] node.destroy() - def _get_camera(self, node): - import hou - cameras = node.recursiveGlob("*", - filter=hou.nodeTypeFilter.ObjCamera, - include_subnets=False) - - assert len(cameras) == 1, "Camera instance must have only one camera" - return cameras[0] - def create_and_connect(self, node, node_type, name=None): """Create a node within a node which and connect it to the input @@ -193,5 +192,20 @@ class CameraLoader(load.LoaderPlugin): new_node.moveToGoodPosition() return new_node - def switch(self, container, representation): - self.update(container, representation) + def _match_maya_render_mask(self, camera): + """Workaround to match Maya render mask in Houdini""" + + # print("Setting match maya render mask ") + parm = camera.parm("aperture") + expression = parm.expression() + expression = expression.replace("return ", "aperture = ") + expression += """ +# Match maya render mask (logic from Houdini's own FBX importer) +node = hou.pwd() +resx = node.evalParm('resx') +resy = node.evalParm('resy') +aspect = node.evalParm('aspect') +aperture *= min(1, (resx / resy * aspect) / 1.5) +return aperture +""" + parm.setExpression(expression, language=hou.exprLanguage.Python) diff --git a/openpype/hosts/houdini/plugins/load/load_fbx.py b/openpype/hosts/houdini/plugins/load/load_fbx.py new file mode 100644 index 0000000000..cac22d62d4 --- /dev/null +++ b/openpype/hosts/houdini/plugins/load/load_fbx.py @@ -0,0 +1,139 @@ +# -*- coding: utf-8 -*- +"""Fbx Loader for houdini. """ +from openpype.pipeline import ( + load, + get_representation_path, +) +from openpype.hosts.houdini.api import pipeline + + +class FbxLoader(load.LoaderPlugin): + """Load fbx files. 
""" + + label = "Load FBX" + icon = "code-fork" + color = "orange" + + order = -10 + + families = ["staticMesh", "fbx"] + representations = ["fbx"] + + def load(self, context, name=None, namespace=None, data=None): + + # get file path from context + file_path = self.filepath_from_context(context) + file_path = file_path.replace("\\", "/") + + # get necessary data + namespace, node_name = self.get_node_name(context, name, namespace) + + # create load tree + nodes = self.create_load_node_tree(file_path, node_name, name) + + self[:] = nodes + + # Call containerise function which does some automations for you + # like moving created nodes to the AVALON_CONTAINERS subnetwork + containerised_nodes = pipeline.containerise( + node_name, + namespace, + nodes, + context, + self.__class__.__name__, + suffix="", + ) + + return containerised_nodes + + def update(self, container, representation): + + node = container["node"] + try: + file_node = next( + n for n in node.children() if n.type().name() == "file" + ) + except StopIteration: + self.log.error("Could not find node of type `file`") + return + + # Update the file path from representation + file_path = get_representation_path(representation) + file_path = file_path.replace("\\", "/") + + file_node.setParms({"file": file_path}) + + # Update attribute + node.setParms({"representation": str(representation["_id"])}) + + def remove(self, container): + + node = container["node"] + node.destroy() + + def switch(self, container, representation): + self.update(container, representation) + + def get_node_name(self, context, name=None, namespace=None): + """Define node name.""" + + if not namespace: + namespace = context["asset"]["name"] + + if namespace: + node_name = "{}_{}".format(namespace, name) + else: + node_name = name + + return namespace, node_name + + def create_load_node_tree(self, file_path, node_name, subset_name): + """Create Load network. + + you can start building your tree at any obj level. + it'll be much easier to build it in the root obj level. + + Afterwards, your tree will be automatically moved to + '/obj/AVALON_CONTAINERS' subnetwork. + """ + import hou + + # Get the root obj level + obj = hou.node("/obj") + + # Create a new obj geo node + parent_node = obj.createNode("geo", node_name=node_name) + + # In older houdini, + # when reating a new obj geo node, a default file node will be + # automatically created. + # so, we will delete it if exists. + file_node = parent_node.node("file1") + if file_node: + file_node.destroy() + + # Create a new file node + file_node = parent_node.createNode("file", node_name=node_name) + file_node.setParms({"file": file_path}) + + # Create attribute delete + attribdelete_name = "attribdelete_{}".format(subset_name) + attribdelete = parent_node.createNode("attribdelete", + node_name=attribdelete_name) + attribdelete.setParms({"ptdel": "fbx_*"}) + attribdelete.setInput(0, file_node) + + # Create a Null node + null_name = "OUT_{}".format(subset_name) + null = parent_node.createNode("null", node_name=null_name) + null.setInput(0, attribdelete) + + # Ensure display flag is on the file_node input node and not on the OUT + # node to optimize "debug" displaying in the viewport. 
+ file_node.setDisplayFlag(True) + + # Set new position for children nodes + parent_node.layoutChildren() + + # Return all the nodes + return [parent_node, file_node, attribdelete, null] diff --git a/openpype/hosts/houdini/plugins/load/load_hda.py b/openpype/hosts/houdini/plugins/load/load_hda.py index 2438570c6e..9630716253 100644 --- a/openpype/hosts/houdini/plugins/load/load_hda.py +++ b/openpype/hosts/houdini/plugins/load/load_hda.py @@ -21,7 +21,8 @@ class HdaLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node @@ -58,6 +59,9 @@ class HdaLoader(load.LoaderPlugin): def_paths = [d.libraryFilePath() for d in defs] new = def_paths.index(file_path) defs[new].setIsPreferred(True) + hda_node.setParms({ + "representation": str(representation["_id"]) + }) def remove(self, container): node = container["node"] diff --git a/openpype/hosts/houdini/plugins/load/load_image.py b/openpype/hosts/houdini/plugins/load/load_image.py index 26bc569c53..663a93e48b 100644 --- a/openpype/hosts/houdini/plugins/load/load_image.py +++ b/openpype/hosts/houdini/plugins/load/load_image.py @@ -55,7 +55,8 @@ class ImageLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") file_path = self._get_file_sequence(file_path) diff --git a/openpype/hosts/houdini/plugins/load/load_usd_layer.py b/openpype/hosts/houdini/plugins/load/load_usd_layer.py index 1f0ec25128..1528cf549f 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_layer.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_layer.py @@ -26,7 +26,8 @@ class USDSublayerLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_usd_reference.py b/openpype/hosts/houdini/plugins/load/load_usd_reference.py index f66d05395e..8402ad072c 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_reference.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_reference.py @@ -26,7 +26,8 @@ class USDReferenceLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_vdb.py b/openpype/hosts/houdini/plugins/load/load_vdb.py index 87900502c5..bcc4f200d3 100644 --- a/openpype/hosts/houdini/plugins/load/load_vdb.py +++ b/openpype/hosts/houdini/plugins/load/load_vdb.py @@ -40,8 +40,9 @@ class VdbLoader(load.LoaderPlugin): # Explicitly create a file node file_node = container.createNode("file", node_name=node_name) + path = self.filepath_from_context(context) file_node.setParms( - {"file": self.format_path(self.fname, context["representation"])}) + {"file": self.format_path(path, context["representation"])}) # Set display on 
last node file_node.setDisplayFlag(True) diff --git a/openpype/hosts/houdini/plugins/load/show_usdview.py b/openpype/hosts/houdini/plugins/load/show_usdview.py index 2737bc40fa..7b03a0738a 100644 --- a/openpype/hosts/houdini/plugins/load/show_usdview.py +++ b/openpype/hosts/houdini/plugins/load/show_usdview.py @@ -20,7 +20,8 @@ class ShowInUsdview(load.LoaderPlugin): usdview = find_executable("usdview") - filepath = os.path.normpath(self.fname) + filepath = self.filepath_from_context(context) + filepath = os.path.normpath(filepath) filepath = filepath.replace("\\", "/") if not os.path.exists(filepath): diff --git a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py index 614785487f..43b8428c60 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py @@ -50,7 +50,7 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): num_aovs = rop.evalParm("ar_aovs") for index in range(1, num_aovs + 1): # Skip disabled AOVs - if not rop.evalParm("ar_enable_aovP{}".format(index)): + if not rop.evalParm("ar_enable_aov{}".format(index)): continue if rop.evalParm("ar_aov_exr_enable_layer_name{}".format(index)): diff --git a/openpype/hosts/houdini/plugins/publish/collect_frames.py b/openpype/hosts/houdini/plugins/publish/collect_frames.py index 91a3d9d170..01df809d4c 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_frames.py +++ b/openpype/hosts/houdini/plugins/publish/collect_frames.py @@ -13,7 +13,8 @@ class CollectFrames(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.01 label = "Collect Frames" - families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"] + families = ["vdbcache", "imagesequence", "ass", + "redshiftproxy", "review", "bgeo"] def process(self, instance): @@ -32,9 +33,9 @@ class CollectFrames(pyblish.api.InstancePlugin): output = output_parm.eval() _, ext = lib.splitext( - output, - allowed_multidot_extensions=[".ass.gz"] - ) + output, allowed_multidot_extensions=[ + ".ass.gz", ".bgeo.sc", ".bgeo.gz", + ".bgeo.lzma", ".bgeo.bz2"]) file_name = os.path.basename(output) result = file_name @@ -76,7 +77,7 @@ class CollectFrames(pyblish.api.InstancePlugin): frame = match.group(1) padding = len(frame) - # Get the parts of the filename surrounding the frame number + # Get the parts of the filename surrounding the frame number, # so we can put our own frame numbers in. 
span = match.span(1) prefix = match.string[: span[0]] diff --git a/openpype/hosts/houdini/plugins/publish/collect_output_node.py b/openpype/hosts/houdini/plugins/publish/collect_output_node.py index 601ed17b39..bca3d9fdc1 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/collect_output_node.py @@ -1,5 +1,7 @@ import pyblish.api +from openpype.pipeline.publish import KnownPublishError + class CollectOutputSOPPath(pyblish.api.InstancePlugin): """Collect the out node's SOP/COP Path value.""" @@ -12,7 +14,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin): "imagesequence", "usd", "usdrender", - "redshiftproxy" + "redshiftproxy", + "staticMesh" ] hosts = ["houdini"] @@ -57,9 +60,13 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin): elif node_type == "Redshift_Proxy_Output": out_node = node.parm("RS_archive_sopPath").evalAsNode() + + elif node_type == "filmboxfbx": + out_node = node.parm("startnode").evalAsNode() + else: - raise ValueError( - "ROP node type '%s' is" " not supported." % node_type + raise KnownPublishError( + "ROP node type '{}' is not supported.".format(node_type) ) if not out_node: diff --git a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py new file mode 100644 index 0000000000..3323e97c20 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py @@ -0,0 +1,21 @@ +"""Collector for pointcache types. + +This will add additional family to pointcache instance based on +the creator_identifier parameter. +""" +import pyblish.api + + +class CollectPointcacheType(pyblish.api.InstancePlugin): + """Collect data type for pointcache instance.""" + + order = pyblish.api.CollectorOrder + hosts = ["houdini"] + families = ["pointcache"] + label = "Collect type of pointcache" + + def process(self, instance): + if instance.data["creator_identifier"] == "io.openpype.creators.houdini.bgeo": # noqa: E501 + instance.data["families"] += ["bgeo"] + elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.pointcache": # noqa: E501 + instance.data["families"] += ["abc"] diff --git a/openpype/hosts/houdini/plugins/publish/collect_staticmesh_type.py b/openpype/hosts/houdini/plugins/publish/collect_staticmesh_type.py new file mode 100644 index 0000000000..db9efec7a1 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_staticmesh_type.py @@ -0,0 +1,20 @@ +# -*- coding: utf-8 -*- +"""Collector for staticMesh types. """ + +import pyblish.api + + +class CollectStaticMeshType(pyblish.api.InstancePlugin): + """Collect data type for fbx instance.""" + + hosts = ["houdini"] + families = ["staticMesh"] + label = "Collect type of staticMesh" + + order = pyblish.api.CollectorOrder + + def process(self, instance): + + if instance.data["creator_identifier"] == "io.openpype.creators.houdini.staticmesh.fbx": # noqa: E501 + # Marking this instance as FBX triggers the FBX extractor. 
+ instance.data["families"] += ["fbx"] diff --git a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py index 81274c670e..14a8e3c056 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py +++ b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py @@ -1,7 +1,6 @@ import pyblish.api from openpype.client import get_subset_by_name, get_asset_by_name -from openpype.pipeline import legacy_io import openpype.lib.usdlib as usdlib @@ -51,7 +50,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin): self.log.debug("Add bootstrap for: %s" % bootstrap) - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] asset = get_asset_by_name(project_name, instance.data["asset"]) assert asset, "Asset must exist: %s" % asset diff --git a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py index d4fe37f993..277f922ba4 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py @@ -80,14 +80,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin): def get_beauty_render_product(self, prefix, suffix=""): """Return the beauty output filename if render element enabled """ + # Remove aov suffix from the product: `prefix.aov_suffix` -> `prefix` aov_parm = ".{}".format(suffix) - beauty_product = None - if aov_parm in prefix: - beauty_product = prefix.replace(aov_parm, "") - else: - beauty_product = prefix - - return beauty_product + return prefix.replace(aov_parm, "") def get_render_element_name(self, node, prefix, suffix=""): """Return the output filename using the AOV prefix and suffix diff --git a/openpype/hosts/houdini/plugins/publish/extract_alembic.py b/openpype/hosts/houdini/plugins/publish/extract_alembic.py index cb2d4ef424..bdd19b23d4 100644 --- a/openpype/hosts/houdini/plugins/publish/extract_alembic.py +++ b/openpype/hosts/houdini/plugins/publish/extract_alembic.py @@ -13,7 +13,7 @@ class ExtractAlembic(publish.Extractor): order = pyblish.api.ExtractorOrder label = "Extract Alembic" hosts = ["houdini"] - families = ["pointcache", "camera"] + families = ["abc", "camera"] def process(self, instance): diff --git a/openpype/hosts/houdini/plugins/publish/extract_bgeo.py b/openpype/hosts/houdini/plugins/publish/extract_bgeo.py new file mode 100644 index 0000000000..c9625ec880 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/extract_bgeo.py @@ -0,0 +1,53 @@ +import os + +import pyblish.api + +from openpype.pipeline import publish +from openpype.hosts.houdini.api.lib import render_rop +from openpype.hosts.houdini.api import lib + +import hou + + +class ExtractBGEO(publish.Extractor): + + order = pyblish.api.ExtractorOrder + label = "Extract BGEO" + hosts = ["houdini"] + families = ["bgeo"] + + def process(self, instance): + + ropnode = hou.node(instance.data["instance_node"]) + + # Get the filename from the filename parameter + output = ropnode.evalParm("sopoutput") + staging_dir, file_name = os.path.split(output) + instance.data["stagingDir"] = staging_dir + + # We run the render + self.log.info("Writing bgeo files '{}' to '{}'.".format( + file_name, staging_dir)) + + # write files + render_rop(ropnode) + + output = instance.data["frames"] + + _, ext = lib.splitext( + output[0], allowed_multidot_extensions=[ + ".ass.gz", ".bgeo.sc", ".bgeo.gz", + ".bgeo.lzma", ".bgeo.bz2"]) + + if "representations" 
not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "bgeo", + "ext": ext.lstrip("."), + "files": output, + "stagingDir": staging_dir, + "frameStart": instance.data["frameStart"], + "frameEnd": instance.data["frameEnd"] + } + instance.data["representations"].append(representation) diff --git a/openpype/hosts/houdini/plugins/publish/extract_fbx.py b/openpype/hosts/houdini/plugins/publish/extract_fbx.py new file mode 100644 index 0000000000..7993b3352f --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/extract_fbx.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +"""Fbx Extractor for houdini. """ + +import os +import pyblish.api +from openpype.pipeline import publish +from openpype.hosts.houdini.api.lib import render_rop + +import hou + + +class ExtractFBX(publish.Extractor): + + label = "Extract FBX" + families = ["fbx"] + hosts = ["houdini"] + + order = pyblish.api.ExtractorOrder + 0.1 + + def process(self, instance): + + # get rop node + ropnode = hou.node(instance.data.get("instance_node")) + output_file = ropnode.evalParm("sopoutput") + + # get staging_dir and file_name + staging_dir = os.path.normpath(os.path.dirname(output_file)) + file_name = os.path.basename(output_file) + + # render rop + self.log.debug("Writing FBX '%s' to '%s'", file_name, staging_dir) + render_rop(ropnode) + + # prepare representation + representation = { + "name": "fbx", + "ext": "fbx", + "files": file_name, + "stagingDir": staging_dir + } + + # A single frame may also be rendered without start/end frame. + if "frameStart" in instance.data and "frameEnd" in instance.data: + representation["frameStart"] = instance.data["frameStart"] + representation["frameEnd"] = instance.data["frameEnd"] + + # set value type for 'representations' key to list + if "representations" not in instance.data: + instance.data["representations"] = [] + + # update instance data + instance.data["stagingDir"] = staging_dir + instance.data["representations"].append(representation) diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py index 8422a3bc3e..d6193f13c1 100644 --- a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py +++ b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py @@ -14,7 +14,6 @@ from openpype.client import ( ) from openpype.pipeline import ( get_representation_path, - legacy_io, publish, ) import openpype.hosts.houdini.api.usd as hou_usdlib @@ -250,7 +249,7 @@ class ExtractUSDLayered(publish.Extractor): # Set up the dependency for publish if they have new content # compared to previous publishes - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] for dependency in active_dependencies: dependency_fname = dependency.data["usdFilename"] diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py index 2493b28bc1..3569de7693 100644 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py +++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py @@ -2,7 +2,7 @@ import pyblish.api from openpype.lib import version_up from openpype.pipeline import registered_host -from openpype.action import get_errored_plugins_from_data +from openpype.pipeline.publish import get_errored_plugins_from_context from openpype.hosts.houdini.api import HoudiniHost from openpype.pipeline.publish import KnownPublishError @@ -27,7 +27,7 @@ class 
IncrementCurrentFile(pyblish.api.ContextPlugin):
 
     def process(self, context):
 
-        errored_plugins = get_errored_plugins_from_data(context)
+        errored_plugins = get_errored_plugins_from_context(context)
         if any(
             plugin.__name__ == "HoudiniSubmitPublishDeadline"
             for plugin in errored_plugins
@@ -40,9 +40,10 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
 
         # Filename must not have changed since collecting
         host = registered_host()  # type: HoudiniHost
         current_file = host.current_file()
-        assert (
-            context.data["currentFile"] == current_file
-        ), "Collected filename mismatches from current scene name."
+        if context.data["currentFile"] != current_file:
+            raise KnownPublishError(
+                "Collected filename does not match the current scene name."
+            )
 
         new_filepath = version_up(current_file)
         host.save_workfile(new_filepath)
diff --git a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
index bef8db45a4..af9e080466 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
@@ -17,7 +17,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
     """
 
     order = pyblish.api.ValidatorOrder + 0.1
-    families = ["pointcache"]
+    families = ["abc"]
     hosts = ["houdini"]
     label = "Validate Primitive to Detail (Abc)"
diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
index 44d58cfa36..40114bc40e 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
@@ -18,7 +18,7 @@ class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin):
     """
 
     order = pyblish.api.ValidatorOrder + 0.1
-    families = ["pointcache"]
+    families = ["abc"]
     hosts = ["houdini"]
     label = "Validate Alembic ROP Face Sets"
diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
index b0cf4cdc58..47c47e4ea2 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
@@ -14,7 +14,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin):
     """
 
     order = pyblish.api.ValidatorOrder + 0.1
-    families = ["pointcache"]
+    families = ["abc"]
     hosts = ["houdini"]
     label = "Validate Input Node (Abc)"
diff --git a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
index 4878738ed3..79387fbef5 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
@@ -1,5 +1,6 @@
 import pyblish.api
 
+from openpype.pipeline.publish import PublishValidationError
 from openpype.hosts.houdini.api import lib
 
 import hou
 
@@ -30,7 +31,7 @@ class ValidateAnimationSettings(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Output settings do not match for '%s'" % instance
             )
diff --git a/openpype/hosts/houdini/plugins/publish/validate_fbx_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_fbx_output_node.py
new file mode 100644
index 0000000000..894dad7d72
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/publish/validate_fbx_output_node.py
@@ -0,0 +1,140 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from openpype.hosts.houdini.api.action import (
+    SelectInvalidAction,
+    SelectROPAction,
+)
+from openpype.hosts.houdini.api.lib import get_obj_node_output
+import hou
+
+
+class ValidateFBXOutputNode(pyblish.api.InstancePlugin):
+    """Validate the instance Output Node.
+
+    This will ensure:
+        - The Output Node Path is set.
+        - The Output Node Path refers to an existing object.
+        - The Output Node is a Sop or Obj node.
+        - The Output Node has geometry data.
+        - The Output Node doesn't include invalid primitive types.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    families = ["fbx"]
+    hosts = ["houdini"]
+    label = "Validate FBX Output Node"
+    actions = [SelectROPAction, SelectInvalidAction]
+
+    def process(self, instance):
+
+        invalid = self.get_invalid(instance)
+        if invalid:
+            nodes = [n.path() for n in invalid]
+            raise PublishValidationError(
+                "See log for details. "
+                "Invalid nodes: {0}".format(nodes),
+                title="Invalid output node(s)"
+            )
+
+    @classmethod
+    def get_invalid(cls, instance):
+        output_node = instance.data.get("output_node")
+
+        # Check if the Output Node Path is set and
+        # refers to an existing object.
+        if output_node is None:
+            rop_node = hou.node(instance.data["instance_node"])
+            cls.log.error(
+                "Output node in '%s' does not exist. "
+                "Ensure a valid output path is set.", rop_node.path()
+            )
+
+            return [rop_node]
+
+        # Check if the Output Node is a Sop or an Obj node
+        # also, list all sop output nodes inside as well as
+        # invalid empty nodes.
+        all_out_sops = []
+        invalid = []
+
+        # if output_node is an ObjSubnet or an ObjNetwork
+        if output_node.childTypeCategory() == hou.objNodeTypeCategory():
+            for node in output_node.allSubChildren():
+                if node.type().name() == "geo":
+                    out = get_obj_node_output(node)
+                    if out:
+                        all_out_sops.append(out)
+                    else:
+                        invalid.append(node)  # empty_objs
+                        cls.log.error(
+                            "Geo Obj Node '%s' is empty!",
+                            node.path()
+                        )
+            if not all_out_sops:
+                invalid.append(output_node)  # empty_objs
+                cls.log.error(
+                    "Output Node '%s' is empty!",
+                    output_node.path()
+                )
+
+        # elif output_node is an ObjNode
+        elif output_node.type().name() == "geo":
+            out = get_obj_node_output(output_node)
+            if out:
+                all_out_sops.append(out)
+            else:
+                invalid.append(output_node)  # empty_objs
+                cls.log.error(
+                    "Output Node '%s' is empty!",
+                    output_node.path()
+                )
+
+        # elif output_node is a SopNode
+        elif output_node.type().category().name() == "Sop":
+            all_out_sops.append(output_node)
+
+        # Then it's a wrong node type
+        else:
+            cls.log.error(
+                "Output node %s is not a SOP or OBJ Geo or OBJ SubNet node. 
" + "Instead found category type: %s %s", + output_node.path(), output_node.type().category().name(), + output_node.type().name() + ) + return [output_node] + + # Check if all output sop nodes have geometry + # and don't contain invalid prims + invalid_prim_types = ["VDB", "Volume"] + for sop_node in all_out_sops: + # Empty Geometry test + if not hasattr(sop_node, "geometry"): + invalid.append(sop_node) # empty_geometry + cls.log.error( + "Sop node '%s' doesn't include any prims.", + sop_node.path() + ) + continue + + frame = instance.data.get("frameStart", 0) + geo = sop_node.geometryAtFrame(frame) + if len(geo.iterPrims()) == 0: + invalid.append(sop_node) # empty_geometry + cls.log.error( + "Sop node '%s' doesn't include any prims.", + sop_node.path() + ) + continue + + # Invalid Prims test + for prim_type in invalid_prim_types: + if geo.countPrimType(prim_type) > 0: + invalid.append(sop_node) # invalid_prims + cls.log.error( + "Sop node '%s' includes invalid prims of type '%s'.", + sop_node.path(), prim_type + ) + + if invalid: + return invalid diff --git a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py index 4584e78f4f..6594d10851 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py +++ b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py @@ -19,12 +19,11 @@ class ValidateFileExtension(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder - families = ["pointcache", "camera", "vdbcache"] + families = ["camera", "vdbcache"] hosts = ["houdini"] label = "Output File Extension" family_extensions = { - "pointcache": ".abc", "camera": ".abc", "vdbcache": ".vdb", } diff --git a/openpype/hosts/houdini/plugins/publish/validate_mesh_is_static.py b/openpype/hosts/houdini/plugins/publish/validate_mesh_is_static.py new file mode 100644 index 0000000000..b499682e0b --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_mesh_is_static.py @@ -0,0 +1,59 @@ +# -*- coding: utf-8 -*- +"""Validator for correct naming of Static Meshes.""" +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.pipeline.publish import ValidateContentsOrder + +from openpype.hosts.houdini.api.action import SelectInvalidAction +from openpype.hosts.houdini.api.lib import get_output_children + + +class ValidateMeshIsStatic(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate mesh is static. + + It checks if output node is time dependent. + """ + + families = ["staticMesh"] + hosts = ["houdini"] + label = "Validate Mesh is Static" + order = ValidateContentsOrder + 0.1 + actions = [SelectInvalidAction] + + def process(self, instance): + + invalid = self.get_invalid(instance) + if invalid: + nodes = [n.path() for n in invalid] + raise PublishValidationError( + "See log for details. " + "Invalid nodes: {0}".format(nodes) + ) + + @classmethod + def get_invalid(cls, instance): + + invalid = [] + + output_node = instance.data.get("output_node") + if output_node is None: + cls.log.debug( + "No Output Node, skipping check.." 
+            )
+            return
+
+        all_outputs = get_output_children(output_node)
+
+        for output in all_outputs:
+            if output.isTimeDependent():
+                invalid.append(output)
+                cls.log.error(
+                    "Output node '%s' is time dependent.",
+                    output.path()
+                )
+
+        return invalid
diff --git a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
index cd5e724ab3..471fa5b6d1 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py
@@ -1,10 +1,19 @@
 # -*- coding: utf-8 -*-
 import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
 from openpype.pipeline import PublishValidationError
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    RepairAction,
+)
+
 import hou
+class AddDefaultPathAction(RepairAction):
+    label = "Add a default path attribute"
+    icon = "mdi.pencil-plus-outline"
+
+
 class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
     """Validate all primitives build hierarchy from attribute when enabled.
@@ -15,15 +24,17 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
     """
     order = ValidateContentsOrder + 0.1
-    families = ["pointcache"]
+    families = ["abc"]
     hosts = ["houdini"]
     label = "Validate Prims Hierarchy Path"
+    actions = [AddDefaultPathAction]
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
+            nodes = [n.path() for n in invalid]
             raise PublishValidationError(
-                "See log for details. " "Invalid nodes: {0}".format(invalid),
+                "See log for details. " "Invalid nodes: {0}".format(nodes),
                 title=self.label
             )
@@ -36,10 +47,10 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
         if output_node is None:
             cls.log.error(
                 "SOP Output node in '%s' does not exist. "
-                "Ensure a valid SOP output path is set." % rop_node.path()
+                "Ensure a valid SOP output path is set.", rop_node.path()
             )
-            return [rop_node.path()]
+            return [rop_node]
         build_from_path = rop_node.parm("build_from_path").eval()
         if not build_from_path:
@@ -56,9 +67,9 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
                 "value set, but 'Build Hierarchy from Attribute' "
                 "is enabled."
             )
-            return [rop_node.path()]
+            return [rop_node]
-        cls.log.debug("Checking for attribute: %s" % path_attr)
+        cls.log.debug("Checking for attribute: %s", path_attr)
         if not hasattr(output_node, "geometry"):
             # In the case someone has explicitly set an Object
@@ -89,17 +100,17 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
         if not attrib:
             cls.log.info(
                 "Geometry Primitives are missing "
-                "path attribute: `%s`" % path_attr
+                "path attribute: `%s`", path_attr
             )
-            return [output_node.path()]
+            return [output_node]
         # Ensure at least a single string value is present
         if not attrib.strings():
             cls.log.info(
                 "Primitive path attribute has no "
-                "string values: %s" % path_attr
+                "string values: %s", path_attr
             )
-            return [output_node.path()]
+            return [output_node]
         paths = geo.primStringAttribValues(path_attr)
         # Ensure all primitives are set to a valid path
@@ -109,6 +120,65 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin):
             num_prims = len(geo.iterPrims())  # faster than len(geo.prims())
             cls.log.info(
                 "Prims have no value for attribute `%s` "
-                "(%s of %s prims)" % (path_attr, len(invalid_prims), num_prims)
+                "(%s of %s prims)", path_attr, len(invalid_prims), num_prims
+            )
+            return [output_node]
+
+    @classmethod
+    def repair(cls, instance):
+        """Add a default path attribute Action.
+
+        It is a helper action more than a repair action,
+        used to add a default single value for the path.
+        """
+
+        rop_node = hou.node(instance.data["instance_node"])
+        output_node = rop_node.parm("sop_path").evalAsNode()
+
+        if not output_node:
+            cls.log.debug(
+                "Action isn't performed, invalid SOP Path on %s",
+                rop_node
+            )
+            return
+
+        # This check prevents the action from running multiple times.
+        # get_invalid only returns [output_node] when
+        # the path attribute is the problem
+        if cls.get_invalid(instance) != [output_node]:
+            return
+
+        path_attr = rop_node.parm("path_attrib").eval()
+
+        path_node = output_node.parent().createNode("name", "AUTO_PATH")
+        path_node.parm("attribname").set(path_attr)
+        path_node.parm("name1").set('`opname("..")`/`opname("..")`Shape')
+
+        cls.log.debug(
+            "'%s' was created. It adds '%s' with a default single value",
+            path_node, path_attr
+        )
+
+        path_node.setGenericFlag(hou.nodeFlag.DisplayComment, True)
+        path_node.setComment(
+            'Auto path node was created automatically by '
+            '"Add a default path attribute"'
+            '\nFeel free to modify or replace it.'
+ ) + + if output_node.type().name() in ["null", "output"]: + # Connect before + path_node.setFirstInput(output_node.input(0)) + path_node.moveToGoodPosition() + output_node.setFirstInput(path_node) + output_node.moveToGoodPosition() + else: + # Connect after + path_node.setFirstInput(output_node) + rop_node.parm("sop_path").set(path_node.path()) + path_node.moveToGoodPosition() + + cls.log.debug( + "SOP path on '%s' updated to new output node '%s'", + rop_node, path_node ) - return [output_node.path()] diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py index 4e8e5fc0e8..4f71d79382 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py +++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py @@ -36,11 +36,11 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin): if node.parm("shellexec").eval(): self.raise_error("Must not execute in shell") if node.parm("prerender").eval() != cmd: - self.raise_error(("REMOTE_PUBLISH node does not have " - "correct prerender script.")) + self.raise_error("REMOTE_PUBLISH node does not have " + "correct prerender script.") if node.parm("lprerender").eval() != "python": - self.raise_error(("REMOTE_PUBLISH node prerender script " - "type not set to 'python'")) + self.raise_error("REMOTE_PUBLISH node prerender script " + "type not set to 'python'") @classmethod def repair(cls, context): @@ -48,5 +48,4 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin): lib.create_remote_publish_node(force=True) def raise_error(self, message): - self.log.error(message) - raise PublishValidationError(message, title=self.label) + raise PublishValidationError(message) diff --git a/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py b/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py new file mode 100644 index 0000000000..03ecd1b052 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py @@ -0,0 +1,90 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.pipeline.publish import RepairAction +from openpype.hosts.houdini.api.action import SelectROPAction + +import os +import hou + + +class SetDefaultViewSpaceAction(RepairAction): + label = "Set default view colorspace" + icon = "mdi.monitor" + + +class ValidateReviewColorspace(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate Review Colorspace parameters. + + It checks if 'OCIO Colorspace' parameter was set to valid value. + """ + + order = pyblish.api.ValidatorOrder + 0.1 + families = ["review"] + hosts = ["houdini"] + label = "Validate Review Colorspace" + actions = [SetDefaultViewSpaceAction, SelectROPAction] + + optional = True + + def process(self, instance): + + if not self.is_active(instance.data): + return + + if os.getenv("OCIO") is None: + self.log.debug( + "Using Houdini's Default Color Management, " + " skipping check.." 
+ ) + return + + rop_node = hou.node(instance.data["instance_node"]) + if rop_node.evalParm("colorcorrect") != 2: + # any colorspace settings other than default requires + # 'Color Correct' parm to be set to 'OpenColorIO' + raise PublishValidationError( + "'Color Correction' parm on '{}' ROP must be set to" + " 'OpenColorIO'".format(rop_node.path()) + ) + + if rop_node.evalParm("ociocolorspace") not in \ + hou.Color.ocio_spaces(): + + raise PublishValidationError( + "Invalid value: Colorspace name doesn't exist.\n" + "Check 'OCIO Colorspace' parameter on '{}' ROP" + .format(rop_node.path()) + ) + + @classmethod + def repair(cls, instance): + """Set Default View Space Action. + + It is a helper action more than a repair action, + used to set colorspace on opengl node to the default view. + """ + from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace # noqa + + rop_node = hou.node(instance.data["instance_node"]) + + if rop_node.evalParm("colorcorrect") != 2: + rop_node.setParms({"colorcorrect": 2}) + cls.log.debug( + "'Color Correction' parm on '{}' has been set to" + " 'OpenColorIO'".format(rop_node.path()) + ) + + # Get default view colorspace name + default_view_space = get_default_display_view_colorspace() + + rop_node.setParms({"ociocolorspace": default_view_space}) + cls.log.info( + "'OCIO Colorspace' parm on '{}' has been set to " + "the default view color space '{}'" + .format(rop_node, default_view_space) + ) diff --git a/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py index ed7f438729..9590e37d26 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py @@ -1,6 +1,12 @@ # -*- coding: utf-8 -*- import pyblish.api from openpype.pipeline import PublishValidationError +from openpype.hosts.houdini.api.action import ( + SelectInvalidAction, + SelectROPAction, +) + +import hou class ValidateSopOutputNode(pyblish.api.InstancePlugin): @@ -18,7 +24,8 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): order = pyblish.api.ValidatorOrder families = ["pointcache", "vdbcache"] hosts = ["houdini"] - label = "Validate Output Node" + label = "Validate Output Node (SOP)" + actions = [SelectROPAction, SelectInvalidAction] def process(self, instance): @@ -31,9 +38,6 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - - import hou - output_node = instance.data.get("output_node") if output_node is None: @@ -43,7 +47,7 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): "Ensure a valid SOP output path is set." % node.path() ) - return [node.path()] + return [node] # Output node must be a Sop node. 
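        # Any SOP-type node passes this check; if the output path resolves
        # to an Object-level node instead, the ROP's SOP output path is
        # pointing at the wrong level of the node network.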
if not isinstance(output_node, hou.SopNode): @@ -53,7 +57,7 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): "instead found category type: %s" % (output_node.path(), output_node.type().category().name()) ) - return [output_node.path()] + return [output_node] # For the sake of completeness also assert the category type # is Sop to avoid potential edge case scenarios even though @@ -73,11 +77,11 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): except hou.Error as exc: cls.log.error("Cook failed: %s" % exc) cls.log.error(output_node.errors()[0]) - return [output_node.path()] + return [output_node] # Ensure the output node has at least Geometry data if not output_node.geometry(): cls.log.error( "Output node `%s` has no geometry data." % output_node.path() ) - return [output_node.path()] + return [output_node] diff --git a/openpype/hosts/houdini/plugins/publish/validate_subset_name.py b/openpype/hosts/houdini/plugins/publish/validate_subset_name.py new file mode 100644 index 0000000000..bb3648f361 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_subset_name.py @@ -0,0 +1,93 @@ +# -*- coding: utf-8 -*- +"""Validator for correct naming of Static Meshes.""" +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) +from openpype.hosts.houdini.api.action import SelectInvalidAction +from openpype.pipeline.create import get_subset_name + +import hou + + +class FixSubsetNameAction(RepairAction): + label = "Fix Subset Name" + + +class ValidateSubsetName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate Subset name. + + """ + + families = ["staticMesh"] + hosts = ["houdini"] + label = "Validate Subset Name" + order = ValidateContentsOrder + 0.1 + actions = [FixSubsetNameAction, SelectInvalidAction] + + optional = True + + def process(self, instance): + + if not self.is_active(instance.data): + return + + invalid = self.get_invalid(instance) + if invalid: + nodes = [n.path() for n in invalid] + raise PublishValidationError( + "See log for details. 
" + "Invalid nodes: {0}".format(nodes) + ) + + @classmethod + def get_invalid(cls, instance): + + invalid = [] + + rop_node = hou.node(instance.data["instance_node"]) + + # Check subset name + subset_name = get_subset_name( + family=instance.data["family"], + variant=instance.data["variant"], + task_name=instance.data["task"], + asset_doc=instance.data["assetEntity"], + dynamic_data={"asset": instance.data["asset"]} + ) + + if instance.data.get("subset") != subset_name: + invalid.append(rop_node) + cls.log.error( + "Invalid subset name on rop node '%s' should be '%s'.", + rop_node.path(), subset_name + ) + + return invalid + + @classmethod + def repair(cls, instance): + rop_node = hou.node(instance.data["instance_node"]) + + # Check subset name + subset_name = get_subset_name( + family=instance.data["family"], + variant=instance.data["variant"], + task_name=instance.data["task"], + asset_doc=instance.data["assetEntity"], + dynamic_data={"asset": instance.data["asset"]} + ) + + instance.data["subset"] = subset_name + rop_node.parm("subset").set(subset_name) + + cls.log.debug( + "Subset name on rop node '%s' has been set to '%s'.", + rop_node.path(), subset_name + ) diff --git a/openpype/hosts/houdini/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/houdini/plugins/publish/validate_unreal_staticmesh_naming.py new file mode 100644 index 0000000000..ae3c7e5602 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_unreal_staticmesh_naming.py @@ -0,0 +1,97 @@ +# -*- coding: utf-8 -*- +"""Validator for correct naming of Static Meshes.""" +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.pipeline.publish import ValidateContentsOrder + +from openpype.hosts.houdini.api.action import SelectInvalidAction +from openpype.hosts.houdini.api.lib import get_output_children + +import hou + + +class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate name of Unreal Static Mesh. + + This validator checks if output node name has a collision prefix: + - UBX + - UCP + - USP + - UCX + + This validator also checks if subset name is correct + - {static mesh prefix}_{Asset-Name}{Variant}. + + """ + + families = ["staticMesh"] + hosts = ["houdini"] + label = "Unreal Static Mesh Name (FBX)" + order = ValidateContentsOrder + 0.1 + actions = [SelectInvalidAction] + + optional = True + collision_prefixes = [] + static_mesh_prefix = "" + + @classmethod + def apply_settings(cls, project_settings, system_settings): + + settings = ( + project_settings["houdini"]["create"]["CreateStaticMesh"] + ) + cls.collision_prefixes = settings["collision_prefixes"] + cls.static_mesh_prefix = settings["static_mesh_prefix"] + + def process(self, instance): + + if not self.is_active(instance.data): + return + + invalid = self.get_invalid(instance) + if invalid: + nodes = [n.path() for n in invalid] + raise PublishValidationError( + "See log for details. " + "Invalid nodes: {0}".format(nodes) + ) + + @classmethod + def get_invalid(cls, instance): + + invalid = [] + + rop_node = hou.node(instance.data["instance_node"]) + output_node = instance.data.get("output_node") + if output_node is None: + cls.log.debug( + "No Output Node, skipping check.." + ) + return + + if rop_node.evalParm("buildfrompath"): + # This validator doesn't support naming check if + # building hierarchy from path' is used + cls.log.info( + "Using 'Build Hierarchy from Path Attribute', skipping check.." 
+            )
+            return
+
+        # Check node names
+        all_outputs = get_output_children(output_node, include_sops=False)
+        for output in all_outputs:
+            for prefix in cls.collision_prefixes:
+                if output.name().startswith(prefix):
+                    invalid.append(output)
+                    cls.log.error(
+                        "Invalid node name: Node '%s' "
+                        "includes a collision prefix '%s'",
+                        output.path(), prefix
+                    )
+                    break
+
+        return invalid
diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
index 02c44ab94e..1daa96f2b9 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
@@ -24,7 +24,7 @@ class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin):
         if not os.path.isabs(filepath):
             invalid.append(
-                "Output file path is not " "absolute path: %s" % filepath
+                "Output file path is not absolute path: %s" % filepath
             )
         if invalid:
diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py
index c4f118ac3b..0db782d545 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py
@@ -4,7 +4,6 @@ import re
 import pyblish.api
 from openpype.client import get_subset_by_name
-from openpype.pipeline import legacy_io
 from openpype.pipeline.publish import ValidateContentsOrder
 from openpype.pipeline import PublishValidationError
@@ -18,7 +17,7 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin):
     label = "USD Shade model exists"
     def process(self, instance):
-        project_name = legacy_io.active_project()
+        project_name = instance.context.data["projectName"]
         asset_name = instance.data["asset"]
         subset = instance.data["subset"]
diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
index 543c8e1407..afe05e3173 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
@@ -7,8 +7,6 @@ from openpype.pipeline import (
 )
 from openpype.pipeline.publish import RepairAction
-from openpype.pipeline.publish import RepairAction
-
 class ValidateWorkfilePaths(
         pyblish.api.InstancePlugin, OptionalPyblishPluginMixin):
diff --git a/openpype/hosts/houdini/startup/MainMenuCommon.xml b/openpype/hosts/houdini/startup/MainMenuCommon.xml
index 47a4653d5d..b2e32a70f9 100644
--- a/openpype/hosts/houdini/startup/MainMenuCommon.xml
+++ b/openpype/hosts/houdini/startup/MainMenuCommon.xml
@@ -2,7 +2,19 @@
[XML element markup lost in extraction]
@@ -74,6 +86,14 @@
openpype.hosts.houdini.api.lib.reset_framerange()
]]>
[XML element markup lost in extraction]
diff --git a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py
index 48019e0a82..310d057a11 100644
--- a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py
+++ b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py
@@ -5,7 +5,7 @@ import husdoutputprocessors.base as base
 import colorbleed.usdlib as usdlib
 from openpype.client import get_asset_by_name
-from openpype.pipeline import legacy_io, Anatomy
+from openpype.pipeline import Anatomy, get_current_project_name
 class AvalonURIOutputProcessor(base.OutputProcessorBase):
@@ -122,7 +122,7 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
         """
-        PROJECT = legacy_io.Session["AVALON_PROJECT"]
+        PROJECT = get_current_project_name()
         anatomy = Anatomy(PROJECT)
         asset_doc = get_asset_by_name(PROJECT, asset)
         if not asset_doc:
diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py
index 995c35792a..8b70b3ced7 100644
--- a/openpype/hosts/max/api/lib.py
+++ b/openpype/hosts/max/api/lib.py
@@ -1,15 +1,35 @@
 # -*- coding: utf-8 -*-
 """Library of functions useful for 3dsmax pipeline."""
 import contextlib
+import logging
 import json
 from typing import Any, Dict, Union
 import six
+from openpype.pipeline import get_current_project_name, colorspace
+from openpype.settings import get_project_settings
 from openpype.pipeline.context_tools import (
-    get_current_project, get_current_project_asset,)
+    get_current_project, get_current_project_asset)
+from openpype.style import load_stylesheet
 from pymxs import runtime as rt
+
 JSON_PREFIX = "JSON::"
+log = logging.getLogger("openpype.hosts.max")
+
+
+def get_main_window():
+    """Acquire Max's main window"""
+    from qtpy import QtWidgets
+    top_widgets = QtWidgets.QApplication.topLevelWidgets()
+    name = "QmaxApplicationWindow"
+    for widget in top_widgets:
+        if (
+            widget.inherits("QMainWindow")
+            and widget.metaObject().className() == name
+        ):
+            return widget
+    raise RuntimeError('Could not find 3dsMax main window.')
 def imprint(node_name: str, data: dict) -> bool:
@@ -78,6 +98,14 @@ def read(container) -> dict:
                 value.startswith(JSON_PREFIX):
             with contextlib.suppress(json.JSONDecodeError):
                 value = json.loads(value[len(JSON_PREFIX):])
+
+        # default value behavior
+        # convert maxscript boolean values
+        if value == "true":
+            value = True
+        elif value == "false":
+            value = False
+
         data[key.strip()] = value
     data["instance_node"] = container.Name
@@ -269,6 +297,7 @@ def set_context_setting():
     """
     reset_scene_resolution()
     reset_frame_range()
+    reset_colorspace()
 def get_max_version():
@@ -284,6 +313,14 @@ def get_max_version():
     return max_info[7]
+
+def is_headless():
+    """Check if 3dsMax runs in batch mode.
+    If it returns True, it runs in 3dsbatch.exe
+    If it returns False, it runs in 3dsmax.exe
+    """
+    return rt.maxops.isInNonInteractiveMode()
+
+
 @contextlib.contextmanager
 def viewport_camera(camera):
     original = rt.viewport.getCamera()
@@ -304,3 +341,159 @@ def set_timeline(frameStart, frameEnd):
     """
     rt.animationRange = rt.interval(frameStart, frameEnd)
     return rt.animationRange
+
+
+def reset_colorspace():
+    """OCIO Configuration
+    Supported in 3dsMax 2024+
+
+    """
+    if int(get_max_version()) < 2024:
+        return
+    project_name = get_current_project_name()
+    colorspace_mgr = rt.ColorPipelineMgr
+    project_settings = get_project_settings(project_name)
+
+    max_config_data = colorspace.get_imageio_config(
+        project_name, "max", project_settings)
+    if max_config_data:
+        ocio_config_path = max_config_data["path"]
+        colorspace_mgr = rt.ColorPipelineMgr
+        colorspace_mgr.Mode = rt.Name("OCIO_Custom")
+        colorspace_mgr.OCIOConfigPath = ocio_config_path
+
+
+def check_colorspace():
+    parent = get_main_window()
+    if parent is None:
+        log.info("Skipping outdated pop-up "
+                 "because Max main window can't be found.")
+    if int(get_max_version()) >= 2024:
+        color_mgr = rt.ColorPipelineMgr
+        project_name = get_current_project_name()
+        project_settings = get_project_settings(project_name)
+        max_config_data = colorspace.get_imageio_config(
+            project_name, "max", project_settings)
+        if max_config_data and color_mgr.Mode != rt.Name("OCIO_Custom"):
+            if not is_headless():
+                from openpype.widgets import popup
+                dialog = popup.Popup(parent=parent)
+                dialog.setWindowTitle("Warning: Wrong OCIO Mode")
+                dialog.setMessage("This scene has the wrong OCIO "
+                                  "Mode setting.")
+                dialog.setButtonText("Fix")
+                dialog.setStyleSheet(load_stylesheet())
+                dialog.on_clicked.connect(reset_colorspace)
+                dialog.show()
+
+
+def unique_namespace(namespace, format="%02d",
+                     prefix="", suffix="", con_suffix="CON"):
+    """Return unique namespace
+
+    Arguments:
+        namespace (str): Name of namespace to consider
+        format (str, optional): Formatting of the given iteration number
+        suffix (str, optional): Only consider namespaces with this suffix.
+        con_suffix: max only, for finding the name of the master container
+
+    >>> unique_namespace("bar")
+    # bar01
+    >>> unique_namespace(":hello")
+    # :hello01
+    >>> unique_namespace("bar:", suffix="_NS")
+    # bar01_NS:
+
+    """
+
+    def current_namespace():
+        current = namespace
+        # When inside a namespace Max adds no trailing :
+        if not current.endswith(":"):
+            current += ":"
+        return current
+
+    # Always check against the absolute namespace root
+    # There's no clash with :x if we're defining namespace :a:x
+    ROOT = ":" if namespace.startswith(":") else current_namespace()
+
+    # Strip trailing `:` tokens since we might want to add a suffix
+    start = ":" if namespace.startswith(":") else ""
+    end = ":" if namespace.endswith(":") else ""
+    namespace = namespace.strip(":")
+    if ":" in namespace:
+        # Split off any nesting that we don't uniqify anyway.
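+        # e.g. rsplit(":", 1) turns "char:hero" into parents "char" and
+        # leaf "hero"; only the leaf receives the numeric suffix below.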
+ parents, namespace = namespace.rsplit(":", 1) + start += parents + ":" + ROOT += start + + iteration = 1 + increment_version = True + while increment_version: + nr_namespace = namespace + format % iteration + unique = prefix + nr_namespace + suffix + container_name = f"{unique}:{namespace}{con_suffix}" + if not rt.getNodeByName(container_name): + name_space = start + unique + end + increment_version = False + return name_space + else: + increment_version = True + iteration += 1 + + +def get_namespace(container_name): + """Get the namespace and name of the sub-container + + Args: + container_name (str): the name of master container + + Raises: + RuntimeError: when there is no master container found + + Returns: + namespace (str): namespace of the sub-container + name (str): name of the sub-container + """ + node = rt.getNodeByName(container_name) + if not node: + raise RuntimeError("Master Container Not Found..") + name = rt.getUserProp(node, "name") + namespace = rt.getUserProp(node, "namespace") + return namespace, name + + +def object_transform_set(container_children): + """A function which allows to store the transform of + previous loaded object(s) + Args: + container_children(list): A list of nodes + + Returns: + transform_set (dict): A dict with all transform data of + the previous loaded object(s) + """ + transform_set = {} + for node in container_children: + name = f"{node.name}.transform" + transform_set[name] = node.pos + name = f"{node.name}.scale" + transform_set[name] = node.scale + return transform_set + + +def get_plugins() -> list: + """Get all loaded plugins in 3dsMax + + Returns: + plugin_info_list: a list of loaded plugins + """ + manager = rt.PluginManager + count = manager.pluginDllCount + plugin_info_list = [] + for p in range(1, count + 1): + plugin_info = manager.pluginDllName(p) + plugin_info_list.append(plugin_info) + + return plugin_info_list diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py index 3074f8e170..90608737c2 100644 --- a/openpype/hosts/max/api/lib_renderproducts.py +++ b/openpype/hosts/max/api/lib_renderproducts.py @@ -7,15 +7,18 @@ import os from pymxs import runtime as rt from openpype.hosts.max.api.lib import get_current_renderer -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name from openpype.settings import get_project_settings class RenderProducts(object): def __init__(self, project_settings=None): - self._project_settings = project_settings or get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + self._project_settings = project_settings + if not self._project_settings: + self._project_settings = get_project_settings( + get_current_project_name() + ) def get_beauty(self, container): render_dir = os.path.dirname(rt.rendOutputFilename) diff --git a/openpype/hosts/max/api/lib_rendersettings.py b/openpype/hosts/max/api/lib_rendersettings.py index 91e4a5bf9b..26e176aa8d 100644 --- a/openpype/hosts/max/api/lib_rendersettings.py +++ b/openpype/hosts/max/api/lib_rendersettings.py @@ -2,7 +2,7 @@ import os from pymxs import runtime as rt from openpype.lib import Logger from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name from openpype.pipeline.context_tools import get_current_project_asset from openpype.hosts.max.api.lib import ( @@ -31,19 +31,16 @@ class RenderSettings(object): self._project_settings = project_settings if not self._project_settings: 
self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] + get_current_project_name() ) def set_render_camera(self, selection): for sel in selection: # to avoid Attribute Error from pymxs wrapper - found = False if rt.classOf(sel) in rt.Camera.classes: - found = True rt.viewport.setCamera(sel) - break - if not found: - raise RuntimeError("Camera not found") + return + raise RuntimeError("Active Camera not found") def render_output(self, container): folder = rt.maxFilePath @@ -113,7 +110,8 @@ class RenderSettings(object): # for setting up renderable camera arv = rt.MAXToAOps.ArnoldRenderView() render_camera = rt.viewport.GetCamera() - arv.setOption("Camera", str(render_camera)) + if render_camera: + arv.setOption("Camera", str(render_camera)) # TODO: add AOVs and extension img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa diff --git a/openpype/hosts/max/api/menu.py b/openpype/hosts/max/api/menu.py index 066cc90039..364f9cd5c5 100644 --- a/openpype/hosts/max/api/menu.py +++ b/openpype/hosts/max/api/menu.py @@ -119,6 +119,10 @@ class OpenPypeMenu(object): frame_action.triggered.connect(self.frame_range_callback) openpype_menu.addAction(frame_action) + colorspace_action = QtWidgets.QAction("Set Colorspace", openpype_menu) + colorspace_action.triggered.connect(self.colorspace_callback) + openpype_menu.addAction(colorspace_action) + return openpype_menu def load_callback(self): @@ -148,3 +152,7 @@ class OpenPypeMenu(object): def frame_range_callback(self): """Callback to reset frame range""" return lib.reset_frame_range() + + def colorspace_callback(self): + """Callback to reset colorspace""" + return lib.reset_colorspace() diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py index 03b85a4066..e46c4cabe7 100644 --- a/openpype/hosts/max/api/pipeline.py +++ b/openpype/hosts/max/api/pipeline.py @@ -15,6 +15,7 @@ from openpype.pipeline import ( ) from openpype.hosts.max.api.menu import OpenPypeMenu from openpype.hosts.max.api import lib +from openpype.hosts.max.api.plugin import MS_CUSTOM_ATTRIB from openpype.hosts.max import MAX_HOST_DIR from pymxs import runtime as rt # noqa @@ -56,6 +57,9 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): rt.callbacks.addScript(rt.Name('systemPostNew'), context_setting) + rt.callbacks.addScript(rt.Name('filePostOpen'), + lib.check_colorspace) + def has_unsaved_changes(self): # TODO: how to get it from 3dsmax? 
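        # A possible approach here would be MaxScript's getSaveRequired()
        # (rt.getSaveRequired() via pymxs), which reports whether the
        # current scene has unsaved modifications.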
return True @@ -152,21 +156,87 @@ def ls() -> list: yield lib.read(container) -def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"): +def containerise(name: str, nodes: list, context, + namespace=None, loader=None, suffix="_CON"): data = { "schema": "openpype:container-2.0", "id": AVALON_CONTAINER_ID, "name": name, - "namespace": "", + "namespace": namespace or "", "loader": loader, "representation": context["representation"]["_id"], } - - container_name = f"{name}{suffix}" + container_name = f"{namespace}:{name}{suffix}" container = rt.container(name=container_name) - for node in nodes: - node.Parent = container - + import_custom_attribute_data(container, nodes) if not lib.imprint(container_name, data): print(f"imprinting of {container_name} failed.") return container + + +def load_custom_attribute_data(): + """Re-loading the Openpype/AYON custom parameter built by the creator + + Returns: + attribute: re-loading the custom OP attributes set in Maxscript + """ + return rt.Execute(MS_CUSTOM_ATTRIB) + + +def import_custom_attribute_data(container: str, selections: list): + """Importing the Openpype/AYON custom parameter built by the creator + + Args: + container (str): target container which adds custom attributes + selections (list): nodes to be added into + group in custom attributes + """ + attrs = load_custom_attribute_data() + modifier = rt.EmptyModifier() + rt.addModifier(container, modifier) + container.modifiers[0].name = "OP Data" + rt.custAttributes.add(container.modifiers[0], attrs) + node_list = [] + sel_list = [] + for i in selections: + node_ref = rt.NodeTransformMonitor(node=i) + node_list.append(node_ref) + sel_list.append(str(i)) + + # Setting the property + rt.setProperty( + container.modifiers[0].openPypeData, + "all_handles", node_list) + rt.setProperty( + container.modifiers[0].openPypeData, + "sel_list", sel_list) + + +def update_custom_attribute_data(container: str, selections: list): + """Updating the Openpype/AYON custom parameter built by the creator + + Args: + container (str): target container which adds custom attributes + selections (list): nodes to be added into + group in custom attributes + """ + if container.modifiers[0].name == "OP Data": + rt.deleteModifier(container, container.modifiers[0]) + import_custom_attribute_data(container, selections) + + +def get_previous_loaded_object(container: str): + """Get previous loaded_object through the OP data + + Args: + container (str): the container which stores the OP data + + Returns: + node_list(list): list of nodes which are previously loaded + """ + node_list = [] + sel_list = rt.getProperty(container.modifiers[0].openPypeData, "sel_list") + for obj in rt.Objects: + if str(obj) in sel_list: + node_list.append(obj) + return node_list diff --git a/openpype/hosts/max/api/plugin.py b/openpype/hosts/max/api/plugin.py index 14b0653f40..fa6db073db 100644 --- a/openpype/hosts/max/api/plugin.py +++ b/openpype/hosts/max/api/plugin.py @@ -15,6 +15,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" parameters main rollout:OPparams ( all_handles type:#maxObjectTab tabSize:0 tabSizeVariable:on + sel_list type:#stringTab tabSize:0 tabSizeVariable:on ) rollout OPparams "OP Parameters" @@ -30,15 +31,46 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" handle_name = obj_name + "<" + handle as string + ">" return handle_name ) + fn nodes_to_add node = + ( + sceneObjs = #() + if classOf node == Container do return false + n = node as string + for obj in Objects do + ( + tmp_obj = obj as string + append 
sceneObjs tmp_obj + ) + if sel_list != undefined do + ( + for obj in sel_list do + ( + idx = findItem sceneObjs obj + if idx do + ( + deleteItem sceneObjs idx + ) + ) + ) + idx = findItem sceneObjs n + if idx then return true else false + ) + + fn nodes_to_rmv node = + ( + n = node as string + idx = findItem sel_list n + if idx then return true else false + ) on button_add pressed do ( - current_selection = selectByName title:"Select Objects to add to - the Container" buttontext:"Add" - if current_selection == undefined then return False + current_sel = selectByName title:"Select Objects to add to + the Container" buttontext:"Add" filter:nodes_to_add + if current_sel == undefined then return False temp_arr = #() i_node_arr = #() - for c in current_selection do + for c in current_sel do ( handle_name = node_to_name c node_ref = NodeTransformMonitor node:c @@ -46,8 +78,10 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" if idx do ( continue ) + name = c as string append temp_arr handle_name append i_node_arr node_ref + append sel_list name ) all_handles = join i_node_arr all_handles list_node.items = join temp_arr list_node.items @@ -55,18 +89,22 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" on button_del pressed do ( - current_selection = selectByName title:"Select Objects to remove - from the Container" buttontext:"Remove" - if current_selection == undefined then return False + current_sel = selectByName title:"Select Objects to remove + from the Container" buttontext:"Remove" filter: nodes_to_rmv + if current_sel == undefined or current_sel.count == 0 then + ( + return False + ) temp_arr = #() i_node_arr = #() new_i_node_arr = #() new_temp_arr = #() - for c in current_selection do + for c in current_sel do ( node_ref = NodeTransformMonitor node:c as string handle_name = node_to_name c + n = c as string tmp_all_handles = #() for i in all_handles do ( @@ -84,6 +122,11 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" ( new_temp_arr = DeleteItem list_node.items idx ) + idx = finditem sel_list n + if idx do + ( + sel_list = DeleteItem sel_list idx + ) ) all_handles = join i_node_arr new_i_node_arr list_node.items = join temp_arr new_temp_arr @@ -96,6 +139,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" temp_arr = #() for x in all_handles do ( + if x.node == undefined do continue handle_name = node_to_name x.node append temp_arr handle_name ) @@ -145,7 +189,10 @@ class MaxCreatorBase(object): node = rt.Container(name=node) attrs = rt.Execute(MS_CUSTOM_ATTRIB) - rt.custAttributes.add(node.baseObject, attrs) + modifier = rt.EmptyModifier() + rt.addModifier(node, modifier) + node.modifiers[0].name = "OP Data" + rt.custAttributes.add(node.modifiers[0], attrs) return node @@ -169,13 +216,19 @@ class MaxCreator(Creator, MaxCreatorBase): if pre_create_data.get("use_selection"): node_list = [] + sel_list = [] for i in self.selected_nodes: node_ref = rt.NodeTransformMonitor(node=i) node_list.append(node_ref) + sel_list.append(str(i)) # Setting the property rt.setProperty( - instance_node.openPypeData, "all_handles", node_list) + instance_node.modifiers[0].openPypeData, + "all_handles", node_list) + rt.setProperty( + instance_node.modifiers[0].openPypeData, + "sel_list", sel_list) self._add_instance_to_context(instance) imprint(instance_node.name, instance.data_to_store()) @@ -214,8 +267,8 @@ class MaxCreator(Creator, MaxCreatorBase): instance_node = rt.GetNodeByName( instance.data.get("instance_node")) if instance_node: - count = rt.custAttributes.count(instance_node) - 
rt.custAttributes.delete(instance_node, count) + count = rt.custAttributes.count(instance_node.modifiers[0]) + rt.custAttributes.delete(instance_node.modifiers[0], count) rt.Delete(instance_node) self._remove_instance_from_context(instance) diff --git a/openpype/hosts/max/hooks/force_startup_script.py b/openpype/hosts/max/hooks/force_startup_script.py index 4fcf4fef21..5fb8334d4b 100644 --- a/openpype/hosts/max/hooks/force_startup_script.py +++ b/openpype/hosts/max/hooks/force_startup_script.py @@ -1,7 +1,8 @@ # -*- coding: utf-8 -*- """Pre-launch to force 3ds max startup script.""" -from openpype.lib import PreLaunchHook import os +from openpype.hosts.max import MAX_HOST_DIR +from openpype.lib.applications import PreLaunchHook, LaunchTypes class ForceStartupScript(PreLaunchHook): @@ -13,12 +14,14 @@ class ForceStartupScript(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} order = 11 + launch_types = {LaunchTypes.local} def execute(self): startup_args = [ "-U", "MAXScript", - f"{os.getenv('OPENPYPE_ROOT')}\\openpype\\hosts\\max\\startup\\startup.ms"] # noqa + os.path.join(MAX_HOST_DIR, "startup", "startup.ms"), + ] self.launch_context.launch_args.append(startup_args) diff --git a/openpype/hosts/max/hooks/inject_python.py b/openpype/hosts/max/hooks/inject_python.py index d9753ccbd8..e9dddbf710 100644 --- a/openpype/hosts/max/hooks/inject_python.py +++ b/openpype/hosts/max/hooks/inject_python.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """Pre-launch hook to inject python environment.""" -from openpype.lib import PreLaunchHook import os +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InjectPythonPath(PreLaunchHook): @@ -13,7 +13,8 @@ class InjectPythonPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} + launch_types = {LaunchTypes.local} def execute(self): self.launch_context.env["MAX_PYTHONPATH"] = os.environ["PYTHONPATH"] diff --git a/openpype/hosts/max/hooks/set_paths.py b/openpype/hosts/max/hooks/set_paths.py index 3db5306344..4b961fa91e 100644 --- a/openpype/hosts/max/hooks/set_paths.py +++ b/openpype/hosts/max/hooks/set_paths.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class SetPath(PreLaunchHook): @@ -6,7 +6,8 @@ class SetPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. 
""" - app_groups = ["max"] + app_groups = {"max"} + launch_types = {LaunchTypes.local} def execute(self): workdir = self.launch_context.env.get("AVALON_WORKDIR", "") diff --git a/openpype/hosts/max/plugins/create/create_render.py b/openpype/hosts/max/plugins/create/create_render.py index 235046684e..9cc3c8da8a 100644 --- a/openpype/hosts/max/plugins/create/create_render.py +++ b/openpype/hosts/max/plugins/create/create_render.py @@ -14,7 +14,6 @@ class CreateRender(plugin.MaxCreator): def create(self, subset_name, instance_data, pre_create_data): from pymxs import runtime as rt - sel_obj = list(rt.selection) file = rt.maxFileName filename, _ = os.path.splitext(file) instance_data["AssetName"] = filename diff --git a/openpype/hosts/max/plugins/load/load_camera_fbx.py b/openpype/hosts/max/plugins/load/load_camera_fbx.py index c51900dbb7..ce1427a980 100644 --- a/openpype/hosts/max/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/max/plugins/load/load_camera_fbx.py @@ -1,7 +1,16 @@ import os from openpype.hosts.max.api import lib, maintained_selection -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) +from openpype.hosts.max.api.pipeline import ( + containerise, + get_previous_loaded_object, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -16,50 +25,66 @@ class FbxLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - - filepath = os.path.normpath(self.fname) + filepath = self.filepath_from_context(context) + filepath = os.path.normpath(filepath) rt.FBXImporterSetParam("Animation", True) rt.FBXImporterSetParam("Camera", True) rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("Mode", rt.Name("create")) rt.FBXImporterSetParam("Preserveinstances", True) rt.ImportFile( filepath, rt.name("noPrompt"), using=rt.FBXIMP) - container = rt.GetNodeByName(f"{name}") - if not container: - container = rt.Container() - container.name = f"{name}" + namespace = unique_namespace( + name + "_", + suffix="_", + ) + selections = rt.GetCurrentSelection() - for selection in rt.GetCurrentSelection(): - selection.Parent = container + for selection in selections: + selection.name = f"{namespace}:{selection.name}" return containerise( - name, [container], context, loader=self.__class__.__name__) + name, selections, context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) - node = rt.GetNodeByName(container["instance_node"]) - rt.Select(node.Children) - fbx_reimport_cmd = ( - f""" + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + namespace, _ = get_namespace(node_name) -FBXImporterSetParam "Animation" true -FBXImporterSetParam "Cameras" true -FBXImporterSetParam "AxisConversionMethod" true -FbxExporterSetParam "UpAxis" "Y" -FbxExporterSetParam "Preserveinstances" true + node_list = get_previous_loaded_object(node) + rt.Select(node_list) + prev_fbx_objects = rt.GetCurrentSelection() + transform_data = object_transform_set(prev_fbx_objects) + for prev_fbx_obj in prev_fbx_objects: + if rt.isValidNode(prev_fbx_obj): + rt.Delete(prev_fbx_obj) -importFile @"{path}" #noPrompt using:FBXIMP - """) - rt.Execute(fbx_reimport_cmd) - - with maintained_selection(): - rt.Select(node) + rt.FBXImporterSetParam("Animation", True) + 
rt.FBXImporterSetParam("Camera", True) + rt.FBXImporterSetParam("Mode", rt.Name("merge")) + rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("Preserveinstances", True) + rt.ImportFile( + path, rt.name("noPrompt"), using=rt.FBXIMP) + current_fbx_objects = rt.GetCurrentSelection() + fbx_objects = [] + for fbx_object in current_fbx_objects: + fbx_object.name = f"{namespace}:{fbx_object.name}" + fbx_objects.append(fbx_object) + fbx_transform = f"{fbx_object.name}.transform" + if fbx_transform in transform_data.keys(): + fbx_object.pos = transform_data[fbx_transform] or 0 + fbx_object.scale = transform_data[ + f"{fbx_object.name}.scale"] or 0 + update_custom_attribute_data(node, fbx_objects) lib.imprint(container["instance_node"], { "representation": str(representation["_id"]) }) diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index e3fb34f5bc..0b5f0a2858 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ -1,7 +1,15 @@ import os from openpype.hosts.max.api import lib -from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api.lib import ( + unique_namespace, + get_namespace, + object_transform_set +) +from openpype.hosts.max.api.pipeline import ( + containerise, get_previous_loaded_object, + update_custom_attribute_data +) from openpype.pipeline import get_representation_path, load @@ -19,34 +27,63 @@ class MaxSceneLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - path = os.path.normpath(self.fname) + path = self.filepath_from_context(context) + path = os.path.normpath(path) # import the max scene by using "merge file" path = path.replace('\\', '/') - rt.MergeMaxFile(path) + rt.MergeMaxFile(path, quiet=True, includeFullGroup=True) max_objects = rt.getLastMergedNodes() - max_container = rt.Container(name=f"{name}") - for max_object in max_objects: - max_object.Parent = max_container - + max_object_names = [obj.name for obj in max_objects] + # implement the OP/AYON custom attributes before load + max_container = [] + namespace = unique_namespace( + name + "_", + suffix="_", + ) + for max_obj, obj_name in zip(max_objects, max_object_names): + max_obj.name = f"{namespace}:{obj_name}" + max_container.append(rt.getNodeByName(max_obj.name)) return containerise( - name, [max_container], context, loader=self.__class__.__name__) + name, max_container, context, + namespace, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt path = get_representation_path(representation) node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + namespace, _ = get_namespace(node_name) + # delete the old container with attribute + # delete old duplicate + # use the modifier OP data to delete the data + node_list = get_previous_loaded_object(node) + rt.select(node_list) + prev_max_objects = rt.GetCurrentSelection() + transform_data = object_transform_set(prev_max_objects) - rt.MergeMaxFile(path, - rt.Name("noRedraw"), - rt.Name("deleteOldDups"), - rt.Name("useSceneMtlDups")) + for prev_max_obj in prev_max_objects: + if rt.isValidNode(prev_max_obj): # noqa + rt.Delete(prev_max_obj) + rt.MergeMaxFile(path, quiet=True) - max_objects = rt.getLastMergedNodes() - container_node = rt.GetNodeByName(node_name) - for max_object in max_objects: - max_object.Parent = container_node + 
current_max_objects = rt.getLastMergedNodes()
+        current_max_object_names = [obj.name for obj
+                                    in current_max_objects]
+
+        max_objects = []
+        for max_obj, obj_name in zip(current_max_objects,
+                                     current_max_object_names):
+            max_obj.name = f"{namespace}:{obj_name}"
+            max_objects.append(max_obj)
+            max_transform = f"{max_obj.name}.transform"
+            if max_transform in transform_data.keys():
+                max_obj.pos = transform_data[max_transform] or 0
+                max_obj.scale = transform_data[
+                    f"{max_obj.name}.scale"] or 0
+
+        update_custom_attribute_data(node, max_objects)
         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])
         })
diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py
index 58c6d3c889..c41608c860 100644
--- a/openpype/hosts/max/plugins/load/load_model.py
+++ b/openpype/hosts/max/plugins/load/load_model.py
@@ -1,15 +1,20 @@
 import os
 
 from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object
+)
 from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.lib import maintained_selection
+from openpype.hosts.max.api.lib import (
+    maintained_selection, unique_namespace
+)
 
 
 class ModelAbcLoader(load.LoaderPlugin):
     """Loading model with the Alembic loader."""
 
     families = ["model"]
-    label = "Load Model(Alembic)"
+    label = "Load Model with Alembic"
     representations = ["abc"]
     order = -10
     icon = "code-fork"
@@ -18,7 +23,7 @@ class ModelAbcLoader(load.LoaderPlugin):
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
 
-        file_path = os.path.normpath(self.fname)
+        file_path = os.path.normpath(self.filepath_from_context(context))
 
         abc_before = {
             c
@@ -30,7 +35,7 @@ class ModelAbcLoader(load.LoaderPlugin):
         rt.AlembicImport.CustomAttributes = True
         rt.AlembicImport.UVs = True
         rt.AlembicImport.VertexColors = True
-        rt.importFile(file_path, rt.name("noPrompt"))
+        rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
 
         abc_after = {
             c
@@ -46,8 +51,22 @@ class ModelAbcLoader(load.LoaderPlugin):
 
         abc_container = abc_containers.pop()
 
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        abc_objects = []
+        for abc_object in abc_container.Children:
+            abc_object.name = f"{namespace}:{abc_object.name}"
+            abc_objects.append(abc_object)
+        # rename the abc container with namespace
+        abc_container_name = f"{namespace}:{name}"
+        abc_container.name = abc_container_name
+        abc_objects.append(abc_container)
+
         return containerise(
-            name, [abc_container], context, loader=self.__class__.__name__
+            name, abc_objects, context,
+            namespace, loader=self.__class__.__name__
         )
 
     def update(self, container, representation):
@@ -55,22 +74,19 @@
         path = get_representation_path(representation)
         node = rt.GetNodeByName(container["instance_node"])
-        rt.Select(node.Children)
-
-        for alembic in rt.Selection:
-            abc = rt.GetNodeByName(alembic.name)
-            rt.Select(abc.Children)
-            for abc_con in rt.Selection:
-                container = rt.GetNodeByName(abc_con.name)
-                container.source = path
-                rt.Select(container.Children)
-                for abc_obj in rt.Selection:
-                    alembic_obj = rt.GetNodeByName(abc_obj.name)
-                    alembic_obj.source = path
-
+        node_list = [n for n in get_previous_loaded_object(node)
+                     if rt.ClassOf(n) == rt.AlembicContainer]
         with maintained_selection():
-            rt.Select(node)
+            rt.Select(node_list)
+            for alembic in rt.Selection:
+                abc = rt.GetNodeByName(alembic.name)
+                rt.Select(abc.Children)
+                for abc_con in abc.Children:
+                    abc_con.source = path
+                    rt.Select(abc_con.Children)
+                    for abc_obj in abc_con.Children:
+                        abc_obj.source = path
 
         lib.imprint(
             container["instance_node"],
             {"representation": str(representation["_id"])},
diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py
index 663f79f9f5..71fc382eed 100644
--- a/openpype/hosts/max/plugins/load/load_model_fbx.py
+++ b/openpype/hosts/max/plugins/load/load_model_fbx.py
@@ -1,7 +1,15 @@
 import os
 
 from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise, get_previous_loaded_object,
+    update_custom_attribute_data
+)
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set
+)
 from openpype.hosts.max.api.lib import maintained_selection
 
 
@@ -16,45 +24,68 @@ class FbxModelLoader(load.LoaderPlugin):
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
-
-        filepath = os.path.normpath(self.fname)
+        filepath = self.filepath_from_context(context)
+        filepath = os.path.normpath(filepath)
 
         rt.FBXImporterSetParam("Animation", False)
         rt.FBXImporterSetParam("Cameras", False)
+        rt.FBXImporterSetParam("Mode", rt.Name("create"))
         rt.FBXImporterSetParam("Preserveinstances", True)
-        rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP)
+        rt.importFile(
+            filepath, rt.name("noPrompt"), using=rt.FBXIMP)
 
-        container = rt.GetNodeByName(name)
-        if not container:
-            container = rt.Container()
-            container.name = name
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        selections = rt.GetCurrentSelection()
 
-        for selection in rt.GetCurrentSelection():
-            selection.Parent = container
+        for selection in selections:
+            selection.name = f"{namespace}:{selection.name}"
 
         return containerise(
-            name, [container], context, loader=self.__class__.__name__
-        )
+            name, selections, context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
+
         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        rt.select(node.Children)
+        node_name = container["instance_node"]
+        node = rt.getNodeByName(node_name)
+        if not node:
+            rt.Container(name=node_name)
+        namespace, _ = get_namespace(node_name)
+
+        node_list = get_previous_loaded_object(node)
+        rt.Select(node_list)
+        prev_fbx_objects = rt.GetCurrentSelection()
+        transform_data = object_transform_set(prev_fbx_objects)
+        for prev_fbx_obj in prev_fbx_objects:
+            if rt.isValidNode(prev_fbx_obj):
+                rt.Delete(prev_fbx_obj)
 
         rt.FBXImporterSetParam("Animation", False)
         rt.FBXImporterSetParam("Cameras", False)
-        rt.FBXImporterSetParam("AxisConversionMethod", True)
-        rt.FBXImporterSetParam("UpAxis", "Y")
+        rt.FBXImporterSetParam("Mode", rt.Name("create"))
         rt.FBXImporterSetParam("Preserveinstances", True)
         rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP)
+        current_fbx_objects = rt.GetCurrentSelection()
+        fbx_objects = []
+        for fbx_object in current_fbx_objects:
+            fbx_object.name = f"{namespace}:{fbx_object.name}"
+            fbx_objects.append(fbx_object)
+            fbx_transform = f"{fbx_object.name}.transform"
+            if fbx_transform in transform_data.keys():
+                fbx_object.pos = transform_data[fbx_transform] or 0
+                fbx_object.scale = transform_data[
+                    f"{fbx_object.name}.scale"] or 0
 
         with maintained_selection():
             rt.Select(node)
-
-        lib.imprint(
-            container["instance_node"],
-            {"representation": str(representation["_id"])},
-        )
+        update_custom_attribute_data(node, fbx_objects)
+        lib.imprint(container["instance_node"], {
+            "representation": str(representation["_id"])
+        })
 
     def switch(self, container, representation):
         self.update(container, representation)
diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py
index 77d4e08cfb..aedb288a2d 100644
--- a/openpype/hosts/max/plugins/load/load_model_obj.py
+++ b/openpype/hosts/max/plugins/load/load_model_obj.py
@@ -1,8 +1,18 @@
 import os
 
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    maintained_selection,
+    object_transform_set
+)
 from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 
@@ -18,40 +28,50 @@ class ObjLoader(load.LoaderPlugin):
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
 
-        filepath = os.path.normpath(self.fname)
+        filepath = os.path.normpath(self.filepath_from_context(context))
         self.log.debug("Executing command to import..")
 
         rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
+
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
         # create "missing" container for obj import
-        container = rt.Container()
-        container.name = name
-
+        selections = rt.GetCurrentSelection()
         # get current selection
-        for selection in rt.GetCurrentSelection():
-            selection.Parent = container
-
-        asset = rt.GetNodeByName(name)
-
+        for selection in selections:
+            selection.name = f"{namespace}:{selection.name}"
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, selections, context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
 
         path = get_representation_path(representation)
         node_name = container["instance_node"]
-        node = rt.GetNodeByName(node_name)
-
-        instance_name, _ = node_name.split("_")
-        container = rt.GetNodeByName(instance_name)
-        for child in container.Children:
-            rt.Delete(child)
+        node = rt.getNodeByName(node_name)
+        namespace, _ = get_namespace(node_name)
+        node_list = get_previous_loaded_object(node)
+        rt.Select(node_list)
+        previous_objects = rt.GetCurrentSelection()
+        transform_data = object_transform_set(previous_objects)
+        for prev_obj in previous_objects:
+            if rt.isValidNode(prev_obj):
+                rt.Delete(prev_obj)
 
         rt.Execute(f'importFile @"{path}" #noPrompt using:ObjImp')
 
         # get current selection
-        for selection in rt.GetCurrentSelection():
-            selection.Parent = container
-
+        selections = rt.GetCurrentSelection()
+        for selection in selections:
+            selection.name = f"{namespace}:{selection.name}"
+            selection_transform = f"{selection.name}.transform"
+            if selection_transform in transform_data.keys():
+                selection.pos = transform_data[selection_transform] or 0
+                selection.scale = transform_data[
+                    f"{selection.name}.scale"] or 0
+        update_custom_attribute_data(node, selections)
         with maintained_selection():
             rt.Select(node)
diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py
index 2b34669278..bce4bd4a9a 100644
--- a/openpype/hosts/max/plugins/load/load_model_usd.py
+++ b/openpype/hosts/max/plugins/load/load_model_usd.py
@@ -1,8 +1,20 @@
 import os
 
+from pymxs import runtime as rt
+
+from openpype.pipeline.load import LoadError
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set,
+    get_plugins
+)
 from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 
@@ -17,36 +29,54 @@ class ModelUSDLoader(load.LoaderPlugin):
     color = "orange"
 
     def load(self, context, name=None, namespace=None, data=None):
-        from pymxs import runtime as rt
-        # asset_filepath
-        filepath = os.path.normpath(self.fname)
+        plugin_info = get_plugins()
+        if "usdimport.dli" not in plugin_info:
+            raise LoadError("No USDImporter loaded/installed in Max..")
+        filepath = os.path.normpath(self.filepath_from_context(context))
         import_options = rt.USDImporter.CreateOptions()
         base_filename = os.path.basename(filepath)
-        filename, ext = os.path.splitext(base_filename)
+        _, ext = os.path.splitext(base_filename)
         log_filepath = filepath.replace(ext, "txt")
 
         rt.LogPath = log_filepath
         rt.LogLevel = rt.Name("info")
         rt.USDImporter.importFile(filepath, importOptions=import_options)
-
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
         asset = rt.GetNodeByName(name)
+        usd_objects = []
+
+        for usd_asset in asset.Children:
+            usd_asset.name = f"{namespace}:{usd_asset.name}"
+            usd_objects.append(usd_asset)
+
+        asset_name = f"{namespace}:{name}"
+        asset.name = asset_name
+        # need to get the correct container after renamed
+        asset = rt.GetNodeByName(asset_name)
+        usd_objects.append(asset)
 
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, usd_objects, context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
-        from pymxs import runtime as rt
-
         path = get_representation_path(representation)
         node_name = container["instance_node"]
         node = rt.GetNodeByName(node_name)
-        for n in node.Children:
-            for r in n.Children:
-                rt.Delete(r)
+        namespace, name = get_namespace(node_name)
+        node_list = get_previous_loaded_object(node)
+        rt.Select(node_list)
+        prev_objects = [sel for sel in rt.GetCurrentSelection()
+                        if sel != rt.Container
+                        and sel.name != node_name]
+        transform_data = object_transform_set(prev_objects)
+        for n in prev_objects:
             rt.Delete(n)
-        instance_name, _ = node_name.split("_")
 
         import_options = rt.USDImporter.CreateOptions()
         base_filename = os.path.basename(path)
@@ -55,12 +85,23 @@ class ModelUSDLoader(load.LoaderPlugin):
 
         rt.LogPath = log_filepath
         rt.LogLevel = rt.Name("info")
-        rt.USDImporter.importFile(path,
-                                  importOptions=import_options)
+        rt.USDImporter.importFile(
+            path, importOptions=import_options)
 
-        asset = rt.GetNodeByName(instance_name)
-        asset.Parent = node
+        asset = rt.GetNodeByName(name)
+        usd_objects = []
+        for children in asset.Children:
+            children.name = f"{namespace}:{children.name}"
+            usd_objects.append(children)
+            children_transform = f"{children.name}.transform"
+            if children_transform in transform_data.keys():
+                children.pos = transform_data[children_transform] or 0
+                children.scale = transform_data[
+                    f"{children.name}.scale"] or 0
+        asset.name = f"{namespace}:{asset.name}"
+        usd_objects.append(asset)
+        update_custom_attribute_data(node, usd_objects)
 
         with maintained_selection():
             rt.Select(node)
@@ -72,7 +113,5 @@ class ModelUSDLoader(load.LoaderPlugin):
         self.update(container, representation)
 
     def remove(self, container):
-        from pymxs import runtime as rt
-
         node = rt.GetNodeByName(container["instance_node"])
         rt.Delete(node)
diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py
index cadbe7cac2..3c2dfe8c25 100644
--- a/openpype/hosts/max/plugins/load/load_pointcache.py
+++ b/openpype/hosts/max/plugins/load/load_pointcache.py
@@ -7,7 +7,11 @@ Because of limited api, alembics can be only loaded, but not easily updated.
 import os
 from openpype.pipeline import load, get_representation_path
 from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import unique_namespace
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object
+)
 
 
 class AbcLoader(load.LoaderPlugin):
@@ -23,7 +27,8 @@ class AbcLoader(load.LoaderPlugin):
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
 
-        file_path = os.path.normpath(self.fname)
+        file_path = self.filepath_from_context(context)
+        file_path = os.path.normpath(file_path)
 
         abc_before = {
             c
@@ -32,7 +37,7 @@ class AbcLoader(load.LoaderPlugin):
         }
 
         rt.AlembicImport.ImportToRoot = False
-        rt.importFile(file_path, rt.name("noPrompt"))
+        rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
 
         abc_after = {
             c
@@ -47,13 +52,27 @@ class AbcLoader(load.LoaderPlugin):
             self.log.error("Something failed when loading.")
 
         abc_container = abc_containers.pop()
-
-        for abc in rt.GetCurrentSelection():
+        selections = rt.GetCurrentSelection()
+        for abc in selections:
             for cam_shape in abc.Children:
-                cam_shape.playbackType = 2
+                cam_shape.playbackType = 0
+
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        abc_objects = []
+        for abc_object in abc_container.Children:
+            abc_object.name = f"{namespace}:{abc_object.name}"
+            abc_objects.append(abc_object)
+        # rename the abc container with namespace
+        abc_container_name = f"{namespace}:{name}"
+        abc_container.name = abc_container_name
+        abc_objects.append(abc_container)
 
         return containerise(
-            name, [abc_container], context, loader=self.__class__.__name__
+            name, abc_objects, context,
+            namespace, loader=self.__class__.__name__
        )
 
     def update(self, container, representation):
@@ -61,29 +80,23 @@
         path = get_representation_path(representation)
         node = rt.GetNodeByName(container["instance_node"])
-
-        alembic_objects = self.get_container_children(node, "AlembicObject")
-        for alembic_object in alembic_objects:
-            alembic_object.source = path
-
-        lib.imprint(
-            container["instance_node"],
-            {"representation": str(representation["_id"])},
-        )
-
+        abc_container = [n for n in get_previous_loaded_object(node)
+                         if rt.ClassOf(n) == rt.AlembicContainer]
         with maintained_selection():
-            rt.Select(node.Children)
+            rt.Select(abc_container)
 
             for alembic in rt.Selection:
                 abc = rt.GetNodeByName(alembic.name)
                 rt.Select(abc.Children)
-                for abc_con in rt.Selection:
-                    container = rt.GetNodeByName(abc_con.name)
-                    container.source = path
-                    rt.Select(container.Children)
-                    for abc_obj in rt.Selection:
-                        alembic_obj = rt.GetNodeByName(abc_obj.name)
-                        alembic_obj.source = path
+                for abc_con in abc.Children:
+                    abc_con.source = path
+                    rt.Select(abc_con.Children)
+                    for abc_obj in abc_con.Children:
+                        abc_obj.source = path
+        lib.imprint(
+            container["instance_node"],
+            {"representation": str(representation["_id"])},
+        )
 
     def switch(self, container, representation):
         self.update(container, representation)
diff --git a/openpype/hosts/max/plugins/load/load_pointcache_ornatrix.py b/openpype/hosts/max/plugins/load/load_pointcache_ornatrix.py
new file mode 100644
index 0000000000..96060a6a6f
--- /dev/null
+++ b/openpype/hosts/max/plugins/load/load_pointcache_ornatrix.py
@@ -0,0 +1,108 @@
+import os
+from openpype.pipeline import load, get_representation_path
+from openpype.pipeline.load import LoadError
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object,
+    update_custom_attribute_data
+)
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set,
+    get_plugins
+)
+from openpype.hosts.max.api import lib
+from pymxs import runtime as rt
+
+
+class OxAbcLoader(load.LoaderPlugin):
+    """Ornatrix Alembic loader."""
+
+    families = ["camera", "animation", "pointcache"]
+    label = "Load Alembic with Ornatrix"
+    representations = ["abc"]
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+    postfix = "param"
+
+    def load(self, context, name=None, namespace=None, data=None):
+        plugin_list = get_plugins()
+        if "ephere.plugins.autodesk.max.ornatrix.dlo" not in plugin_list:
+            raise LoadError("Ornatrix plugin not "
+                            "found/installed in Max yet..")
+
+        file_path = os.path.normpath(self.filepath_from_context(context))
+        rt.AlembicImport.ImportToRoot = True
+        rt.AlembicImport.CustomAttributes = True
+        rt.importFile(
+            file_path, rt.name("noPrompt"),
+            using=rt.Ornatrix_Alembic_Importer)
+
+        scene_object = []
+        for obj in rt.rootNode.Children:
+            obj_type = rt.ClassOf(obj)
+            if str(obj_type).startswith("Ox_"):
+                scene_object.append(obj)
+
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        abc_container = []
+        for abc in scene_object:
+            abc.name = f"{namespace}:{abc.name}"
+            abc_container.append(abc)
+
+        return containerise(
+            name, abc_container, context,
+            namespace, loader=self.__class__.__name__
+        )
+
+    def update(self, container, representation):
+        path = get_representation_path(representation)
+        node_name = container["instance_node"]
+        namespace, name = get_namespace(node_name)
+        node = rt.getNodeByName(node_name)
+        node_list = get_previous_loaded_object(node)
+        rt.Select(node_list)
+        selections = rt.getCurrentSelection()
+        transform_data = object_transform_set(selections)
+        for prev_obj in selections:
+            if rt.isValidNode(prev_obj):
+                rt.Delete(prev_obj)
+
+        rt.AlembicImport.ImportToRoot = False
+        rt.AlembicImport.CustomAttributes = True
+        rt.importFile(
+            path, rt.name("noPrompt"),
+            using=rt.Ornatrix_Alembic_Importer)
+
+        scene_object = []
+        for obj in rt.rootNode.Children:
+            obj_type = rt.ClassOf(obj)
+            if str(obj_type).startswith("Ox_"):
+                scene_object.append(obj)
+        ox_abc_objects = []
+        for abc in scene_object:
+            # parent the loaded nodes under the container node
+            abc.Parent = node
+            abc.name = f"{namespace}:{abc.name}"
+            ox_abc_objects.append(abc)
+            ox_transform = f"{abc.name}.transform"
+            if ox_transform in transform_data.keys():
+                abc.pos = transform_data[ox_transform] or 0
+                abc.scale = transform_data[f"{abc.name}.scale"] or 0
+        update_custom_attribute_data(node, ox_abc_objects)
+        lib.imprint(
+            container["instance_node"],
+            {"representation": str(representation["_id"])},
+        )
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def remove(self, container):
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)
diff --git a/openpype/hosts/max/plugins/load/load_pointcloud.py b/openpype/hosts/max/plugins/load/load_pointcloud.py
index 8634e1d51f..e0317a2e22 100644
--- a/openpype/hosts/max/plugins/load/load_pointcloud.py
+++ b/openpype/hosts/max/plugins/load/load_pointcloud.py
@@ -1,7 +1,15 @@
 import os
 
 from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+
+)
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    get_previous_loaded_object,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 
@@ -13,19 +21,24 @@ class PointCloudLoader(load.LoaderPlugin):
     order = -8
     icon = "code-fork"
     color = "green"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         """load point cloud by tyCache"""
         from pymxs import runtime as rt
-
-        filepath = os.path.normpath(self.fname)
+        filepath = os.path.normpath(self.filepath_from_context(context))
         obj = rt.tyCache()
         obj.filename = filepath
 
-        prt_container = rt.GetNodeByName(obj.name)
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        obj.name = f"{namespace}:{obj.name}"
 
         return containerise(
-            name, [prt_container], context, loader=self.__class__.__name__)
+            name, [obj], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         """update the container"""
@@ -33,15 +46,16 @@ class PointCloudLoader(load.LoaderPlugin):
 
         path = get_representation_path(representation)
         node = rt.GetNodeByName(container["instance_node"])
+        node_list = get_previous_loaded_object(node)
+        update_custom_attribute_data(
+            node, node_list)
         with maintained_selection():
-            rt.Select(node.Children)
+            rt.Select(node_list)
 
             for prt in rt.Selection:
-                prt_object = rt.GetNodeByName(prt.name)
-                prt_object.filename = path
-
-        lib.imprint(container["instance_node"], {
-            "representation": str(representation["_id"])
-        })
+                prt.filename = path
+            lib.imprint(container["instance_node"], {
+                "representation": str(representation["_id"])
+            })
 
     def switch(self, container, representation):
         self.update(container, representation)
diff --git a/openpype/hosts/max/plugins/load/load_redshift_proxy.py b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
index 31692f6367..daf6d3e169 100644
--- a/openpype/hosts/max/plugins/load/load_redshift_proxy.py
+++ b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
@@ -5,8 +5,17 @@ from openpype.pipeline import (
     load,
     get_representation_path
 )
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.pipeline.load import LoadError
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    update_custom_attribute_data,
+    get_previous_loaded_object
+)
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_plugins
+)
 
 
 class RedshiftProxyLoader(load.LoaderPlugin):
@@ -21,7 +30,9 @@ class RedshiftProxyLoader(load.LoaderPlugin):
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
-
+        plugin_info = get_plugins()
+        if "redshift4max.dlr" not in plugin_info:
+            raise LoadError("Redshift not loaded/installed in Max..")
         filepath = self.filepath_from_context(context)
         rs_proxy = rt.RedshiftProxy()
         rs_proxy.file = filepath
@@ -30,24 +41,27 @@ class RedshiftProxyLoader(load.LoaderPlugin):
         if collections:
             rs_proxy.is_sequence = True
 
-        container = rt.container()
-        container.name = name
-        rs_proxy.Parent = container
-
-        asset = rt.getNodeByName(name)
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        rs_proxy.name = f"{namespace}:{rs_proxy.name}"
 
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, [rs_proxy], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
 
         path = get_representation_path(representation)
         node = rt.getNodeByName(container["instance_node"])
-        for children in node.Children:
-            children_node = rt.getNodeByName(children.name)
-            for proxy in children_node.Children:
-                proxy.file = path
+        node_list = get_previous_loaded_object(node)
+        rt.Select(node_list)
+        update_custom_attribute_data(
+            node, rt.Selection)
+        for proxy in rt.Selection:
+            proxy.file = path
 
         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])
diff --git a/openpype/hosts/max/plugins/publish/collect_members.py b/openpype/hosts/max/plugins/publish/collect_members.py
index 812d82ff26..2970cf0e24 100644
--- a/openpype/hosts/max/plugins/publish/collect_members.py
+++ b/openpype/hosts/max/plugins/publish/collect_members.py
@@ -17,6 +17,6 @@ class CollectMembers(pyblish.api.InstancePlugin):
         container = rt.GetNodeByName(instance.data["instance_node"])
         instance.data["members"] = [
             member.node for member
-            in container.openPypeData.all_handles
+            in container.modifiers[0].openPypeData.all_handles
         ]
         self.log.debug("{}".format(instance.data["members"]))
diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py
index db5c84fad9..a359e61921 100644
--- a/openpype/hosts/max/plugins/publish/collect_render.py
+++ b/openpype/hosts/max/plugins/publish/collect_render.py
@@ -30,10 +30,18 @@
         asset = get_current_asset_name()
 
         files_by_aov = RenderProducts().get_beauty(instance.name)
-        folder = folder.replace("\\", "/")
         aovs = RenderProducts().get_aovs(instance.name)
         files_by_aov.update(aovs)
 
+        camera = rt.viewport.GetCamera()
+        if instance.data.get("members"):
+            camera_list = [member for member in instance.data["members"]
+                           if rt.ClassOf(member) == rt.Camera.Classes]
+            if camera_list:
+                camera = camera_list[-1]
+
+        instance.data["cameras"] = [camera.name] if camera else None  # noqa
+
         if "expectedFiles" not in instance.data:
             instance.data["expectedFiles"] = list()
             instance.data["files"] = list()
@@ -61,6 +69,17 @@
             instance.data["colorspaceConfig"] = ""
             instance.data["colorspaceDisplay"] = "sRGB"
             instance.data["colorspaceView"] = "ACES 1.0 SDR-video"
+
+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         instance.data["renderProducts"] = colorspace.ARenderProduct()
         instance.data["publishJobState"] = "Suspended"
         instance.data["attachTo"] = []
diff --git a/openpype/hosts/max/plugins/publish/collect_review.py b/openpype/hosts/max/plugins/publish/collect_review.py
index 7aeb45f46b..8e27a857d7 100644
--- a/openpype/hosts/max/plugins/publish/collect_review.py
+++ b/openpype/hosts/max/plugins/publish/collect_review.py
@@ -4,6 +4,7 @@ import pyblish.api
 from pymxs import runtime as rt
 
 from openpype.lib import BoolDef
+from openpype.hosts.max.api.lib import get_max_version
 from openpype.pipeline.publish import OpenPypePyblishPluginMixin
 
 
@@ -43,6 +44,17 @@
             "dspSafeFrame": attr_values.get("dspSafeFrame"),
             "dspFrameNums": attr_values.get("dspFrameNums")
         }
+
+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         # Enable ftrack functionality
         instance.data.setdefault("families", []).append('ftrack')
 
@@ -54,7 +66,6 @@
 
     @classmethod
     def get_attribute_defs(cls):
-
         return [
             BoolDef("dspGeometry",
                     label="Geometry",
diff --git a/openpype/hosts/max/plugins/publish/collect_workfile.py b/openpype/hosts/max/plugins/publish/collect_workfile.py
index 3500b2735c..0eb4bb731e 100644
--- a/openpype/hosts/max/plugins/publish/collect_workfile.py
+++ b/openpype/hosts/max/plugins/publish/collect_workfile.py
@@ -4,7 +4,6 @@ import os
 import pyblish.api
 
 from pymxs import runtime as rt
-from openpype.pipeline import legacy_io
 
 
 class CollectWorkfile(pyblish.api.ContextPlugin):
@@ -26,7 +25,7 @@
 
         filename, ext = os.path.splitext(file)
 
-        task = legacy_io.Session["AVALON_TASK"]
+        task = context.data["task"]
 
         data = {}
 
@@ -36,7 +35,7 @@
 
         data.update({
             "subset": subset,
-            "asset": os.getenv("AVALON_ASSET", None),
+            "asset": context.data["asset"],
             "label": subset,
             "publish": True,
             "family": 'workfile',
diff --git a/openpype/hosts/max/plugins/publish/extract_camera_abc.py b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
index b42732e70d..b1918c53e0 100644
--- a/openpype/hosts/max/plugins/publish/extract_camera_abc.py
+++ b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
@@ -22,8 +22,6 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
         start = float(instance.data.get("frameStartHandle", 1))
         end = float(instance.data.get("frameEndHandle", 1))
 
-        container = instance.data["instance_node"]
-
         self.log.info("Extracting Camera ...")
 
         stagingdir = self.staging_dir(instance)
diff --git a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
index 06ac3da093..537c88eb4d 100644
--- a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
+++ b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
@@ -19,9 +19,8 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
     def process(self, instance):
         if not self.is_active(instance.data):
             return
-        container = instance.data["instance_node"]
 
-        self.log.info("Extracting Camera ...")
+        self.log.debug("Extracting Camera ...")
 
         stagingdir = self.staging_dir(instance)
         filename = "{name}.fbx".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
index de5db9ab56..a7a889c587 100644
--- a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
+++ b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
@@ -18,10 +18,9 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
     def process(self, instance):
         if not self.is_active(instance.data):
             return
-        container = instance.data["instance_node"]
 
         # publish the raw scene for camera
-        self.log.info("Extracting Raw Max Scene ...")
+        self.log.debug("Extracting Raw Max Scene ...")
 
         stagingdir = self.staging_dir(instance)
         filename = "{name}.max".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model.py b/openpype/hosts/max/plugins/publish/extract_model.py
index c7ecf7efc9..38f4848c5e 100644
--- a/openpype/hosts/max/plugins/publish/extract_model.py
+++ b/openpype/hosts/max/plugins/publish/extract_model.py
@@ -20,9 +20,7 @@ class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
         if not self.is_active(instance.data):
             return
 
-        container = instance.data["instance_node"]
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")
 
         stagingdir = self.staging_dir(instance)
         filename = "{name}.abc".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model_fbx.py b/openpype/hosts/max/plugins/publish/extract_model_fbx.py
index 56c2cadd94..fd48ed5007 100644
--- a/openpype/hosts/max/plugins/publish/extract_model_fbx.py
+++ b/openpype/hosts/max/plugins/publish/extract_model_fbx.py
@@ -20,10 +20,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
         if not self.is_active(instance.data):
             return
 
-        container = instance.data["instance_node"]
-
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")
 
         stagingdir = self.staging_dir(instance)
         filename = "{name}.fbx".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model_obj.py b/openpype/hosts/max/plugins/publish/extract_model_obj.py
index 4fde65cf22..a5d9ad6597 100644
--- a/openpype/hosts/max/plugins/publish/extract_model_obj.py
+++ b/openpype/hosts/max/plugins/publish/extract_model_obj.py
@@ -3,6 +3,7 @@ import pyblish.api
 
 from openpype.pipeline import publish, OptionalPyblishPluginMixin
 from pymxs import runtime as rt
 from openpype.hosts.max.api import maintained_selection
+from openpype.pipeline.publish import KnownPublishError
 
 
 class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
@@ -20,15 +21,14 @@
         if not self.is_active(instance.data):
             return
 
-        container = instance.data["instance_node"]
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")
 
         stagingdir = self.staging_dir(instance)
         filename = "{name}.obj".format(**instance.data)
         filepath = os.path.join(stagingdir, filename)
         self.log.info("Writing OBJ '%s' to '%s'" % (filepath, stagingdir))
 
+        self.log.info("Performing Extraction ...")
         with maintained_selection():
             # select and export
             node_list = instance.data["members"]
@@ -40,7 +40,10 @@
                 using=rt.ObjExp,
             )
 
-        self.log.info("Performing Extraction ...")
+        if not os.path.exists(filepath):
+            raise KnownPublishError(
+                "File {} wasn't produced by 3ds max, "
+                "please check the logs.".format(filepath))
+
         if "representations" not in instance.data:
             instance.data["representations"] = []
diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py
index 6d1e8d03b4..c3de623bc0 100644
--- a/openpype/hosts/max/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py
@@ -54,9 +54,7 @@ class ExtractAlembic(publish.Extractor):
         start = float(instance.data.get("frameStartHandle", 1))
         end = float(instance.data.get("frameEndHandle", 1))
 
-        container = instance.data["instance_node"]
-
-        self.log.info("Extracting pointcache ...")
+        self.log.debug("Extracting pointcache ...")
 
         parent_dir = self.staging_dir(instance)
         file_name = "{name}.abc".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_pointcloud.py b/openpype/hosts/max/plugins/publish/extract_pointcloud.py
index 583bbb6dbd..190f049d23 100644
--- a/openpype/hosts/max/plugins/publish/extract_pointcloud.py
+++ b/openpype/hosts/max/plugins/publish/extract_pointcloud.py
@@ -36,6 +36,7 @@ class ExtractPointCloud(publish.Extractor):
     label = "Extract Point Cloud"
     hosts = ["max"]
     families = ["pointcloud"]
+    settings = []
 
     def process(self, instance):
         self.settings = self.get_setting(instance)
diff --git a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
index ab569ecbcb..f67ed30c6b 100644
--- a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
@@ -16,11 +16,10 @@ class ExtractRedshiftProxy(publish.Extractor):
     families = ["redshiftproxy"]
 
     def process(self, instance):
-        container = instance.data["instance_node"]
         start = int(instance.context.data.get("frameStart"))
         end = int(instance.context.data.get("frameEnd"))
 
-        self.log.info("Extracting Redshift Proxy...")
+        self.log.debug("Extracting Redshift Proxy...")
         stagingdir = self.staging_dir(instance)
         rs_filename = "{name}.rs".format(**instance.data)
         rs_filepath = os.path.join(stagingdir, rs_filename)
diff --git a/openpype/hosts/max/plugins/publish/validate_no_max_content.py b/openpype/hosts/max/plugins/publish/validate_instance_has_members.py
similarity index 52%
rename from openpype/hosts/max/plugins/publish/validate_no_max_content.py
rename to openpype/hosts/max/plugins/publish/validate_instance_has_members.py
index c6a27dace3..3c0039d5e0 100644
--- a/openpype/hosts/max/plugins/publish/validate_no_max_content.py
+++ b/openpype/hosts/max/plugins/publish/validate_instance_has_members.py
@@ -1,22 +1,24 @@
 # -*- coding: utf-8 -*-
 import pyblish.api
 from openpype.pipeline import PublishValidationError
-from pymxs import runtime as rt
 
 
-class ValidateMaxContents(pyblish.api.InstancePlugin):
-    """Validates Max contents.
+class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
+    """Validates instance has members.
 
-    Check if MaxScene container includes any contents underneath.
+    Check if MaxScene containers include any content underneath.
     """
 
     order = pyblish.api.ValidatorOrder
     families = ["camera",
+                "model",
                 "maxScene",
-                "maxrender",
-                "review"]
+                "review",
+                "pointcache",
+                "pointcloud",
+                "redshiftproxy"]
     hosts = ["max"]
-    label = "Max Scene Contents"
+    label = "Container Contents"
 
     def process(self, instance):
         if not instance.data["members"]:
diff --git a/openpype/hosts/max/plugins/publish/validate_pointcloud.py b/openpype/hosts/max/plugins/publish/validate_pointcloud.py
index 1ff6eb126f..a336cbd80c 100644
--- a/openpype/hosts/max/plugins/publish/validate_pointcloud.py
+++ b/openpype/hosts/max/plugins/publish/validate_pointcloud.py
@@ -1,15 +1,6 @@
 import pyblish.api
 from openpype.pipeline import PublishValidationError
 from pymxs import runtime as rt
-from openpype.settings import get_project_settings
-from openpype.pipeline import legacy_io
-
-
-def get_setting(project_setting=None):
-    project_setting = get_project_settings(
-        legacy_io.Session["AVALON_PROJECT"]
-    )
-    return project_setting["max"]["PointCloud"]
 
 
 class ValidatePointCloud(pyblish.api.InstancePlugin):
@@ -108,6 +99,9 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
             f"Validating tyFlow custom attributes for {container}")
 
         selection_list = instance.data["members"]
+
+        project_settings = instance.context.data["project_settings"]
+        attr_settings = project_settings["max"]["PointCloud"]["attribute"]
         for sel in selection_list:
             obj = sel.baseobject
             anim_names = rt.GetSubAnimNames(obj)
@@ -118,8 +112,7 @@
                 event_name = sub_anim.name
                 opt = "${0}.{1}.export_particles".format(sel.name,
                                                          event_name)
-                attributes = get_setting()["attribute"]
-                for key, value in attributes.items():
+                for key, value in attr_settings.items():
                     custom_attr = "{0}.PRTChannels_{1}".format(opt,
                                                                value)
                     try:
diff --git a/openpype/hosts/max/plugins/publish/validate_renderable_camera.py b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
new file mode 100644
index 0000000000..61321661b5
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import (
+    PublishValidationError,
+    OptionalPyblishPluginMixin)
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.max.api.lib import get_current_renderer
+
+from pymxs import runtime as rt
+
+
+class ValidateRenderableCamera(pyblish.api.InstancePlugin,
+                               OptionalPyblishPluginMixin):
+    """Validates Renderable Camera.
+
+    Check if a renderable camera is used for rendering.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    families = ["maxrender"]
+    hosts = ["max"]
+    label = "Renderable Camera"
+    optional = True
+    actions = [RepairAction]
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+        if not instance.data["cameras"]:
+            raise PublishValidationError(
+                "No renderable Camera found in scene."
+            )
+
+    @classmethod
+    def repair(cls, instance):
+
+        rt.viewport.setType(rt.Name("view_camera"))
+        camera = rt.viewport.GetCamera()
+        cls.log.info(f"Camera {camera} set as renderable camera")
+        renderer_class = get_current_renderer()
+        renderer = str(renderer_class).split(":")[0]
+        if renderer == "Arnold":
+            arv = rt.MAXToAOps.ArnoldRenderView()
+            arv.setOption("Camera", str(camera))
+            arv.close()
+        instance.data["cameras"] = [camera.name]
diff --git a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
index 5fcb843b20..5ac41b10a0 100644
--- a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
+++ b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
@@ -6,11 +6,6 @@ from openpype.pipeline import (
 from pymxs import runtime as rt
 
 from openpype.hosts.max.api.lib import reset_scene_resolution
-from openpype.pipeline.context_tools import (
-    get_current_project_asset,
-    get_current_project
-)
-
 
 class ValidateResolutionSetting(pyblish.api.InstancePlugin,
                                 OptionalPyblishPluginMixin):
@@ -43,22 +38,16 @@
                                      "on asset or shot.")
 
     def get_db_resolution(self, instance):
-        data = ["data.resolutionWidth", "data.resolutionHeight"]
-        project_resolution = get_current_project(fields=data)
-        project_resolution_data = project_resolution["data"]
-        asset_resolution = get_current_project_asset(fields=data)
-        asset_resolution_data = asset_resolution["data"]
-        # Set project resolution
-        project_width = int(
-            project_resolution_data.get("resolutionWidth", 1920))
-        project_height = int(
-            project_resolution_data.get("resolutionHeight", 1080))
-        width = int(
-            asset_resolution_data.get("resolutionWidth", project_width))
-        height = int(
-            asset_resolution_data.get("resolutionHeight", project_height))
+        asset_doc = instance.data["assetEntity"]
+        project_doc = instance.context.data["projectEntity"]
+        for data in [asset_doc["data"], project_doc["data"]]:
+            if "resolutionWidth" in data and "resolutionHeight" in data:
+                width = data["resolutionWidth"]
+                height = data["resolutionHeight"]
+                return int(width), int(height)
 
-        return width, height
+        # Defaults if not found in asset document or project document
+        return 1920, 1080
 
     @classmethod
     def repair(cls, instance):
diff --git a/openpype/hosts/max/plugins/publish/validate_usd_plugin.py b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py
index 9957e62736..36c4291925 100644
--- a/openpype/hosts/max/plugins/publish/validate_usd_plugin.py
+++ b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py
@@ -1,9 +1,13 @@
 # -*- coding: utf-8 -*-
 """Validator for USD plugin."""
-from openpype.pipeline import PublishValidationError
 from pyblish.api import InstancePlugin, ValidatorOrder
 from pymxs import runtime as rt
 
+from openpype.pipeline import (
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
+
 
 def get_plugins() -> list:
     """Get plugin list from 3ds max."""
@@ -17,17 +21,25 @@ def get_plugins() -> list:
     return plugin_info_list
 
 
-class ValidateUSDPlugin(InstancePlugin):
+class ValidateUSDPlugin(OptionalPyblishPluginMixin,
+                        InstancePlugin):
     """Validates if USD plugin is installed or loaded in 3ds max."""
 
     order = ValidatorOrder - 0.01
     families = ["model"]
     hosts = ["max"]
-    label = "USD Plugin"
+    label = "Validate USD Plugin loaded"
+    optional = True
 
     def process(self, instance):
         """Plugin entry point."""
+        for sc in ValidateUSDPlugin.__subclasses__():
+            self.log.info(sc)
+
+        if not self.is_active(instance.data):
+            return
+
         plugin_info = get_plugins()
         usd_import = "usdimport.dli"
         if usd_import not in plugin_info:
diff --git a/openpype/hosts/maya/api/action.py b/openpype/hosts/maya/api/action.py
index 3b8e2c1848..277f4cc238 100644
--- a/openpype/hosts/maya/api/action.py
+++ b/openpype/hosts/maya/api/action.py
@@ -4,7 +4,6 @@ from __future__ import absolute_import
 
 import pyblish.api
 
 from openpype.client import get_asset_by_name
-from openpype.pipeline import legacy_io
 from openpype.pipeline.publish import get_errored_instances_from_context
 
 
@@ -80,7 +79,7 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action):
         asset_doc = instance.data.get("assetEntity")
         if not asset_doc:
             asset_name = instance.data["asset"]
-            project_name = legacy_io.active_project()
+            project_name = instance.context.data["projectName"]
             self.log.info((
                 "Asset is not stored on instance."
                 " Querying by name \"{}\" from project \"{}\""
diff --git a/openpype/hosts/maya/api/commands.py b/openpype/hosts/maya/api/commands.py
index 3e31875fd8..46494413b7 100644
--- a/openpype/hosts/maya/api/commands.py
+++ b/openpype/hosts/maya/api/commands.py
@@ -3,7 +3,7 @@ from maya import cmds
 
 from openpype.client import get_asset_by_name, get_project
-from openpype.pipeline import legacy_io
+from openpype.pipeline import get_current_project_name, get_current_asset_name
 
 
 class ToolWindows:
@@ -85,8 +85,8 @@ def reset_resolution():
     resolution_height = 1080
 
     # Get resolution from asset
-    project_name = legacy_io.active_project()
-    asset_name = legacy_io.Session["AVALON_ASSET"]
+    project_name = get_current_project_name()
+    asset_name = get_current_asset_name()
     asset_doc = get_asset_by_name(project_name, asset_name)
     resolution = _resolution_from_document(asset_doc)
     # Try get resolution from project
diff --git a/openpype/hosts/maya/api/fbx.py b/openpype/hosts/maya/api/fbx.py
index 260241f5fc..dbb3578f08 100644
--- a/openpype/hosts/maya/api/fbx.py
+++ b/openpype/hosts/maya/api/fbx.py
@@ -6,6 +6,7 @@ from pyblish.api import Instance
 
 from maya import cmds  # noqa
 import maya.mel as mel  # noqa
+from openpype.hosts.maya.api.lib import maintained_selection
 
 
 class FBXExtractor:
@@ -53,7 +54,6 @@ class FBXExtractor:
             "bakeComplexEnd": int,
             "bakeComplexStep": int,
             "bakeResampleAnimation": bool,
-            "animationOnly": bool,
             "useSceneName": bool,
             "quaternion": str,  # "euler"
             "shapes": bool,
@@ -63,7 +63,10 @@
             "embeddedTextures": bool,
             "inputConnections": bool,
             "upAxis": str,  # x, y or z,
-            "triangulate": bool
+            "triangulate": bool,
+            "fileVersion": str,
+            "skeletonDefinitions": bool,
+            "referencedAssetsContent": bool
         }
 
     @property
@@ -94,7 +97,6 @@
             "bakeComplexEnd": end_frame,
             "bakeComplexStep": 1,
            "bakeResampleAnimation": True,
-            "animationOnly": False,
             "useSceneName": False,
             "quaternion": "euler",
             "shapes": True,
@@ -104,7 +106,10 @@
             "embeddedTextures": False,
             "inputConnections": True,
             "upAxis": "y",
-            "triangulate": False
+            "triangulate": False,
+            "fileVersion": "FBX202000",
+            "skeletonDefinitions": False,
+            "referencedAssetsContent": False
         }
 
     def __init__(self, log=None):
@@ -198,5 +203,9 @@ class FBXExtractor:
             path (str): Path to use for export.
""" - cmds.select(members, r=True, noExpand=True) - mel.eval('FBXExport -f "{}" -s'.format(path)) + # The export requires forward slashes because we need + # to format it into a string in a mel expression + path = path.replace("\\", "/") + with maintained_selection(): + cmds.select(members, r=True, noExpand=True) + mel.eval('FBXExport -f "{}" -s'.format(path)) diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 8569bbd38f..7c49c837e9 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -25,23 +25,18 @@ from openpype.client import ( ) from openpype.settings import get_project_settings from openpype.pipeline import ( - legacy_io, + get_current_project_name, + get_current_asset_name, + get_current_task_name, discover_loader_plugins, loaders_from_representation, get_representation_path, load_container, - registered_host, -) -from openpype.pipeline.create import ( - legacy_create, - get_legacy_creator_by_name, -) -from openpype.pipeline.context_tools import ( - get_current_asset_name, - get_current_project_asset, - get_current_project_name, - get_current_task_name + registered_host ) +from openpype.lib import NumberDef +from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.create import CreateContext from openpype.lib.profiles_filtering import filter_profiles @@ -122,16 +117,14 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94} RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"] -DISPLAY_LIGHTS_VALUES = [ - "project_settings", "default", "all", "selected", "flat", "none" -] -DISPLAY_LIGHTS_LABELS = [ - "Use Project Settings", - "Default Lighting", - "All Lights", - "Selected Lights", - "Flat Lighting", - "No Lights" + +DISPLAY_LIGHTS_ENUM = [ + {"label": "Use Project Settings", "value": "project_settings"}, + {"label": "Default Lighting", "value": "default"}, + {"label": "All Lights", "value": "all"}, + {"label": "Selected Lights", "value": "selected"}, + {"label": "Flat Lighting", "value": "flat"}, + {"label": "No Lights", "value": "none"} ] @@ -153,6 +146,10 @@ def suspended_refresh(suspend=True): cmds.ogs(pause=True) is a toggle so we cant pass False. """ + if IS_HEADLESS: + yield + return + original_state = cmds.ogs(query=True, pause=True) try: if suspend and not original_state: @@ -190,6 +187,51 @@ def maintained_selection(): cmds.select(clear=True) +def get_namespace(node): + """Return namespace of given node""" + node_name = node.rsplit("|", 1)[-1] + if ":" in node_name: + return node_name.rsplit(":", 1)[0] + else: + return "" + + +def strip_namespace(node, namespace): + """Strip given namespace from node path. + + The namespace will only be stripped from names + if it starts with that namespace. If the namespace + occurs within another namespace it's not removed. + + Examples: + >>> strip_namespace("namespace:node", namespace="namespace:") + "node" + >>> strip_namespace("hello:world:node", namespace="hello:world") + "node" + >>> strip_namespace("hello:world:node", namespace="hello") + "world:node" + >>> strip_namespace("hello:world:node", namespace="world") + "hello:world:node" + >>> strip_namespace("ns:group|ns:node", namespace="ns") + "group|node" + + Returns: + str: Node name without given starting namespace. 
+ + """ + + # Ensure namespace ends with `:` + if not namespace.endswith(":"): + namespace = "{}:".format(namespace) + + # The long path for a node can also have the namespace + # in its parents so we need to remove it from each + return "|".join( + name[len(namespace):] if name.startswith(namespace) else name + for name in node.split("|") + ) + + def get_custom_namespace(custom_namespace): """Return unique namespace. @@ -343,8 +385,8 @@ def pairwise(iterable): return zip(a, a) -def collect_animation_data(fps=False): - """Get the basic animation data +def collect_animation_defs(fps=False): + """Get the basic animation attribute defintions for the publisher. Returns: OrderedDict @@ -363,17 +405,42 @@ def collect_animation_data(fps=False): handle_end = frame_end_handle - frame_end # build attributes - data = OrderedDict() - data["frameStart"] = frame_start - data["frameEnd"] = frame_end - data["handleStart"] = handle_start - data["handleEnd"] = handle_end - data["step"] = 1.0 + defs = [ + NumberDef("frameStart", + label="Frame Start", + default=frame_start, + decimals=0), + NumberDef("frameEnd", + label="Frame End", + default=frame_end, + decimals=0), + NumberDef("handleStart", + label="Handle Start", + default=handle_start, + decimals=0), + NumberDef("handleEnd", + label="Handle End", + default=handle_end, + decimals=0), + NumberDef("step", + label="Step size", + tooltip="A smaller step size means more samples and larger " + "output files.\n" + "A 1.0 step size is a single sample every frame.\n" + "A 0.5 step size is two samples per frame.\n" + "A 0.2 step size is five samples per frame.", + default=1.0, + decimals=3), + ] if fps: - data["fps"] = mel.eval('currentTimeUnitToFPS()') + current_fps = mel.eval('currentTimeUnitToFPS()') + fps_def = NumberDef( + "fps", label="FPS", default=current_fps, decimals=5 + ) + defs.append(fps_def) - return data + return defs def imprint(node, data): @@ -459,10 +526,10 @@ def lsattrs(attrs): attrs (dict): Name and value pairs of expected matches Example: - >> # Return nodes with an `age` of five. - >> lsattr({"age": "five"}) - >> # Return nodes with both `age` and `color` of five and blue. - >> lsattr({"age": "five", "color": "blue"}) + >>> # Return nodes with an `age` of five. + >>> lsattrs({"age": "five"}) + >>> # Return nodes with both `age` and `color` of five and blue. + >>> lsattrs({"age": "five", "color": "blue"}) Return: list: matching nodes. 
@@ -904,7 +971,7 @@ def no_display_layers(nodes):
 
 
 @contextlib.contextmanager
-def namespaced(namespace, new=True):
+def namespaced(namespace, new=True, relative_names=None):
     """Work inside namespace during context
 
     Args:
@@ -916,15 +983,19 @@
 
     """
     original = cmds.namespaceInfo(cur=True, absoluteName=True)
+    original_relative_names = cmds.namespace(query=True, relativeNames=True)
     if new:
         namespace = unique_namespace(namespace)
         cmds.namespace(add=namespace)
-
+    if relative_names is not None:
+        cmds.namespace(relativeNames=relative_names)
     try:
         cmds.namespace(set=namespace)
         yield namespace
     finally:
         cmds.namespace(set=original)
+        if relative_names is not None:
+            cmds.namespace(relativeNames=original_relative_names)
 
 
 @contextlib.contextmanager
@@ -1392,8 +1463,8 @@ def generate_ids(nodes, asset_id=None):
     if asset_id is None:
         # Get the asset ID from the database for the asset of current context
-        project_name = legacy_io.active_project()
-        asset_name = legacy_io.Session["AVALON_ASSET"]
+        project_name = get_current_project_name()
+        asset_name = get_current_asset_name()
         asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
         assert asset_doc, "No current asset found in Session"
         asset_id = asset_doc['_id']
@@ -1522,7 +1593,15 @@ def set_attribute(attribute, value, node):
             cmds.addAttr(node, longName=attribute, **kwargs)
 
     node_attr = "{}.{}".format(node, attribute)
-    if "dataType" in kwargs:
+    enum_type = cmds.attributeQuery(attribute, node=node, enum=True)
+    if enum_type and value_type == "str":
+        enum_string_values = cmds.attributeQuery(
+            attribute, node=node, listEnum=True
+        )[0].split(":")
+        cmds.setAttr(
+            "{}.{}".format(node, attribute), enum_string_values.index(value)
+        )
+    elif "dataType" in kwargs:
         attr_type = kwargs["dataType"]
         cmds.setAttr(node_attr, value, type=attr_type)
     else:
@@ -1585,17 +1664,15 @@ def get_container_members(container):
 
 
 # region LOOKDEV
-def list_looks(asset_id):
+def list_looks(project_name, asset_id):
     """Return all look subsets for the given asset
 
     This assumes all look subsets start with "look*" in their names.
     """
-
     # # get all subsets with look leading in
     # the name associated with the asset
     # TODO this should probably look for family 'look' instead of checking
     # subset name that can not start with family
-    project_name = legacy_io.active_project()
     subset_docs = get_subsets(project_name, asset_ids=[asset_id])
     return [
         subset_doc
@@ -1617,7 +1694,7 @@ def assign_look_by_version(nodes, version_id):
         None
     """
-    project_name = legacy_io.active_project()
+    project_name = get_current_project_name()
 
     # Get representations of shader file and relationships
     look_representation = get_representation_by_name(
@@ -1683,7 +1760,7 @@ def assign_look(nodes, subset="lookDefault"):
             parts = pype_id.split(":", 1)
             grouped[parts[0]].append(node)
 
-    project_name = legacy_io.active_project()
+    project_name = get_current_project_name()
     subset_docs = get_subsets(
         project_name, subset_names=[subset], asset_ids=grouped.keys()
     )
@@ -2197,6 +2274,35 @@ def set_scene_resolution(width, height, pixelAspect):
     cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
 
 
+def get_fps_for_current_context():
+    """Get fps that should be set for current context.
+
+    Todos:
+        - Skip project value.
+        - Merge logic with 'get_frame_range' and 'reset_scene_resolution' ->
+            all the values in the functions can be collected at one place as
+            they have same requirements.
+
+    Returns:
+        Union[int, float]: FPS value.
+ """ + + project_name = get_current_project_name() + asset_name = get_current_asset_name() + asset_doc = get_asset_by_name( + project_name, asset_name, fields=["data.fps"] + ) or {} + fps = asset_doc.get("data", {}).get("fps") + if not fps: + project_doc = get_project(project_name, fields=["data.fps"]) or {} + fps = project_doc.get("data", {}).get("fps") + + if not fps: + fps = 25 + + return convert_to_maya_fps(fps) + + def get_frame_range(include_animation_range=False): """Get the current assets frame range and handles. @@ -2271,10 +2377,7 @@ def reset_frame_range(playback=True, render=True, fps=True): fps (bool, Optional): Whether to set scene FPS. Defaults to True. """ if fps: - fps = convert_to_maya_fps( - float(legacy_io.Session.get("AVALON_FPS", 25)) - ) - set_scene_fps(fps) + set_scene_fps(get_fps_for_current_context()) frame_range = get_frame_range(include_animation_range=True) if not frame_range: @@ -2310,7 +2413,7 @@ def reset_scene_resolution(): None """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) project_data = project_doc["data"] asset_data = get_current_project_asset()["data"] @@ -2343,19 +2446,9 @@ def set_context_settings(): None """ - # Todo (Wijnand): apply renderer and resolution of project - project_name = legacy_io.active_project() - project_doc = get_project(project_name) - project_data = project_doc["data"] - asset_doc = get_current_project_asset(fields=["data.fps"]) - asset_data = asset_doc.get("data", {}) # Set project fps - fps = convert_to_maya_fps( - asset_data.get("fps", project_data.get("fps", 25)) - ) - legacy_io.Session["AVALON_FPS"] = str(fps) - set_scene_fps(fps) + set_scene_fps(get_fps_for_current_context()) reset_scene_resolution() @@ -2375,9 +2468,7 @@ def validate_fps(): """ - expected_fps = convert_to_maya_fps( - get_current_project_asset(fields=["data.fps"])["data"]["fps"] - ) + expected_fps = get_fps_for_current_context() current_fps = mel.eval('currentTimeUnitToFPS()') fps_match = current_fps == expected_fps @@ -2533,7 +2624,7 @@ def bake_to_world_space(nodes, new_name = "{0}_baked".format(short_name) new_node = cmds.duplicate(node, name=new_name, - renameChildren=True)[0] + renameChildren=True)[0] # noqa # Connect all attributes on the node except for transform # attributes @@ -4062,14 +4153,19 @@ def create_rig_animation_instance( """ if options is None: options = {} - + name = context["representation"]["name"] output = next((node for node in nodes if node.endswith("out_SET")), None) controls = next((node for node in nodes if node.endswith("controls_SET")), None) + if name != "fbx": + assert output, "No out_SET in rig, this is a bug." + assert controls, "No controls_SET in rig, this is a bug." - assert output, "No out_SET in rig, this is a bug." - assert controls, "No controls_SET in rig, this is a bug." + anim_skeleton = next((node for node in nodes if + node.endswith("skeletonAnim_SET")), None) + skeleton_mesh = next((node for node in nodes if + node.endswith("skeletonMesh_SET")), None) # Find the roots amongst the loaded nodes roots = ( @@ -4078,14 +4174,10 @@ def create_rig_animation_instance( ) assert roots, "No root nodes in rig, this is a bug." 
-    asset = legacy_io.Session["AVALON_ASSET"]
-    dependency = str(context["representation"]["_id"])
-
     custom_subset = options.get("animationSubsetName")
     if custom_subset:
         formatting_data = {
-            "asset_name": context['asset']['name'],
-            "asset_type": context['asset']['type'],
+            "asset": context["asset"],
             "subset": context['subset']['name'],
             "family": (
                 context['subset']['data'].get('family') or
@@ -4101,14 +4193,19 @@ def create_rig_animation_instance(
     if log:
         log.info("Creating subset: {}".format(namespace))
 
+    # Fill creator identifier
+    creator_identifier = "io.openpype.creators.maya.animation"
+
+    host = registered_host()
+    create_context = CreateContext(host)
 
     # Create the animation instance
-    creator_plugin = get_legacy_creator_by_name("CreateAnimation")
+    rig_sets = [output, controls, anim_skeleton, skeleton_mesh]
+    # Remove sets that this particular rig does not have
+    rig_sets = [s for s in rig_sets if s is not None]
     with maintained_selection():
-        cmds.select([output, controls] + roots, noExpand=True)
-        legacy_create(
-            creator_plugin,
-            name=namespace,
-            asset=asset,
-            options={"useSelection": True},
-            data={"dependencies": dependency}
+        cmds.select(rig_sets + roots, noExpand=True)
+        create_context.create(
+            creator_identifier=creator_identifier,
+            variant=namespace,
+            pre_create_data={"use_selection": True}
         )
diff --git a/openpype/hosts/maya/api/lib_renderproducts.py b/openpype/hosts/maya/api/lib_renderproducts.py
index a6bcd003a5..b5b71a5a36 100644
--- a/openpype/hosts/maya/api/lib_renderproducts.py
+++ b/openpype/hosts/maya/api/lib_renderproducts.py
@@ -177,7 +177,7 @@ def get(layer, render_instance=None):
     }.get(renderer_name.lower(), None)
     if renderer is None:
         raise UnsupportedRendererException(
-            "unsupported {}".format(renderer_name)
+            "Unsupported renderer: {}".format(renderer_name)
         )
 
     return renderer(layer, render_instance)
@@ -274,12 +274,14 @@ class ARenderProducts:
                 "Unsupported renderer {}".format(self.renderer)
             )
 
+        # Note: When this attribute is never set (e.g. on maya launch) then
+        # this can return None even though it is a string attribute
         prefix = self._get_attr(prefix_attr)
 
         if not prefix:
             # Fall back to scene name by default
-            log.debug("Image prefix not set, using <Scene>")
-            file_prefix = "<Scene>"
+            log.warning("Image prefix not set, using <Scene>")
+            prefix = "<Scene>"
 
         return prefix
diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py
index eaa728a2f6..42cf29d0a7 100644
--- a/openpype/hosts/maya/api/lib_rendersettings.py
+++ b/openpype/hosts/maya/api/lib_rendersettings.py
@@ -6,13 +6,9 @@ import six
 import sys
 
 from openpype.lib import Logger
-from openpype.settings import (
-    get_project_settings,
-    get_current_project_settings
-)
+from openpype.settings import get_project_settings
 
-from openpype.pipeline import legacy_io
-from openpype.pipeline import CreatorError
+from openpype.pipeline import CreatorError, get_current_project_name
 from openpype.pipeline.context_tools import get_current_project_asset
 from openpype.hosts.maya.api.lib import reset_frame_range
 
@@ -27,21 +23,6 @@ class RenderSettings(object):
         'mayahardware2': 'defaultRenderGlobals.imageFilePrefix'
     }
 
-    _image_prefixes = {
-        'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"],  # noqa
-        'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"],  # noqa
-        'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_prefix"],  # noqa
-        'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"]  # noqa
-    }
-
-    # Renderman only
-    _image_dir = {
-        'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_dir"],  # noqa
-        'cryptomatte': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["cryptomatte_dir"],  # noqa
-        'imageDisplay': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["imageDisplay_dir"],  # noqa
-        "watermark": get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["watermark_dir"]  # noqa
-    }
-
     _aov_chars = {
         "dot": ".",
         "dash": "-",
@@ -55,11 +36,30 @@ class RenderSettings(object):
         return cls._image_prefix_nodes[renderer]
 
     def __init__(self, project_settings=None):
-        self._project_settings = project_settings
-        if not self._project_settings:
-            self._project_settings = get_project_settings(
-                legacy_io.Session["AVALON_PROJECT"]
+        if not project_settings:
+            project_settings = get_project_settings(
+                get_current_project_name()
             )
+        render_settings = project_settings["maya"]["RenderSettings"]
+        image_prefixes = {
+            "vray": render_settings["vray_renderer"]["image_prefix"],
+            "arnold": render_settings["arnold_renderer"]["image_prefix"],
+            "renderman": render_settings["renderman_renderer"]["image_prefix"],
+            "redshift": render_settings["redshift_renderer"]["image_prefix"]
+        }
+
+        # TODO probably should be stored to more explicit attribute
+        # Renderman only
+        renderman_settings = render_settings["renderman_renderer"]
+        _image_dir = {
+            "renderman": renderman_settings["image_dir"],
+            "cryptomatte": renderman_settings["cryptomatte_dir"],
+            "imageDisplay": renderman_settings["imageDisplay_dir"],
+            "watermark": renderman_settings["watermark_dir"]
+        }
+        self._image_prefixes = image_prefixes
+        self._image_dir = _image_dir
+        self._project_settings = project_settings
 
     def set_default_renderer_settings(self, renderer=None):
         """Set basic settings based on renderer."""
@@ -177,12 +177,7 @@ class RenderSettings(object):
         # list all the aovs
         all_rs_aovs = cmds.ls(type='RedshiftAOV')
         for rs_aov in redshift_aovs:
-            rs_layername = rs_aov
-            if " " in rs_aov:
-                rs_renderlayer = rs_aov.replace(" ", "")
-                rs_layername = "rsAov_{}".format(rs_renderlayer)
-            else:
-                rs_layername = "rsAov_{}".format(rs_aov)
+            rs_layername = "rsAov_{}".format(rs_aov.replace(" ", ""))
             if rs_layername in all_rs_aovs:
                 continue
             cmds.rsCreateAov(type=rs_aov)
@@ -317,7 +312,7 @@ class RenderSettings(object):
         separators = [cmds.menuItem(i, query=True, label=True) for i in items]  # noqa: E501
         try:
             sep_idx = separators.index(aov_separator)
-        except ValueError as e:
+        except ValueError:
             six.reraise(
                 CreatorError,
                 CreatorError(
diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py
index 5284c0249d..18a4ea0e9a 100644
--- a/openpype/hosts/maya/api/menu.py
+++ b/openpype/hosts/maya/api/menu.py
@@ -1,13 +1,16 @@
 import os
 import logging
+from functools import partial
 
 from qtpy import QtWidgets, QtGui
 
 import maya.utils
 import maya.cmds as cmds
 
-from openpype.settings import get_project_settings
-from openpype.pipeline import legacy_io
+from openpype.pipeline import (
+    get_current_asset_name,
+    get_current_task_name
+)
 from openpype.pipeline.workfile import BuildWorkfile
 from openpype.tools.utils import host_tools
 from openpype.hosts.maya.api import lib, lib_rendersettings
@@ -35,29 +38,32 @@ def _get_menu(menu_name=None):
     return widgets.get(menu_name)
 
 
-def install():
+def get_context_label():
+    return "{}, {}".format(
+        get_current_asset_name(),
+        get_current_task_name()
+    )
+
+
+def install(project_settings):
     if cmds.about(batch=True):
         log.info("Skipping openpype.menu initialization in batch mode..")
         return
 
-    def deferred():
+    def add_menu():
         pyblish_icon = host_tools.get_pyblish_icon()
         parent_widget = get_main_window()
         cmds.menu(
             MENU_NAME,
-            label=legacy_io.Session["AVALON_LABEL"],
+            label=os.environ.get("AVALON_LABEL") or "OpenPype",
             tearOff=True,
             parent="MayaWindow"
         )
 
         # Create context menu
-        context_label = "{}, {}".format(
-            legacy_io.Session["AVALON_ASSET"],
-            legacy_io.Session["AVALON_TASK"]
-        )
         cmds.menuItem(
             "currentContext",
-            label=context_label,
+            label=get_context_label(),
             parent=MENU_NAME,
             enable=False
         )
@@ -66,10 +72,12 @@ def install():
 
         cmds.menuItem(divider=True)
 
-        # Create default items
         cmds.menuItem(
             "Create...",
-            command=lambda *args: host_tools.show_creator(parent=parent_widget)
+            command=lambda *args: host_tools.show_publisher(
+                parent=parent_widget,
+                tab="create"
+            )
         )
 
         cmds.menuItem(
@@ -82,8 +90,9 @@ def install():
 
         cmds.menuItem(
             "Publish...",
-            command=lambda *args: host_tools.show_publish(
-                parent=parent_widget
+            command=lambda *args: host_tools.show_publisher(
+                parent=parent_widget,
+                tab="publish"
             ),
             image=pyblish_icon
         )
@@ -181,7 +190,7 @@ def install():
 
         cmds.setParent(MENU_NAME, menu=True)
 
-    def add_scripts_menu():
+    def add_scripts_menu(project_settings):
         try:
             import scriptsmenu.launchformaya as launchformaya
         except ImportError:
@@ -191,8 +200,6 @@
             )
             return
 
-        # load configuration of custom menu
-        project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
        config = project_settings["maya"]["scriptsmenu"]["definition"]
         _menu = project_settings["maya"]["scriptsmenu"]["name"]
 
@@ -214,8 +221,9 @@
     # so that it only gets called after Maya UI has initialized too.
# This is crucial with Maya 2020+ which initializes without UI # first as a QCoreApplication - maya.utils.executeDeferred(deferred) - cmds.evalDeferred(add_scripts_menu, lowestPriority=True) + maya.utils.executeDeferred(add_menu) + cmds.evalDeferred(partial(add_scripts_menu, project_settings), + lowestPriority=True) def uninstall(): @@ -249,8 +257,5 @@ def update_menu_task_label(): log.warning("Can't find menuItem: {}".format(object_name)) return - label = "{}, {}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) + label = get_context_label() cmds.menuItem(object_name, edit=True, label=label) diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index e2d00b5bd7..6b791c9665 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -1,3 +1,5 @@ +import json +import base64 import os import errno import logging @@ -14,6 +16,7 @@ from openpype.host import ( HostBase, IWorkfileHost, ILoadHost, + IPublishHost, HostDirmap, ) from openpype.tools.utils import host_tools @@ -24,6 +27,7 @@ from openpype.lib import ( ) from openpype.pipeline import ( legacy_io, + get_current_project_name, register_loader_plugin_path, register_inventory_action_path, register_creator_plugin_path, @@ -64,7 +68,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") AVALON_CONTAINERS = ":AVALON_CONTAINERS" -class MayaHost(HostBase, IWorkfileHost, ILoadHost): +class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): name = "maya" def __init__(self): @@ -72,7 +76,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): self._op_events = {} def install(self): - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_settings = get_project_settings(project_name) # process path mapping dirmap_processor = MayaDirmap("maya", project_name, project_settings) @@ -91,6 +95,8 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): self.log.info("Installing callbacks ... 
") register_event_callback("init", on_init) + _set_project() + if lib.IS_HEADLESS: self.log.info(( "Running in headless mode, skipping Maya save/open/new" @@ -99,10 +105,9 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): return - _set_project() self._register_callbacks() - menu.install() + menu.install(project_settings) register_event_callback("save", on_save) register_event_callback("open", on_open) @@ -150,6 +155,20 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): with lib.maintained_selection(): yield + def get_context_data(self): + data = cmds.fileInfo("OpenPypeContext", query=True) + if not data: + return {} + + data = data[0] # Maya seems to return a list + decoded = base64.b64decode(data).decode("utf-8") + return json.loads(decoded) + + def update_context_data(self, data, changes): + json_str = json.dumps(data) + encoded = base64.b64encode(json_str.encode("utf-8")) + return cmds.fileInfo("OpenPypeContext", encoded) + def _register_callbacks(self): for handler, event in self._op_events.copy().items(): if event is None: @@ -303,7 +322,7 @@ def _remove_workfile_lock(): def handle_workfile_locks(): if lib.IS_HEADLESS: return False - project_name = legacy_io.active_project() + project_name = get_current_project_name() return is_workfile_lock_enabled(MayaHost.name, project_name) @@ -639,17 +658,6 @@ def on_task_changed(): lib.set_context_settings() lib.update_content_on_context_change() - msg = " project: {}\n asset: {}\n task:{}".format( - legacy_io.active_project(), - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) - - lib.show_message( - "Context was changed", - ("Context was changed to:\n{}".format(msg)), - ) - def before_workfile_open(): if handle_workfile_locks(): @@ -657,7 +665,7 @@ def before_workfile_open(): def before_workfile_save(event): - project_name = legacy_io.active_project() + project_name = get_current_project_name() if handle_workfile_locks(): _remove_workfile_lock() workdir_path = event["workdir_path"] diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 0971251469..3b54954c8a 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -1,29 +1,50 @@ +import json import os - -from maya import cmds +from abc import ABCMeta import qargparse +import six +from maya import cmds +from maya.app.renderSetup.model import renderSetup -from openpype.lib import Logger +from openpype.lib import BoolDef, Logger +from openpype.settings import get_project_settings from openpype.pipeline import ( + AVALON_CONTAINER_ID, + Anatomy, + + CreatedInstance, + Creator as NewCreator, + AutoCreator, + HiddenCreator, + + CreatorError, LegacyCreator, LoaderPlugin, get_representation_path, - AVALON_CONTAINER_ID, - Anatomy, ) from openpype.pipeline.load import LoadError -from openpype.settings import get_project_settings -from .pipeline import containerise -from . import lib +from openpype.client import get_asset_by_name +from openpype.pipeline.create import get_subset_name +from . import lib +from .lib import imprint, read +from .pipeline import containerise log = Logger.get_logger() +def _get_attr(node, attr, default=None): + """Helper to get attribute which allows attribute to not exist.""" + if not cmds.attributeQuery(attr, node=node, exists=True): + return default + return cmds.getAttr("{}.{}".format(node, attr)) + + # Backwards compatibility: these functions has been moved to lib. 
def get_reference_node(*args, **kwargs):
-    """
+    """Get the reference node from the container members.
+
     Deprecated:
         This function was moved and will be removed in 3.16.x.
     """
@@ -60,9 +81,583 @@ class Creator(LegacyCreator):
         return instance
+@six.add_metaclass(ABCMeta)
+class MayaCreatorBase(object):
+
+    @staticmethod
+    def cache_subsets(shared_data):
+        """Cache instances for Creators to shared data.
+
+        Create `maya_cached_subsets` key when needed in shared data and
+        fill it with all collected instances from the scene under its
+        respective creator identifiers.
+
+        If legacy instances are detected in the scene, create
+        `maya_cached_legacy_subsets` there and fill it with
+        all legacy subsets, keyed by family.
+
+        Args:
+            shared_data (Dict[str, Any]): Shared data.
+
+        Returns:
+            Dict[str, Any]: Shared data dictionary.
+
+        """
+        if shared_data.get("maya_cached_subsets") is None:
+            cache = dict()
+            cache_legacy = dict()
+
+            for node in cmds.ls(type="objectSet"):
+
+                if _get_attr(node, attr="id") != "pyblish.avalon.instance":
+                    continue
+
+                creator_id = _get_attr(node, attr="creator_identifier")
+                if creator_id is not None:
+                    # creator instance
+                    cache.setdefault(creator_id, []).append(node)
+                else:
+                    # legacy instance
+                    family = _get_attr(node, attr="family")
+                    if family is None:
+                        # must be a broken instance
+                        continue
+
+                    cache_legacy.setdefault(family, []).append(node)
+
+            shared_data["maya_cached_subsets"] = cache
+            shared_data["maya_cached_legacy_subsets"] = cache_legacy
+        return shared_data
+
+    def get_publish_families(self):
+        """Return families for the instances of this creator.
+
+        Allow a Creator to define multiple families so that a creator can
+        e.g. specify `usd` and `usdMaya` and another USD creator can also
+        specify `usd` but apply different extractors like `usdMultiverse`.
+
+        There is no need to override this method if you only have the
+        primary family defined by the `family` property as that will always
+        be set.
+
+        Returns:
+            list: families for instances of this creator
+
+        """
+        return []
+
+    def imprint_instance_node(self, node, data):
+
+        # We never store the instance_node as value on the node since
+        # it's the node name itself
+        data.pop("instance_node", None)
+        data.pop("instance_id", None)
+
+        # Don't store `families` since it's up to the creator itself
+        # to define the initial publish families - not a stored attribute of
+        # `families`
+        data.pop("families", None)
+
+        # We store creator attributes at the root level and assume they
+        # will not clash in names with `subset`, `task`, etc. and other
+        # default names. This is just so these attributes in many cases
+        # are still editable in the maya UI by artists. 
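+        # Complex values (lists, tuples, dicts) which `imprint` cannot
+        # store directly are JSON-serialized further below.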
+        # note: pop to move to end of dict to sort attributes last on the node
+        creator_attributes = data.pop("creator_attributes", {})
+
+        # We only flatten value types which the `imprint` function supports
+        json_creator_attributes = {}
+        for key, value in dict(creator_attributes).items():
+            if isinstance(value, (list, tuple, dict)):
+                creator_attributes.pop(key)
+                json_creator_attributes[key] = value
+
+        # Flatten remaining creator attributes to the node itself
+        data.update(creator_attributes)
+
+        # We know "publish_attributes" will be complex data of settings
+        # per plugin, so we store it as a flattened json structure
+        # pop to move to end of dict to sort attributes last on the node
+        data["publish_attributes"] = json.dumps(
+            data.pop("publish_attributes", {})
+        )
+
+        # Persist the non-flattened creator attributes (special value types,
+        # like multiselection EnumDef)
+        data["creator_attributes"] = json.dumps(json_creator_attributes)
+
+        # Since we flattened the data structure for creator attributes we want
+        # to correctly detect which flattened attributes should end back in the
+        # creator attributes when reading the data from the node, so we store
+        # the relevant keys as a string
+        data["__creator_attributes_keys"] = ",".join(creator_attributes.keys())
+
+        # Kill any existing attributes just so we can imprint cleanly again
+        for attr in data.keys():
+            if cmds.attributeQuery(attr, node=node, exists=True):
+                cmds.deleteAttr("{}.{}".format(node, attr))
+
+        return imprint(node, data)
+
+    def read_instance_node(self, node):
+        node_data = read(node)
+
+        # Never care about a cbId attribute on the object set
+        # being read as 'data'
+        node_data.pop("cbId", None)
+
+        # Make sure we convert any creator attributes from the json string
+        creator_attributes = node_data.get("creator_attributes")
+        if creator_attributes:
+            node_data["creator_attributes"] = json.loads(creator_attributes)
+        else:
+            node_data["creator_attributes"] = {}
+
+        # Move the relevant attributes into "creator_attributes" that
+        # we flattened originally
+        creator_attribute_keys = node_data.pop("__creator_attributes_keys",
+                                               "").split(",")
+        for key in creator_attribute_keys:
+            if key in node_data:
+                node_data["creator_attributes"][key] = node_data.pop(key)
+
+        # Make sure we convert any publish attributes from the json string
+        publish_attributes = node_data.get("publish_attributes")
+        if publish_attributes:
+            node_data["publish_attributes"] = json.loads(publish_attributes)
+
+        # Explicitly store the node name as both instance node and id
+        node_data["instance_node"] = node
+        node_data["instance_id"] = node
+
+        # If the creator plug-in specifies publish families, apply them
+        families = self.get_publish_families()
+        if families:
+            node_data["families"] = families
+
+        return node_data
+
+    def _default_collect_instances(self):
+        self.cache_subsets(self.collection_shared_data)
+        cached_subsets = self.collection_shared_data["maya_cached_subsets"]
+        for node in cached_subsets.get(self.identifier, []):
+            node_data = self.read_instance_node(node)
+
+            created_instance = CreatedInstance.from_existing(node_data, self)
+            self._add_instance_to_context(created_instance)
+
+    def _default_update_instances(self, update_list):
+        for created_inst, _changes in update_list:
+            data = created_inst.data_to_store()
+            node = data.get("instance_node")
+
+            self.imprint_instance_node(node, data)
+
+    def _default_remove_instances(self, instances):
+        """Remove specified instance from the scene. 
+
+        Deletes the instance's objectSet node from the scene and removes
+        the instance from the create context.
+
+        """
+        for instance in instances:
+            node = instance.data.get("instance_node")
+            if node:
+                cmds.delete(node)
+
+            self._remove_instance_from_context(instance)
+
+
+@six.add_metaclass(ABCMeta)
+class MayaCreator(NewCreator, MayaCreatorBase):
+
+    settings_name = None
+
+    def create(self, subset_name, instance_data, pre_create_data):
+
+        members = list()
+        if pre_create_data.get("use_selection"):
+            members = cmds.ls(selection=True)
+
+        # Allow a Creator to define multiple families
+        publish_families = self.get_publish_families()
+        if publish_families:
+            families = instance_data.setdefault("families", [])
+            for family in self.get_publish_families():
+                if family not in families:
+                    families.append(family)
+
+        with lib.undo_chunk():
+            instance_node = cmds.sets(members, name=subset_name)
+            instance_data["instance_node"] = instance_node
+            instance = CreatedInstance(
+                self.family,
+                subset_name,
+                instance_data,
+                self)
+            self._add_instance_to_context(instance)
+
+            self.imprint_instance_node(instance_node,
+                                       data=instance.data_to_store())
+            return instance
+
+    def collect_instances(self):
+        return self._default_collect_instances()
+
+    def update_instances(self, update_list):
+        return self._default_update_instances(update_list)
+
+    def remove_instances(self, instances):
+        return self._default_remove_instances(instances)
+
+    def get_pre_create_attr_defs(self):
+        return [
+            BoolDef("use_selection",
+                    label="Use selection",
+                    default=True)
+        ]
+
+    def apply_settings(self, project_settings):
+        """Method called on initialization of plugin to apply settings."""
+
+        settings_name = self.settings_name
+        if settings_name is None:
+            settings_name = self.__class__.__name__
+
+        settings = project_settings["maya"]["create"]
+        settings = settings.get(settings_name)
+        if settings is None:
+            self.log.debug(
+                "No settings found for {}".format(self.__class__.__name__)
+            )
+            return
+
+        for key, value in settings.items():
+            setattr(self, key, value)
+
+
+class MayaAutoCreator(AutoCreator, MayaCreatorBase):
+    """Automatically triggered creator for Maya.
+
+    The plugin is not visible in the UI and its 'create' method does not
+    expect any arguments.
+    """
+
+    def collect_instances(self):
+        return self._default_collect_instances()
+
+    def update_instances(self, update_list):
+        return self._default_update_instances(update_list)
+
+    def remove_instances(self, instances):
+        return self._default_remove_instances(instances)
+
+
+class MayaHiddenCreator(HiddenCreator, MayaCreatorBase):
+    """Hidden creator for Maya.
+
+    The plugin is not visible in the UI and does not have strictly defined
+    arguments for its 'create' method.
+    """
+
+    def create(self, *args, **kwargs):
+        return MayaCreator.create(self, *args, **kwargs)
+
+    def collect_instances(self):
+        return self._default_collect_instances()
+
+    def update_instances(self, update_list):
+        return self._default_update_instances(update_list)
+
+    def remove_instances(self, instances):
+        return self._default_remove_instances(instances)
+
+
+def ensure_namespace(namespace):
+    """Make sure the namespace exists.
+
+    Args:
+        namespace (str): The preferred namespace name. 
+
+    Returns:
+        str: The generated or existing namespace
+
+    """
+    exists = cmds.namespace(exists=namespace)
+    if exists:
+        return namespace
+    else:
+        return cmds.namespace(add=namespace)
+
+
+class RenderlayerCreator(NewCreator, MayaCreatorBase):
+    """Creator which creates an instance per renderlayer in the workfile.
+
+    Creates and manages a renderlayer subset per renderLayer in the workfile.
+    This generates a singleton node in the scene which, if it exists, tells the
+    Creator to collect Maya rendersetup renderlayers as individual instances.
+    As such, triggering create doesn't actually create the instance node per
+    layer but only the node which tells the Creator it may now collect
+    an instance per renderlayer.
+
+    """
+
+    # Required to be overridden in subclass
+    singleton_node_name = ""
+
+    # Optional to be overridden in subclass
+    layer_instance_prefix = None
+
+    def _get_singleton_node(self, return_all=False):
+        nodes = lib.lsattr("pre_creator_identifier", self.identifier)
+        if nodes:
+            return nodes if return_all else nodes[0]
+
+    def create(self, subset_name, instance_data, pre_create_data):
+        # A Renderlayer is never explicitly created using the create method.
+        # Instead, renderlayers from the scene are collected. Thus "create"
+        # would only ever be called to say, 'hey, please refresh collect'
+        self.create_singleton_node()
+
+        # If no render layers are present, create a default one with
+        # an asterisk selector
+        rs = renderSetup.instance()
+        if not rs.getRenderLayers():
+            render_layer = rs.createRenderLayer("Main")
+            collection = render_layer.createCollection("defaultCollection")
+            collection.getSelector().setPattern('*')
+
+        # Calling collect here makes the renderlayer instances appear
+        # directly even though RenderlayerCreator.create only collects
+        # scene renderlayers and doesn't actually 'create' any scene
+        # contents.
+        self.collect_instances()
+
+    def create_singleton_node(self):
+        if self._get_singleton_node():
+            raise CreatorError("A Render instance already exists - only "
+                               "one can be configured.")
+
+        with lib.undo_chunk():
+            node = cmds.sets(empty=True, name=self.singleton_node_name)
+            lib.imprint(node, data={
+                "pre_creator_identifier": self.identifier
+            })
+
+        return node
+
+    def collect_instances(self):
+
+        # We only collect if the global render instance exists
+        if not self._get_singleton_node():
+            return
+
+        rs = renderSetup.instance()
+        layers = rs.getRenderLayers()
+        for layer in layers:
+            layer_instance_node = self.find_layer_instance_node(layer)
+            if layer_instance_node:
+                data = self.read_instance_node(layer_instance_node)
+                instance = CreatedInstance.from_existing(data, creator=self)
+            else:
+                # No existing scene instance node for this layer. Note that
+                # this instance will not have the `instance_node` data yet
+                # until it's been saved/persisted at least once. 
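+                # Build the instance from the current context; the subset
+                # name is derived from the layer name used as the variant.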
+                project_name = self.create_context.get_current_project_name()
+
+                instance_data = {
+                    "asset": self.create_context.get_current_asset_name(),
+                    "task": self.create_context.get_current_task_name(),
+                    "variant": layer.name(),
+                }
+                asset_doc = get_asset_by_name(project_name,
+                                              instance_data["asset"])
+                subset_name = self.get_subset_name(
+                    layer.name(),
+                    instance_data["task"],
+                    asset_doc,
+                    project_name)
+
+                instance = CreatedInstance(
+                    family=self.family,
+                    subset_name=subset_name,
+                    data=instance_data,
+                    creator=self
+                )
+
+            instance.transient_data["layer"] = layer
+            self._add_instance_to_context(instance)
+
+    def find_layer_instance_node(self, layer):
+        connected_sets = cmds.listConnections(
+            "{}.message".format(layer.name()),
+            source=False,
+            destination=True,
+            type="objectSet"
+        ) or []
+
+        for node in connected_sets:
+            if not cmds.attributeQuery("creator_identifier",
+                                       node=node,
+                                       exists=True):
+                continue
+
+            creator_identifier = cmds.getAttr(node + ".creator_identifier")
+            if creator_identifier == self.identifier:
+                self.log.info("Found node: {}".format(node))
+                return node
+
+    def _create_layer_instance_node(self, layer):
+
+        # We can only create a layer instance node if a CreateRender
+        # instance exists
+        create_render_set = self._get_singleton_node()
+        if not create_render_set:
+            raise CreatorError("Creating a renderlayer instance node is not "
+                               "allowed if no 'CreateRender' instance exists")
+
+        namespace = "_{}".format(self.singleton_node_name)
+        namespace = ensure_namespace(namespace)
+
+        name = "{}:{}".format(namespace, layer.name())
+        render_set = cmds.sets(name=name, empty=True)
+
+        # Keep an active link with the renderlayer so we can retrieve it
+        # later by a physical maya connection instead of relying on the layer
+        # name
+        cmds.addAttr(render_set, longName="renderlayer", at="message")
+        cmds.connectAttr("{}.message".format(layer.name()),
+                         "{}.renderlayer".format(render_set), force=True)
+
+        # Add the set to the 'CreateRender' set.
+        cmds.sets(render_set, forceElement=create_render_set)
+
+        return render_set
+
+    def update_instances(self, update_list):
+        # We only generate the persisting layer data into the scene once
+        # we save with the UI on e.g. validate or publish
+        for instance, _changes in update_list:
+            instance_node = instance.data.get("instance_node")
+
+            # Ensure a node exists to persist the data to
+            if not instance_node:
+                layer = instance.transient_data["layer"]
+                instance_node = self._create_layer_instance_node(layer)
+                instance.data["instance_node"] = instance_node
+
+            self.imprint_instance_node(instance_node,
+                                       data=instance.data_to_store())
+
+    def imprint_instance_node(self, node, data):
+        # Do not ever try to update the `renderlayer` since it'll try
+        # to remove the attribute and recreate it but fail to keep it a
+        # message attribute link. We only ever imprint that on the initial
+        # node creation.
+        # TODO: Improve how this is handled
+        data.pop("renderlayer", None)
+        data.get("creator_attributes", {}).pop("renderlayer", None)
+
+        return super(RenderlayerCreator, self).imprint_instance_node(node,
+                                                                     data=data)
+
+    def remove_instances(self, instances):
+        """Remove specified instances from the scene.
+
+        Removes the 'CreateRender' singleton node, all renderlayer
+        instances of this creator and their stored per-layer settings.
+
+        """
+        # Instead of removing a single instance or renderlayer, we remove
+        # the CreateRender node this creator relies on to decide whether
+        # it should collect anything at all. 
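+        # Delete all singleton nodes that were found (normally just one).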
+        nodes = self._get_singleton_node(return_all=True)
+        if nodes:
+            cmds.delete(nodes)
+
+        # Remove ALL the instances even if only one gets deleted
+        for instance in list(self.create_context.instances):
+            if instance.get("creator_identifier") == self.identifier:
+                self._remove_instance_from_context(instance)
+
+                # Remove the stored settings per renderlayer too
+                node = instance.data.get("instance_node")
+                if node and cmds.objExists(node):
+                    cmds.delete(node)
+
+    def get_subset_name(
+        self,
+        variant,
+        task_name,
+        asset_doc,
+        project_name,
+        host_name=None,
+        instance=None
+    ):
+        # Use 'layer_instance_prefix' instead of 'self.family' because the
+        # creator's family (e.g. 'renderlayer') is not the 'render' prefix
+        # expected in the subset name
+        return get_subset_name(self.layer_instance_prefix,
+                               variant,
+                               task_name,
+                               asset_doc,
+                               project_name)
+
+
 class Loader(LoaderPlugin):
     hosts = ["maya"]
+    load_settings = {}  # defined in settings
+
+    @classmethod
+    def apply_settings(cls, project_settings, system_settings):
+        super(Loader, cls).apply_settings(project_settings, system_settings)
+        cls.load_settings = project_settings['maya']['load']
+
+    def get_custom_namespace_and_group(self, context, options, loader_key):
+        """Queries settings for custom namespace and group name templates.
+
+        The group template might be empty; this forces imported items
+        not to be wrapped in a separate group.
+
+        Args:
+            context (dict): Representation context.
+            options (dict): artist modifiable options from dialog
+            loader_key (str): key to get separate configuration from Settings
+                ('reference_loader'|'import_loader')
+        """
+
+        options["attach_to_root"] = True
+        custom_naming = self.load_settings[loader_key]
+
+        if not custom_naming['namespace']:
+            raise LoadError("No namespace specified in "
+                            "Maya ReferenceLoader settings")
+        elif not custom_naming['group_name']:
+            self.log.debug("No custom group_name, no group will be created.")
+            options["attach_to_root"] = False
+
+        asset = context['asset']
+        subset = context['subset']
+        formatting_data = {
+            "asset_name": asset['name'],
+            "asset_type": asset['type'],
+            "folder": {
+                "name": asset["name"],
+            },
+            "subset": subset['name'],
+            "family": (
+                subset['data'].get('family') or
+                subset['data']['families'][0]
+            )
+        }
+
+        custom_namespace = custom_naming['namespace'].format(
+            **formatting_data
+        )
+
+        custom_group_name = custom_naming['group_name'].format(
+            **formatting_data
+        )
+
+        return custom_group_name, custom_namespace, options
+
 
 class ReferenceLoader(Loader):
     """A basic ReferenceLoader for Maya
@@ -102,41 +697,16 @@ class ReferenceLoader(Loader):
         namespace=None,
         options=None
     ):
-        assert os.path.exists(self.fname), "%s does not exist." % self.fname
+        path = self.filepath_from_context(context)
+        assert os.path.exists(path), "%s does not exist." 
% path - asset = context['asset'] - subset = context['subset'] - settings = get_project_settings(context['project']['name']) - custom_naming = settings['maya']['load']['reference_loader'] - loaded_containers = [] - - if not custom_naming['namespace']: - raise LoadError("No namespace specified in " - "Maya ReferenceLoader settings") - elif not custom_naming['group_name']: - raise LoadError("No group name specified in " - "Maya ReferenceLoader settings") - - formatting_data = { - "asset_name": asset['name'], - "asset_type": asset['type'], - "subset": subset['name'], - "family": ( - subset['data'].get('family') or - subset['data']['families'][0] - ) - } - - custom_namespace = custom_naming['namespace'].format( - **formatting_data - ) - - custom_group_name = custom_naming['group_name'].format( - **formatting_data - ) + custom_group_name, custom_namespace, options = \ + self.get_custom_namespace_and_group(context, options, + "reference_loader") count = options.get("count") or 1 + loaded_containers = [] for c in range(0, count): namespace = lib.get_custom_namespace(custom_namespace) group_name = "{}:{}".format( @@ -176,7 +746,6 @@ class ReferenceLoader(Loader): loaded_containers.append(container) self._organize_containers(nodes, container) c += 1 - namespace = None return loaded_containers @@ -186,6 +755,7 @@ class ReferenceLoader(Loader): def update(self, container, representation): from maya import cmds + from openpype.hosts.maya.api.lib import get_container_members node = container["objectName"] diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index 0bb1f186eb..7624aacd0f 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -18,13 +18,13 @@ from openpype.client import ( ) from openpype.pipeline import ( schema, - legacy_io, discover_loader_plugins, loaders_from_representation, load_container, update_container, remove_container, get_representation_path, + get_current_project_name, ) from openpype.hosts.maya.api.lib import ( matrix_equals, @@ -289,7 +289,7 @@ def update_package_version(container, version): """ # Versioning (from `core.maya.pipeline`) - project_name = legacy_io.active_project() + project_name = get_current_project_name() current_representation = get_representation_by_id( project_name, container["representation"] ) @@ -332,7 +332,7 @@ def update_package(set_container, representation): """ # Load the original package data - project_name = legacy_io.active_project() + project_name = get_current_project_name() current_representation = get_representation_by_id( project_name, set_container["representation"] ) @@ -380,7 +380,7 @@ def update_scene(set_container, containers, current_data, new_data, new_file): """ set_namespace = set_container['namespace'] - project_name = legacy_io.active_project() + project_name = get_current_project_name() # Update the setdress hierarchy alembic set_root = get_container_transforms(set_container, root=True) diff --git a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py index 689d7adb4f..4b1ea698a6 100644 --- a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py +++ b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreAutoLoadPlugins(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreAutoLoadPlugins(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs order = 9 - app_groups = ["maya"] + app_groups 
= {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/hooks/pre_copy_mel.py b/openpype/hosts/maya/hooks/pre_copy_mel.py index 6f90af4b7c..6cd2c69e20 100644 --- a/openpype/hosts/maya/hooks/pre_copy_mel.py +++ b/openpype/hosts/maya/hooks/pre_copy_mel.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.maya.lib import create_workspace_mel @@ -7,13 +7,15 @@ class PreCopyMel(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["maya"] + app_groups = {"maya", "mayapy"} + launch_types = {LaunchTypes.local} def execute(self): - project_name = self.launch_context.env.get("AVALON_PROJECT") + project_doc = self.data["project_doc"] workdir = self.launch_context.env.get("AVALON_WORKDIR") if not workdir: self.log.warning("BUG: Workdir is not filled.") return - create_workspace_mel(workdir, project_name) + project_settings = self.data["project_settings"] + create_workspace_mel(workdir, project_doc["name"], project_settings) diff --git a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py index 7582ce0591..1fe3c3ca2c 100644 --- a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py +++ b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs. order = 9 - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/lib.py b/openpype/hosts/maya/lib.py index ffb2f0b27c..765c60381b 100644 --- a/openpype/hosts/maya/lib.py +++ b/openpype/hosts/maya/lib.py @@ -3,7 +3,7 @@ from openpype.settings import get_project_settings from openpype.lib import Logger -def create_workspace_mel(workdir, project_name): +def create_workspace_mel(workdir, project_name, project_settings=None): dst_filepath = os.path.join(workdir, "workspace.mel") if os.path.exists(dst_filepath): return @@ -11,8 +11,9 @@ def create_workspace_mel(workdir, project_name): if not os.path.exists(workdir): os.makedirs(workdir) - project_setting = get_project_settings(project_name) - mel_script = project_setting["maya"].get("mel_workspace") + if not project_settings: + project_settings = get_project_settings(project_name) + mel_script = project_settings["maya"].get("mel_workspace") # Skip if mel script in settings is empty if not mel_script: diff --git a/openpype/hosts/maya/plugins/create/convert_legacy.py b/openpype/hosts/maya/plugins/create/convert_legacy.py new file mode 100644 index 0000000000..cd8faf291b --- /dev/null +++ b/openpype/hosts/maya/plugins/create/convert_legacy.py @@ -0,0 +1,178 @@ +from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin +from openpype.hosts.maya.api import plugin +from openpype.hosts.maya.api.lib import read + +from openpype.client import get_asset_by_name + +from maya import cmds +from maya.app.renderSetup.model import renderSetup + + +class MayaLegacyConvertor(SubsetConvertorPlugin, + plugin.MayaCreatorBase): + """Find and convert any legacy subsets in the scene. 
+
+    This Convertor will find all legacy subsets in the scene and will
+    transform them to the current system. Since the old subsets don't
+    retain any information about their original creators, the only mapping
+    we can do is based on their families.
+
+    Its limitation is that you can have multiple creators creating subsets
+    of the same family, and there is no way to handle that. This code should
+    nevertheless cover all creators that came with OpenPype.
+
+    """
+    identifier = "io.openpype.creators.maya.legacy"
+
+    # Cases where the identifier or new family doesn't correspond to the
+    # original family on the legacy instances
+    special_family_conversions = {
+        "rendering": "io.openpype.creators.maya.renderlayer",
+    }
+
+    def find_instances(self):
+
+        self.cache_subsets(self.collection_shared_data)
+        legacy = self.collection_shared_data.get("maya_cached_legacy_subsets")
+        if not legacy:
+            return
+
+        self.add_convertor_item("Convert legacy instances")
+
+    def convert(self):
+        self.remove_convertor_item()
+
+        # We can't use the collected shared data cache here, so we
+        # re-query directly to convert everything found.
+        cache = {}
+        self.cache_subsets(cache)
+        legacy = cache.get("maya_cached_legacy_subsets")
+        if not legacy:
+            return
+
+        # From all current new style manual creators find the mapping
+        # from family to identifier
+        family_to_id = {}
+        for identifier, creator in self.create_context.creators.items():
+            family = getattr(creator, "family", None)
+            if not family:
+                continue
+
+            if family in family_to_id:
+                # We have a clash of family -> identifier. Multiple
+                # new style creators use the same family
+                self.log.warning("Clash on family->identifier: "
+                                 "{}".format(identifier))
+            family_to_id[family] = identifier
+
+        family_to_id.update(self.special_family_conversions)
+
+        # We also embed the current 'task' into the instance since legacy
+        # instances didn't store that data on the instances. The old-style
+        # logic was effectively live to the current task to begin with.
+        data = dict()
+        data["task"] = self.create_context.get_current_task_name()
+        for family, instance_nodes in legacy.items():
+            if family not in family_to_id:
+                self.log.warning(
+                    "Unable to convert legacy instance with family '{}'"
+                    " because no new-style creator has a matching family"
+                    "".format(family)
+                )
+                continue
+
+            creator_id = family_to_id[family]
+            creator = self.create_context.creators[creator_id]
+            data["creator_identifier"] = creator_id
+
+            if isinstance(creator, plugin.RenderlayerCreator):
+                self._convert_per_renderlayer(instance_nodes, data, creator)
+            else:
+                self._convert_regular(instance_nodes, data)
+
+    def _convert_regular(self, instance_nodes, data):
+        # We only imprint the creator identifier for it to identify
+        # as the new style creator
+        for instance_node in instance_nodes:
+            self.imprint_instance_node(instance_node,
+                                       data=data.copy())
+
+    def _convert_per_renderlayer(self, instance_nodes, data, creator):
+        # Split the instance into an instance per layer
+        rs = renderSetup.instance()
+        layers = rs.getRenderLayers()
+        if not layers:
+            self.log.error(
+                "Can't convert legacy renderlayer instance because no"
+                " renderSetup layers exist in the scene." 
+            )
+            return
+
+        creator_attribute_names = {
+            attr_def.key for attr_def in creator.get_instance_attr_defs()
+        }
+
+        for instance_node in instance_nodes:
+
+            # Ensure we have the new style singleton node generated
+            # TODO: Make function public
+            singleton_node = creator._get_singleton_node()
+            if singleton_node:
+                self.log.error(
+                    "Can't convert legacy renderlayer instance '{}' because"
+                    " new style instance '{}' already exists".format(
+                        instance_node,
+                        singleton_node
+                    )
+                )
+                continue
+
+            creator.create_singleton_node()
+
+            # We are creating new nodes to replace the original instance
+            # Copy the attributes of the original instance to the new node
+            original_data = read(instance_node)
+
+            # The family gets converted to the new family (this is due to
+            # "rendering" family being converted to "renderlayer" family)
+            original_data["family"] = creator.family
+
+            # Recreate the subset name, since otherwise it would be
+            # `renderingMain` instead of the correct `renderMain`
+            project_name = self.create_context.get_current_project_name()
+            asset_doc = get_asset_by_name(project_name,
+                                          original_data["asset"])
+            subset_name = creator.get_subset_name(
+                original_data["variant"],
+                data["task"],
+                asset_doc,
+                project_name)
+            original_data["subset"] = subset_name
+
+            # Convert to creator attributes when relevant
+            creator_attributes = {}
+            for key in list(original_data.keys()):
+                # Iterate in order of the original attributes to preserve order
+                # in the output creator attributes
+                if key in creator_attribute_names:
+                    creator_attributes[key] = original_data.pop(key)
+            original_data["creator_attributes"] = creator_attributes
+
+            # Create or update an instance node per renderSetup layer
+            for layer in layers:
+                layer_instance_node = creator.find_layer_instance_node(layer)
+                if not layer_instance_node:
+                    # TODO: Make function public
+                    layer_instance_node = creator._create_layer_instance_node(
+                        layer
+                    )
+
+                # Transfer the main attributes of the original instance
+                layer_data = original_data.copy()
+                layer_data.update(data)
+
+                self.imprint_instance_node(layer_instance_node,
+                                           data=layer_data)
+
+            # Delete the legacy instance node
+            cmds.delete(instance_node)
diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py
index 095cbcdd64..115c73c0d3 100644
--- a/openpype/hosts/maya/plugins/create/create_animation.py
+++ b/openpype/hosts/maya/plugins/create/create_animation.py
@@ -2,59 +2,88 @@ from openpype.hosts.maya.api import (
     lib,
     plugin
 )
+from openpype.lib import (
+    BoolDef,
+    TextDef
+)
 
 
-class CreateAnimation(plugin.Creator):
-    """Animation output for character rigs"""
-
-    # We hide the animation creator from the UI since the creation of it
-    # is automated upon loading a rig. There's an inventory action to recreate
-    # it for loaded rigs if by chance someone deleted the animation instance.
-    # Note: This setting is actually applied from project settings
-    enabled = False
+class CreateAnimation(plugin.MayaHiddenCreator):
+    """Animation output for character rigs
 
+    We hide the animation creator from the UI since the creation of it is
+    automated upon loading a rig. There's an inventory action to recreate it
+    for loaded rigs if by chance someone deleted the animation instance. 
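+
+    The instance attributes mirror the legacy animation options (frame
+    range, color sets, world-space export, custom attributes and so on).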
+ """ + identifier = "io.openpype.creators.maya.animation" name = "animationDefault" label = "Animation" family = "animation" icon = "male" + write_color_sets = False write_face_sets = False include_parent_hierarchy = False include_user_defined_attributes = False - def __init__(self, *args, **kwargs): - super(CreateAnimation, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # create an ordered dict with the existing data first + defs = lib.collect_animation_defs() - # get basic animation data : start / end / handles / steps - for key, value in lib.collect_animation_data().items(): - self.data[key] = value - - # Write vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - self.data["writeFaceSets"] = self.write_face_sets - - # Include only renderable visible shapes. - # Skips locators and empty transforms - self.data["renderableOnly"] = False - - # Include only nodes that are visible at least once during the - # frame range. - self.data["visibleOnly"] = False - - # Include the groups above the out_SET content - self.data["includeParentHierarchy"] = self.include_parent_hierarchy - - # Default to exporting world-space - self.data["worldSpace"] = True + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("writeNormals", + label="Write normals", + tooltip="Write normals with the deforming geometry", + default=True), + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=self.include_parent_hierarchy), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("includeUserDefinedAttributes", + label="Include User Defined Attributes", + default=self.include_user_defined_attributes), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) + # TODO: Implement these on a Deadline plug-in instead? + """ # Default to not send to farm. self.data["farm"] = False self.data["priority"] = 50 + """ - # Default to write normals. - self.data["writeNormals"] = True + return defs - value = self.include_user_defined_attributes - self.data["includeUserDefinedAttributes"] = value + def apply_settings(self, project_settings): + super(CreateAnimation, self).apply_settings(project_settings) + # Hardcoding creator to be enabled due to existing settings would + # disable the creator causing the creator plugin to not be + # discoverable. 
+ self.enabled = True diff --git a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py index 2afb897e94..1ef132725f 100644 --- a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py +++ b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py @@ -2,17 +2,21 @@ from openpype.hosts.maya.api import ( lib, plugin ) - -from maya import cmds +from openpype.lib import ( + NumberDef, + BoolDef +) -class CreateArnoldSceneSource(plugin.Creator): +class CreateArnoldSceneSource(plugin.MayaCreator): """Arnold Scene Source""" - name = "ass" + identifier = "io.openpype.creators.maya.ass" label = "Arnold Scene Source" family = "ass" icon = "cube" + settings_name = "CreateAss" + expandProcedurals = False motionBlur = True motionBlurKeys = 2 @@ -28,39 +32,71 @@ class CreateArnoldSceneSource(plugin.Creator): maskColor_manager = False maskOperator = False - def __init__(self, *args, **kwargs): - super(CreateArnoldSceneSource, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - self.data["expandProcedurals"] = self.expandProcedurals - self.data["motionBlur"] = self.motionBlur - self.data["motionBlurKeys"] = self.motionBlurKeys - self.data["motionBlurLength"] = self.motionBlurLength + defs.extend([ + BoolDef("expandProcedural", + label="Expand Procedural", + default=self.expandProcedurals), + BoolDef("motionBlur", + label="Motion Blur", + default=self.motionBlur), + NumberDef("motionBlurKeys", + label="Motion Blur Keys", + decimals=0, + default=self.motionBlurKeys), + NumberDef("motionBlurLength", + label="Motion Blur Length", + decimals=3, + default=self.motionBlurLength), - # Masks - self.data["maskOptions"] = self.maskOptions - self.data["maskCamera"] = self.maskCamera - self.data["maskLight"] = self.maskLight - self.data["maskShape"] = self.maskShape - self.data["maskShader"] = self.maskShader - self.data["maskOverride"] = self.maskOverride - self.data["maskDriver"] = self.maskDriver - self.data["maskFilter"] = self.maskFilter - self.data["maskColor_manager"] = self.maskColor_manager - self.data["maskOperator"] = self.maskOperator + # Masks + BoolDef("maskOptions", + label="Export Options", + default=self.maskOptions), + BoolDef("maskCamera", + label="Export Cameras", + default=self.maskCamera), + BoolDef("maskLight", + label="Export Lights", + default=self.maskLight), + BoolDef("maskShape", + label="Export Shapes", + default=self.maskShape), + BoolDef("maskShader", + label="Export Shaders", + default=self.maskShader), + BoolDef("maskOverride", + label="Export Override Nodes", + default=self.maskOverride), + BoolDef("maskDriver", + label="Export Drivers", + default=self.maskDriver), + BoolDef("maskFilter", + label="Export Filters", + default=self.maskFilter), + BoolDef("maskOperator", + label="Export Operators", + default=self.maskOperator), + BoolDef("maskColor_manager", + label="Export Color Managers", + default=self.maskColor_manager), + ]) - def process(self): - instance = super(CreateArnoldSceneSource, self).process() + return defs - nodes = [] + def create(self, subset_name, instance_data, pre_create_data): - if (self.options or {}).get("useSelection"): - nodes = cmds.ls(selection=True) + from maya import cmds - cmds.sets(nodes, rm=instance) + instance = super(CreateArnoldSceneSource, self).create( + subset_name, instance_data, pre_create_data + ) - assContent = 
cmds.sets(name=instance + "_content_SET") - assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True) - cmds.sets([assContent, assProxy], forceElement=instance) + instance_node = instance.get("instance_node") + + content = cmds.sets(name=instance_node + "_content_SET", empty=True) + proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True) + cmds.sets([content, proxy], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_assembly.py b/openpype/hosts/maya/plugins/create/create_assembly.py index ff5e1d45c4..813fe4da04 100644 --- a/openpype/hosts/maya/plugins/create/create_assembly.py +++ b/openpype/hosts/maya/plugins/create/create_assembly.py @@ -1,10 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateAssembly(plugin.Creator): +class CreateAssembly(plugin.MayaCreator): """A grouped package of loaded content""" - name = "assembly" + identifier = "io.openpype.creators.maya.assembly" label = "Assembly" family = "assembly" icon = "cubes" diff --git a/openpype/hosts/maya/plugins/create/create_camera.py b/openpype/hosts/maya/plugins/create/create_camera.py index 8b2c881036..0219f56330 100644 --- a/openpype/hosts/maya/plugins/create/create_camera.py +++ b/openpype/hosts/maya/plugins/create/create_camera.py @@ -2,33 +2,35 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import BoolDef -class CreateCamera(plugin.Creator): +class CreateCamera(plugin.MayaCreator): """Single baked camera""" - name = "cameraMain" + identifier = "io.openpype.creators.maya.camera" label = "Camera" family = "camera" icon = "video-camera" - def __init__(self, *args, **kwargs): - super(CreateCamera, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # get basic animation data : start / end / handles / steps - animation_data = lib.collect_animation_data() - for key, value in animation_data.items(): - self.data[key] = value + defs = lib.collect_animation_defs() - # Bake to world space by default, when this is False it will also - # include the parent hierarchy in the baked results - self.data['bakeToWorldSpace'] = True + defs.extend([ + BoolDef("bakeToWorldSpace", + label="Bake to World-Space", + tooltip="Bake to World-Space", + default=True), + ]) + + return defs -class CreateCameraRig(plugin.Creator): +class CreateCameraRig(plugin.MayaCreator): """Complex hierarchy with camera.""" - name = "camerarigMain" + identifier = "io.openpype.creators.maya.camerarig" label = "Camera Rig" family = "camerarig" icon = "video-camera" diff --git a/openpype/hosts/maya/plugins/create/create_layout.py b/openpype/hosts/maya/plugins/create/create_layout.py index 1768a3d49e..168743d4dc 100644 --- a/openpype/hosts/maya/plugins/create/create_layout.py +++ b/openpype/hosts/maya/plugins/create/create_layout.py @@ -1,16 +1,21 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef -class CreateLayout(plugin.Creator): +class CreateLayout(plugin.MayaCreator): """A grouped package of loaded content""" - name = "layoutMain" + identifier = "io.openpype.creators.maya.layout" label = "Layout" family = "layout" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateLayout, self).__init__(*args, **kwargs) - # enable this when you want to - # publish group of loaded asset - self.data["groupLoadedAssets"] = False + def get_instance_attr_defs(self): + + return [ + BoolDef("groupLoadedAssets", + label="Group Loaded Assets", + tooltip="Enable this when you want to publish group of " + "loaded asset", + default=False) + ] diff 
--git a/openpype/hosts/maya/plugins/create/create_look.py b/openpype/hosts/maya/plugins/create/create_look.py index 51b0b8819a..11a69151fd 100644 --- a/openpype/hosts/maya/plugins/create/create_look.py +++ b/openpype/hosts/maya/plugins/create/create_look.py @@ -1,29 +1,47 @@ from openpype.hosts.maya.api import ( - lib, - plugin + plugin, + lib +) +from openpype.lib import ( + BoolDef, + TextDef ) -class CreateLook(plugin.Creator): +class CreateLook(plugin.MayaCreator): """Shader connections defining shape look""" - name = "look" + identifier = "io.openpype.creators.maya.look" label = "Look" family = "look" icon = "paint-brush" + make_tx = True rs_tex = False - def __init__(self, *args, **kwargs): - super(CreateLook, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["renderlayer"] = lib.get_current_renderlayer() + return [ + # TODO: This value should actually get set on create! + TextDef("renderLayer", + # TODO: Bug: Hidden attribute's label is still shown in UI? + hidden=True, + default=lib.get_current_renderlayer(), + label="Renderlayer", + tooltip="Renderlayer to extract the look from"), + BoolDef("maketx", + label="MakeTX", + tooltip="Whether to generate .tx files for your textures", + default=self.make_tx), + BoolDef("rstex", + label="Convert textures to .rstex", + tooltip="Whether to generate Redshift .rstex files for " + "your textures", + default=self.rs_tex) + ] - # Whether to automatically convert the textures to .tx upon publish. - self.data["maketx"] = self.make_tx - # Whether to automatically convert the textures to .rstex upon publish. - self.data["rstex"] = self.rs_tex - # Enable users to force a copy. - # - on Windows is "forceCopy" always changed to `True` because of - # windows implementation of hardlinks - self.data["forceCopy"] = False + def get_pre_create_attr_defs(self): + # Show same attributes on create but include use selection + defs = super(CreateLook, self).get_pre_create_attr_defs() + defs.extend(self.get_instance_attr_defs()) + return defs diff --git a/openpype/hosts/maya/plugins/create/create_matchmove.py b/openpype/hosts/maya/plugins/create/create_matchmove.py new file mode 100644 index 0000000000..e64eb6a471 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_matchmove.py @@ -0,0 +1,32 @@ +from openpype.hosts.maya.api import ( + lib, + plugin +) +from openpype.lib import BoolDef + + +class CreateMatchmove(plugin.MayaCreator): + """Instance for more complex setup of cameras. + + Might contain multiple cameras, geometries etc. 
+ + It is expected to be extracted into .abc or .ma + """ + + identifier = "io.openpype.creators.maya.matchmove" + label = "Matchmove" + family = "matchmove" + icon = "video-camera" + + def get_instance_attr_defs(self): + + defs = lib.collect_animation_defs() + + defs.extend([ + BoolDef("bakeToWorldSpace", + label="Bake Cameras to World-Space", + tooltip="Bake Cameras to World-Space", + default=True), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_maya_usd.py b/openpype/hosts/maya/plugins/create/create_maya_usd.py new file mode 100644 index 0000000000..cc9a14bd3a --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_maya_usd.py @@ -0,0 +1,102 @@ +from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + EnumDef, + TextDef +) + +from maya import cmds + + +class CreateMayaUsd(plugin.MayaCreator): + """Create Maya USD Export""" + + identifier = "io.openpype.creators.maya.mayausd" + label = "Maya USD" + family = "usd" + icon = "cubes" + description = "Create Maya USD Export" + + cache = {} + + def get_publish_families(self): + return ["usd", "mayaUsd"] + + def get_instance_attr_defs(self): + + if "jobContextItems" not in self.cache: + # Query once instead of per instance + job_context_items = {} + try: + cmds.loadPlugin("mayaUsdPlugin", quiet=True) + job_context_items = { + cmds.mayaUSDListJobContexts(jobContext=name): name + for name in cmds.mayaUSDListJobContexts(export=True) or [] + } + except RuntimeError: + # Likely `mayaUsdPlugin` plug-in not available + self.log.warning("Unable to retrieve available job " + "contexts for `mayaUsdPlugin` exports") + + if not job_context_items: + # enumdef multiselection may not be empty + job_context_items = [""] + + self.cache["jobContextItems"] = job_context_items + + defs = lib.collect_animation_defs() + defs.extend([ + EnumDef("defaultUSDFormat", + label="File format", + items={ + "usdc": "Binary", + "usda": "ASCII" + }, + default="usdc"), + BoolDef("stripNamespaces", + label="Strip Namespaces", + tooltip=( + "Remove namespaces during export. By default, " + "namespaces are exported to the USD file in the " + "following format: nameSpaceExample_pPlatonic1" + ), + default=True), + BoolDef("mergeTransformAndShape", + label="Merge Transform and Shape", + tooltip=( + "Combine Maya transform and shape into a single USD" + "prim that has transform and geometry, for all" + " \"geometric primitives\" (gprims).\n" + "This results in smaller and faster scenes. Gprims " + "will be \"unpacked\" back into transform and shape " + "nodes when imported into Maya from USD." + ), + default=True), + BoolDef("includeUserDefinedAttributes", + label="Include User Defined Attributes", + tooltip=( + "Whether to include all custom maya attributes found " + "on nodes as metadata (userProperties) in USD." + ), + default=False), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + default="", + placeholder="prefix1, prefix2"), + EnumDef("jobContext", + label="Job Context", + items=self.cache["jobContextItems"], + tooltip=( + "Specifies an additional export context to handle.\n" + "These usually contain extra schemas, primitives,\n" + "and materials that are to be exported for a " + "specific\ntask, a target renderer for example." 
+ ), + multiselection=True), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_mayaascii.py b/openpype/hosts/maya/plugins/create/create_mayascene.py similarity index 65% rename from openpype/hosts/maya/plugins/create/create_mayaascii.py rename to openpype/hosts/maya/plugins/create/create_mayascene.py index f54f2df812..b61c97aebf 100644 --- a/openpype/hosts/maya/plugins/create/create_mayaascii.py +++ b/openpype/hosts/maya/plugins/create/create_mayascene.py @@ -1,9 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateMayaScene(plugin.Creator): +class CreateMayaScene(plugin.MayaCreator): """Raw Maya Scene file export""" + identifier = "io.openpype.creators.maya.mayascene" name = "mayaScene" label = "Maya Scene" family = "mayaScene" diff --git a/openpype/hosts/maya/plugins/create/create_model.py b/openpype/hosts/maya/plugins/create/create_model.py index 520e962f74..5c3dd04af0 100644 --- a/openpype/hosts/maya/plugins/create/create_model.py +++ b/openpype/hosts/maya/plugins/create/create_model.py @@ -1,26 +1,43 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import ( + BoolDef, + TextDef +) -class CreateModel(plugin.Creator): +class CreateModel(plugin.MayaCreator): """Polygonal static geometry""" - name = "modelMain" + identifier = "io.openpype.creators.maya.model" label = "Model" family = "model" icon = "cube" - defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"] + default_variants = ["Main", "Proxy", "_MD", "_HD", "_LD"] + write_color_sets = False write_face_sets = False - def __init__(self, *args, **kwargs): - super(CreateModel, self).__init__(*args, **kwargs) - # Vertex colors with the geometry - self.data["writeColorSets"] = self.write_color_sets - self.data["writeFaceSets"] = self.write_face_sets + def get_instance_attr_defs(self): - # Include attributes by attribute name or prefix - self.data["attr"] = "" - self.data["attrPrefix"] = "" - - # Whether to include parent hierarchy of nodes in the instance - self.data["includeParentHierarchy"] = False + return [ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ] diff --git a/openpype/hosts/maya/plugins/create/create_multishot_layout.py b/openpype/hosts/maya/plugins/create/create_multishot_layout.py new file mode 100644 index 0000000000..0b027c02ea --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multishot_layout.py @@ -0,0 +1,211 @@ +from ayon_api import ( + get_folder_by_name, + get_folder_by_path, + get_folders, +) +from maya import cmds # noqa: F401 + +from openpype import AYON_SERVER_ENABLED +from openpype.client import get_assets +from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef, EnumDef, TextDef +from openpype.pipeline import ( + Creator, + get_current_asset_name, + get_current_project_name, +) +from openpype.pipeline.create import CreatorError + + +class CreateMultishotLayout(plugin.MayaCreator): + """Create a multi-shot layout in the Maya scene. 
+
+    This creator will create a Camera Sequencer in the Maya scene based on
+    the shots found under the specified folder. The shots will be added to
+    the sequencer in the order of their clipIn and clipOut values. For each
+    shot a Layout will be created.
+
+    """
+    identifier = "io.openpype.creators.maya.multishotlayout"
+    label = "Multi-shot Layout"
+    family = "layout"
+    icon = "project-diagram"
+
+    def get_pre_create_attr_defs(self):
+        # Present artist with a list of parents of the current context
+        # to choose from. This will be used to get the shots under the
+        # selected folder to create the Camera Sequencer.
+
+        """
+        Todo: `get_folder_by_name` should be switched to `get_folder_by_path`
+        once the fork to pure AYON is done.
+
+        Warning: this will not work for projects where the asset name
+        is not unique across the project until the switch mentioned
+        above is done.
+        """
+
+        current_folder = get_folder_by_name(
+            project_name=get_current_project_name(),
+            folder_name=get_current_asset_name(),
+        )
+
+        current_path_parts = current_folder["path"].split("/")
+
+        # populate the list with parents of the current folder
+        # this will create menu items like:
+        # [
+        #     {
+        #         "value": "",
+        #         "label": "project (shots directly under the project)"
+        #     }, {
+        #         "value": "shots/shot_01", "label": "shot_01 (current)"
+        #     }, {
+        #         "value": "shots", "label": "shots"
+        #     }
+        # ]
+
+        # add the project as the first item
+        items_with_label = [
+            {
+                "label": f"{self.project_name} "
+                         "(shots directly under the project)",
+                "value": ""
+            }
+        ]
+
+        # go through the current folder path and add each part to the list,
+        # but mark the current folder.
+        for part_idx, part in enumerate(current_path_parts):
+            label = part
+            if label == current_folder["name"]:
+                label = f"{label} (current)"
+
+            value = "/".join(current_path_parts[:part_idx + 1])
+
+            items_with_label.append({"label": label, "value": value})
+
+        return [
+            EnumDef("shotParent",
+                    default=current_folder["name"],
+                    label="Shot Parent Folder",
+                    items=items_with_label,
+                    ),
+            BoolDef("groupLoadedAssets",
+                    label="Group Loaded Assets",
+                    tooltip="Enable this when you want to publish a group "
+                            "of loaded assets",
+                    default=False),
+            TextDef("taskName",
+                    label="Associated Task Name",
+                    tooltip=("Task name to be associated "
+                             "with the created Layout"),
+                    default="layout"),
+        ]
+
+    def create(self, subset_name, instance_data, pre_create_data):
+        shots = list(
+            self.get_related_shots(folder_path=pre_create_data["shotParent"])
+        )
+        if not shots:
+            # There are no shot folders under the specified folder.
+            # We raise an error here, but in the future we might want to
+            # create new shot folders by publishing the layouts and shots
+            # defined in the sequencer - a sort of editorial publish
+            # inside Maya.
+            raise CreatorError((
+                "No shots found under the specified "
+                f"folder: {pre_create_data['shotParent']}."))
+
+        # Get layout creator
+        layout_creator_id = "io.openpype.creators.maya.layout"
+        layout_creator: Creator = self.create_context.creators.get(
+            layout_creator_id)
+        if not layout_creator:
+            raise CreatorError(
+                f"Creator {layout_creator_id} not found.")
+
+        # Get OpenPype style asset documents for the shots
+        op_asset_docs = get_assets(
+            self.project_name, [s["id"] for s in shots])
+        asset_docs_by_id = {doc["_id"]: doc for doc in op_asset_docs}
+        for shot in shots:
+            # we are setting shot name to be displayed in the sequencer to
+            # `shot name (shot label)` if the label is set, otherwise just
+            # `shot name`. 
So far, labels are used only when the name is set + # with characters that are not allowed in the shot name. + if not shot["active"]: + continue + + # get task for shot + asset_doc = asset_docs_by_id[shot["id"]] + + tasks = asset_doc.get("data").get("tasks").keys() + layout_task = None + if pre_create_data["taskName"] in tasks: + layout_task = pre_create_data["taskName"] + + shot_name = f"{shot['name']}%s" % ( + f" ({shot['label']})" if shot["label"] else "") + cmds.shot(sequenceStartTime=shot["attrib"]["clipIn"], + sequenceEndTime=shot["attrib"]["clipOut"], + shotName=shot_name) + + # Create layout instance by the layout creator + + instance_data = { + "asset": shot["name"], + "variant": layout_creator.get_default_variant() + } + if layout_task: + instance_data["task"] = layout_task + + layout_creator.create( + subset_name=layout_creator.get_subset_name( + layout_creator.get_default_variant(), + self.create_context.get_current_task_name(), + asset_doc, + self.project_name), + instance_data=instance_data, + pre_create_data={ + "groupLoadedAssets": pre_create_data["groupLoadedAssets"] + } + ) + + def get_related_shots(self, folder_path: str): + """Get all shots related to the current asset. + + Get all folders of type Shot under specified folder. + + Args: + folder_path (str): Path of the folder. + + Returns: + list: List of dicts with folder data. + + """ + # if folder_path is None, project is selected as a root + # and its name is used as a parent id + parent_id = self.project_name + if folder_path: + current_folder = get_folder_by_path( + project_name=self.project_name, + folder_path=folder_path, + ) + parent_id = current_folder["id"] + + # get all child folders of the current one + return get_folders( + project_name=self.project_name, + parent_ids=[parent_id], + fields=[ + "attrib.clipIn", "attrib.clipOut", + "attrib.frameStart", "attrib.frameEnd", + "name", "label", "path", "folderType", "id" + ] + ) + + +# blast this creator if Ayon server is not enabled +if not AYON_SERVER_ENABLED: + del CreateMultishotLayout diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_look.py b/openpype/hosts/maya/plugins/create/create_multiverse_look.py index f47c88a93b..f27eb57fc1 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_look.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_look.py @@ -1,15 +1,27 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import ( + BoolDef, + EnumDef +) -class CreateMultiverseLook(plugin.Creator): +class CreateMultiverseLook(plugin.MayaCreator): """Create Multiverse Look""" - name = "mvLook" + identifier = "io.openpype.creators.maya.mvlook" label = "Multiverse Look" family = "mvLook" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseLook, self).__init__(*args, **kwargs) - self.data["fileFormat"] = ["usda", "usd"] - self.data["publishMipMap"] = True + def get_instance_attr_defs(self): + + return [ + EnumDef("fileFormat", + label="File Format", + tooltip="USD export file format", + items=["usda", "usd"], + default="usda"), + BoolDef("publishMipMap", + label="Publish MipMap", + default=True), + ] diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py index 8cd76b5f40..2963d4d5b6 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py @@ -1,53 +1,139 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + 
BoolDef, + NumberDef, + TextDef, + EnumDef +) -class CreateMultiverseUsd(plugin.Creator): +class CreateMultiverseUsd(plugin.MayaCreator): """Create Multiverse USD Asset""" - name = "mvUsdMain" + identifier = "io.openpype.creators.maya.mvusdasset" label = "Multiverse USD Asset" family = "usd" icon = "cubes" + description = "Create Multiverse USD Asset" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsd, self).__init__(*args, **kwargs) + def get_publish_families(self): + return ["usd", "mvUsd"] - # Add animation data first, since it maintains order. - self.data.update(lib.collect_animation_data(True)) + def get_instance_attr_defs(self): - self.data["fileFormat"] = ["usd", "usda", "usdz"] - self.data["stripNamespaces"] = True - self.data["mergeTransformAndShape"] = False - self.data["writeAncestors"] = True - self.data["flattenParentXforms"] = False - self.data["writeSparseOverrides"] = False - self.data["useMetaPrimPath"] = False - self.data["customRootPath"] = '' - self.data["customAttributes"] = '' - self.data["nodeTypesToIgnore"] = '' - self.data["writeMeshes"] = True - self.data["writeCurves"] = True - self.data["writeParticles"] = True - self.data["writeCameras"] = False - self.data["writeLights"] = False - self.data["writeJoints"] = False - self.data["writeCollections"] = False - self.data["writePositions"] = True - self.data["writeNormals"] = True - self.data["writeUVs"] = True - self.data["writeColorSets"] = False - self.data["writeTangents"] = False - self.data["writeRefPositions"] = True - self.data["writeBlendShapes"] = False - self.data["writeDisplayColor"] = True - self.data["writeSkinWeights"] = False - self.data["writeMaterialAssignment"] = False - self.data["writeHardwareShader"] = False - self.data["writeShadingNetworks"] = False - self.data["writeTransformMatrix"] = True - self.data["writeUsdAttributes"] = True - self.data["writeInstancesAsReferences"] = False - self.data["timeVaryingTopology"] = False - self.data["customMaterialNamespace"] = '' - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda", "usdz"], + default="usd"), + BoolDef("stripNamespaces", + label="Strip Namespaces", + default=True), + BoolDef("mergeTransformAndShape", + label="Merge Transform and Shape", + default=False), + BoolDef("writeAncestors", + label="Write Ancestors", + default=True), + BoolDef("flattenParentXforms", + label="Flatten Parent Xforms", + default=False), + BoolDef("writeSparseOverrides", + label="Write Sparse Overrides", + default=False), + BoolDef("useMetaPrimPath", + label="Use Meta Prim Path", + default=False), + TextDef("customRootPath", + label="Custom Root Path", + default=''), + TextDef("customAttributes", + label="Custom Attributes", + tooltip="Comma-separated list of attribute names", + default=''), + TextDef("nodeTypesToIgnore", + label="Node Types to Ignore", + tooltip="Comma-separated list of node types to be ignored", + default=''), + BoolDef("writeMeshes", + label="Write Meshes", + default=True), + BoolDef("writeCurves", + label="Write Curves", + default=True), + BoolDef("writeParticles", + label="Write Particles", + default=True), + BoolDef("writeCameras", + label="Write Cameras", + default=False), + BoolDef("writeLights", + label="Write Lights", + default=False), + BoolDef("writeJoints", + label="Write Joints", + default=False), + BoolDef("writeCollections", + label="Write Collections", + default=False), + 
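# Note: the remaining write* toggles below mirror the Multiverse
+            # USD export options one-to-one; the defaults match the values
+            # the legacy creator used to set in self.data.
+            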
BoolDef("writePositions", + label="Write Positions", + default=True), + BoolDef("writeNormals", + label="Write Normals", + default=True), + BoolDef("writeUVs", + label="Write UVs", + default=True), + BoolDef("writeColorSets", + label="Write Color Sets", + default=False), + BoolDef("writeTangents", + label="Write Tangents", + default=False), + BoolDef("writeRefPositions", + label="Write Ref Positions", + default=True), + BoolDef("writeBlendShapes", + label="Write BlendShapes", + default=False), + BoolDef("writeDisplayColor", + label="Write Display Color", + default=True), + BoolDef("writeSkinWeights", + label="Write Skin Weights", + default=False), + BoolDef("writeMaterialAssignment", + label="Write Material Assignment", + default=False), + BoolDef("writeHardwareShader", + label="Write Hardware Shader", + default=False), + BoolDef("writeShadingNetworks", + label="Write Shading Networks", + default=False), + BoolDef("writeTransformMatrix", + label="Write Transform Matrix", + default=True), + BoolDef("writeUsdAttributes", + label="Write USD Attributes", + default=True), + BoolDef("writeInstancesAsReferences", + label="Write Instances as References", + default=False), + BoolDef("timeVaryingTopology", + label="Time Varying Topology", + default=False), + TextDef("customMaterialNamespace", + label="Custom Material Namespace", + default=''), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py index ed466a8068..66ddd83eda 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py @@ -1,26 +1,48 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) -class CreateMultiverseUsdComp(plugin.Creator): +class CreateMultiverseUsdComp(plugin.MayaCreator): """Create Multiverse USD Composition""" - name = "mvUsdCompositionMain" + identifier = "io.openpype.creators.maya.mvusdcomposition" label = "Multiverse USD Composition" family = "mvUsdComposition" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data first, since it maintains order. 
- self.data.update(lib.collect_animation_data(True)) + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda"], + default="usd"), + BoolDef("stripNamespaces", + label="Strip Namespaces", + default=False), + BoolDef("mergeTransformAndShape", + label="Merge Transform and Shape", + default=False), + BoolDef("flattenContent", + label="Flatten Content", + default=False), + BoolDef("writeAsCompoundLayers", + label="Write As Compound Layers", + default=False), + BoolDef("writePendingOverrides", + label="Write Pending Overrides", + default=False), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) - # Order of `fileFormat` must match extract_multiverse_usd_comp.py - self.data["fileFormat"] = ["usda", "usd"] - self.data["stripNamespaces"] = False - self.data["mergeTransformAndShape"] = False - self.data["flattenContent"] = False - self.data["writeAsCompoundLayers"] = False - self.data["writePendingOverrides"] = False - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + return defs diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py index 06e22df295..166dbf6515 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py @@ -1,30 +1,59 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) -class CreateMultiverseUsdOver(plugin.Creator): +class CreateMultiverseUsdOver(plugin.MayaCreator): """Create Multiverse USD Override""" - name = "mvUsdOverrideMain" + identifier = "io.openpype.creators.maya.mvusdoverride" label = "Multiverse USD Override" family = "mvUsdOverride" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda"], + default="usd"), + BoolDef("writeAll", + label="Write All", + default=False), + BoolDef("writeTransforms", + label="Write Transforms", + default=True), + BoolDef("writeVisibility", + label="Write Visibility", + default=True), + BoolDef("writeAttributes", + label="Write Attributes", + default=True), + BoolDef("writeMaterials", + label="Write Materials", + default=True), + BoolDef("writeVariants", + label="Write Variants", + default=True), + BoolDef("writeVariantsDefinition", + label="Write Variants Definition", + default=True), + BoolDef("writeActiveState", + label="Write Active State", + default=True), + BoolDef("writeNamespaces", + label="Write Namespaces", + default=False), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) - # Add animation data first, since it maintains order. 
- self.data.update(lib.collect_animation_data(True)) - - # Order of `fileFormat` must match extract_multiverse_usd_over.py - self.data["fileFormat"] = ["usda", "usd"] - self.data["writeAll"] = False - self.data["writeTransforms"] = True - self.data["writeVisibility"] = True - self.data["writeAttributes"] = True - self.data["writeMaterials"] = True - self.data["writeVariants"] = True - self.data["writeVariantsDefinition"] = True - self.data["writeActiveState"] = True - self.data["writeNamespaces"] = False - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + return defs diff --git a/openpype/hosts/maya/plugins/create/create_pointcache.py b/openpype/hosts/maya/plugins/create/create_pointcache.py index 1b8d5e6850..f4e8cbfc9a 100644 --- a/openpype/hosts/maya/plugins/create/create_pointcache.py +++ b/openpype/hosts/maya/plugins/create/create_pointcache.py @@ -4,47 +4,85 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import ( + BoolDef, + TextDef +) -class CreatePointCache(plugin.Creator): +class CreatePointCache(plugin.MayaCreator): """Alembic pointcache for animated data""" - name = "pointcache" - label = "Point Cache" + identifier = "io.openpype.creators.maya.pointcache" + label = "Pointcache" family = "pointcache" icon = "gears" write_color_sets = False write_face_sets = False include_user_defined_attributes = False - def __init__(self, *args, **kwargs): - super(CreatePointCache, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - # Vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - # Vertex colors with the geometry. - self.data["writeFaceSets"] = self.write_face_sets - self.data["renderableOnly"] = False # Only renderable visible shapes - self.data["visibleOnly"] = False # only nodes that are visible - self.data["includeParentHierarchy"] = False # Include parent groups - self.data["worldSpace"] = True # Default to exporting world-space - self.data["refresh"] = False # Default to suspend refresh. 
- - # Add options for custom attributes - value = self.include_user_defined_attributes - self.data["includeUserDefinedAttributes"] = value - self.data["attr"] = "" - self.data["attrPrefix"] = "" + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=False), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=False), + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("refresh", + label="Refresh viewport during export", + default=False), + BoolDef("includeUserDefinedAttributes", + label="Include User Defined Attributes", + default=self.include_user_defined_attributes), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + default="", + placeholder="prefix1, prefix2") + ]) + # TODO: Implement these on a Deadline plug-in instead? + """ # Default to not send to farm. self.data["farm"] = False self.data["priority"] = 50 + """ - def process(self): - instance = super(CreatePointCache, self).process() + return defs - assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True) - cmds.sets(assProxy, forceElement=instance) + def create(self, subset_name, instance_data, pre_create_data): + + instance = super(CreatePointCache, self).create( + subset_name, instance_data, pre_create_data + ) + instance_node = instance.get("instance_node") + + # For Arnold standin proxy + proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True) + cmds.sets(proxy_set, forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_proxy_abc.py b/openpype/hosts/maya/plugins/create/create_proxy_abc.py index 2946f7b530..d89470ebee 100644 --- a/openpype/hosts/maya/plugins/create/create_proxy_abc.py +++ b/openpype/hosts/maya/plugins/create/create_proxy_abc.py @@ -2,34 +2,49 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import ( + BoolDef, + TextDef +) -class CreateProxyAlembic(plugin.Creator): +class CreateProxyAlembic(plugin.MayaCreator): """Proxy Alembic for animated data""" - name = "proxyAbcMain" + identifier = "io.openpype.creators.maya.proxyabc" label = "Proxy Alembic" family = "proxyAbc" icon = "gears" write_color_sets = False write_face_sets = False - def __init__(self, *args, **kwargs): - super(CreateProxyAlembic, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - # Vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - # Vertex colors with the geometry. 
- self.data["writeFaceSets"] = self.write_face_sets - # Default to exporting world-space - self.data["worldSpace"] = True + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + TextDef("nameSuffix", + label="Name Suffix for Bounding Box", + default="_BBox", + placeholder="_BBox"), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) - # name suffix for the bounding box - self.data["nameSuffix"] = "_BBox" - - # Add options for custom attributes - self.data["attr"] = "" - self.data["attrPrefix"] = "" + return defs diff --git a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py index 419a8d99d4..2490738e8f 100644 --- a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py @@ -2,22 +2,24 @@ """Creator of Redshift proxy subset types.""" from openpype.hosts.maya.api import plugin, lib +from openpype.lib import BoolDef -class CreateRedshiftProxy(plugin.Creator): +class CreateRedshiftProxy(plugin.MayaCreator): """Create instance of Redshift Proxy subset.""" - name = "redshiftproxy" + identifier = "io.openpype.creators.maya.redshiftproxy" label = "Redshift Proxy" family = "redshiftproxy" icon = "gears" - def __init__(self, *args, **kwargs): - super(CreateRedshiftProxy, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - animation_data = lib.collect_animation_data() + defs = [ + BoolDef("animation", + label="Export animation", + default=False) + ] - self.data["animation"] = False - self.data["proxyFrameStart"] = animation_data["frameStart"] - self.data["proxyFrameEnd"] = animation_data["frameEnd"] - self.data["proxyFrameStep"] = animation_data["step"] + defs.extend(lib.collect_animation_defs()) + return defs diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py index 4681175808..6266689af4 100644 --- a/openpype/hosts/maya/plugins/create/create_render.py +++ b/openpype/hosts/maya/plugins/create/create_render.py @@ -1,425 +1,108 @@ # -*- coding: utf-8 -*- """Create ``Render`` instance in Maya.""" -import json -import os -import appdirs -import requests - -from maya import cmds -from maya.app.renderSetup.model import renderSetup - -from openpype.settings import ( - get_system_settings, - get_project_settings, -) -from openpype.lib import requests_get -from openpype.modules import ModulesManager -from openpype.pipeline import legacy_io from openpype.hosts.maya.api import ( - lib, lib_rendersettings, plugin ) +from openpype.pipeline import CreatorError +from openpype.lib import ( + BoolDef, + NumberDef, +) -class CreateRender(plugin.Creator): - """Create *render* instance. +class CreateRenderlayer(plugin.RenderlayerCreator): + """Create and manages renderlayer subset per renderLayer in workfile. - Render instances are not actually published, they hold options for - collecting of render data. 
It render instance is present, it will trigger
-    collection of render layers, AOVs, cameras for either direct submission
-    to render farm or export as various standalone formats (like V-Rays
-    ``vrscenes`` or Arnolds ``ass`` files) and then submitting them to render
-    farm.
-
-    Instance has following attributes::
-
-        primaryPool (list of str): Primary list of slave machine pool to use.
-        secondaryPool (list of str): Optional secondary list of slave pools.
-        suspendPublishJob (bool): Suspend the job after it is submitted.
-        extendFrames (bool): Use already existing frames from previous version
-            to extend current render.
-        overrideExistingFrame (bool): Overwrite already existing frames.
-        priority (int): Submitted job priority
-        framesPerTask (int): How many frames per task to render. This is
-            basically job division on render farm.
-        whitelist (list of str): White list of slave machines
-        machineList (list of str): Specific list of slave machines to use
-        useMayaBatch (bool): Use Maya batch mode to render as opposite to
-            Maya interactive mode. This consumes different licenses.
-        vrscene (bool): Submit as ``vrscene`` file for standalone V-Ray
-            renderer.
-        ass (bool): Submit as ``ass`` file for standalone Arnold renderer.
-        tileRendering (bool): Instance is set to tile rendering mode. We
-            won't submit actual render, but we'll make publish job to wait
-            for Tile Assembly job done and then publish.
-        strict_error_checking (bool): Enable/disable error checking on DL
-
-    See Also:
-        https://pype.club/docs/artist_hosts_maya#creating-basic-render-setup
+    This generates a single node in the scene which, when present, tells
+    the Creator to collect Maya renderSetup render layers as individual
+    instances. As such, triggering create doesn't actually create an
+    instance node per layer, but only the singleton node which tells the
+    Creator it may now collect the render layers.
+    
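For example, with renderSetup layers "layerA" and "layerB" in the
+    scene (hypothetical names) and the singleton node present, collection
+    yields one renderlayer instance per layer instead of a single
+    instance for the whole scene.
+
+    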
""" + identifier = "io.openpype.creators.maya.renderlayer" + family = "renderlayer" label = "Render" - family = "rendering" icon = "eye" - _token = None - _user = None - _password = None - _project_settings = None + layer_instance_prefix = "render" + singleton_node_name = "renderingMain" - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateRender, self).__init__(*args, **kwargs) + render_settings = {} - # Defaults - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) - if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa + @classmethod + def apply_settings(cls, project_settings): + cls.render_settings = project_settings["maya"]["RenderSettings"] + + def create(self, subset_name, instance_data, pre_create_data): + # Only allow a single render instance to exist + if self._get_singleton_node(): + raise CreatorError("A Render instance already exists - only " + "one can be configured.") + + # Apply default project render settings on create + if self.render_settings.get("apply_render_settings"): lib_rendersettings.RenderSettings().set_default_renderer_settings() - # Deadline-only - manager = ModulesManager() - deadline_settings = get_system_settings()["modules"]["deadline"] - if not deadline_settings["enabled"]: - self.deadline_servers = {} - return - self.deadline_module = manager.modules_by_name["deadline"] - try: - default_servers = deadline_settings["deadline_urls"] - project_servers = ( - self._project_settings["deadline"]["deadline_servers"] - ) - self.deadline_servers = { - k: default_servers[k] - for k in project_servers - if k in default_servers - } + super(CreateRenderlayer, self).create(subset_name, + instance_data, + pre_create_data) - if not self.deadline_servers: - self.deadline_servers = default_servers - - except AttributeError: - # Handle situation were we had only one url for deadline. - # get default deadline webservice url from deadline module - self.deadline_servers = self.deadline_module.deadline_urls - - def process(self): - """Entry point.""" - exists = cmds.ls(self.name) - if exists: - cmds.warning("%s already exists." 
% exists[0]) - return - - use_selection = self.options.get("useSelection") - with lib.undo_chunk(): - self._create_render_settings() - self.instance = super(CreateRender, self).process() - # create namespace with instance - index = 1 - namespace_name = "_{}".format(str(self.instance)) - try: - cmds.namespace(rm=namespace_name) - except RuntimeError: - # namespace is not empty, so we leave it untouched - pass - - while cmds.namespace(exists=namespace_name): - namespace_name = "_{}{}".format(str(self.instance), index) - index += 1 - - namespace = cmds.namespace(add=namespace_name) - - # add Deadline server selection list - if self.deadline_servers: - cmds.scriptJob( - attributeChange=[ - "{}.deadlineServers".format(self.instance), - self._deadline_webservice_changed - ]) - - cmds.setAttr("{}.machineList".format(self.instance), lock=True) - rs = renderSetup.instance() - layers = rs.getRenderLayers() - if use_selection: - self.log.info("Processing existing layers") - sets = [] - for layer in layers: - self.log.info(" - creating set for {}:{}".format( - namespace, layer.name())) - render_set = cmds.sets( - n="{}:{}".format(namespace, layer.name())) - sets.append(render_set) - cmds.sets(sets, forceElement=self.instance) - - # if no render layers are present, create default one with - # asterisk selector - if not layers: - render_layer = rs.createRenderLayer('Main') - collection = render_layer.createCollection("defaultCollection") - collection.getSelector().setPattern('*') - - return self.instance - - def _deadline_webservice_changed(self): - """Refresh Deadline server dependent options.""" - # get selected server - webservice = self.deadline_servers[ - self.server_aliases[ - cmds.getAttr("{}.deadlineServers".format(self.instance)) - ] - ] - pools = self.deadline_module.get_deadline_pools(webservice, self.log) - cmds.deleteAttr("{}.primaryPool".format(self.instance)) - cmds.deleteAttr("{}.secondaryPool".format(self.instance)) - - pool_setting = (self._project_settings["deadline"] - ["publish"] - ["CollectDeadlinePools"]) - - primary_pool = pool_setting["primary_pool"] - sorted_pools = self._set_default_pool(list(pools), primary_pool) - cmds.addAttr( - self.instance, - longName="primaryPool", - attributeType="enum", - enumName=":".join(sorted_pools) - ) - cmds.setAttr( - "{}.primaryPool".format(self.instance), - 0, - keyable=False, - channelBox=True - ) - - pools = ["-"] + pools - secondary_pool = pool_setting["secondary_pool"] - sorted_pools = self._set_default_pool(list(pools), secondary_pool) - cmds.addAttr( - self.instance, - longName="secondaryPool", - attributeType="enum", - enumName=":".join(sorted_pools) - ) - cmds.setAttr( - "{}.secondaryPool".format(self.instance), - 0, - keyable=False, - channelBox=True - ) - - def _create_render_settings(self): + def get_instance_attr_defs(self): """Create instance settings.""" - # get pools (slave machines of the render farm) - pool_names = [] - default_priority = 50 - self.data["suspendPublishJob"] = False - self.data["review"] = True - self.data["extendFrames"] = False - self.data["overrideExistingFrame"] = True - # self.data["useLegacyRenderLayers"] = True - self.data["priority"] = default_priority - self.data["tile_priority"] = default_priority - self.data["framesPerTask"] = 1 - self.data["whitelist"] = False - self.data["machineList"] = "" - self.data["useMayaBatch"] = False - self.data["tileRendering"] = False - self.data["tilesX"] = 2 - self.data["tilesY"] = 2 - self.data["convertToScanline"] = False - self.data["useReferencedAovs"] = False - 
self.data["renderSetupIncludeLights"] = ( - self._project_settings.get( - "maya", {}).get( - "RenderSettings", {}).get( - "enable_all_lights", False) - ) - # Disable for now as this feature is not working yet - # self.data["assScene"] = False + return [ + BoolDef("review", + label="Review", + tooltip="Mark as reviewable", + default=True), + BoolDef("extendFrames", + label="Extend Frames", + tooltip="Extends the frames on top of the previous " + "publish.\nIf the previous was 1001-1050 and you " + "would now submit 1020-1070 only the new frames " + "1051-1070 would be rendered and published " + "together with the previously rendered frames.\n" + "If 'overrideExistingFrame' is enabled it *will* " + "render any existing frames.", + default=False), + BoolDef("overrideExistingFrame", + label="Override Existing Frame", + tooltip="Override existing rendered frames " + "(if they exist).", + default=True), - system_settings = get_system_settings()["modules"] + # TODO: Should these move to submit_maya_deadline plugin? + # Tile rendering + BoolDef("tileRendering", + label="Enable tiled rendering", + default=False), + NumberDef("tilesX", + label="Tiles X", + default=2, + minimum=1, + decimals=0), + NumberDef("tilesY", + label="Tiles Y", + default=2, + minimum=1, + decimals=0), - deadline_enabled = system_settings["deadline"]["enabled"] - muster_enabled = system_settings["muster"]["enabled"] - muster_url = system_settings["muster"]["MUSTER_REST_URL"] + # Additional settings + BoolDef("convertToScanline", + label="Convert to Scanline", + tooltip="Convert the output images to scanline images", + default=False), + BoolDef("useReferencedAovs", + label="Use Referenced AOVs", + tooltip="Consider the AOVs from referenced scenes as well", + default=False), - if deadline_enabled and muster_enabled: - self.log.error( - "Both Deadline and Muster are enabled. " "Cannot support both." - ) - raise RuntimeError("Both Deadline and Muster are enabled") - - if deadline_enabled: - self.server_aliases = list(self.deadline_servers.keys()) - self.data["deadlineServers"] = self.server_aliases - - try: - deadline_url = self.deadline_servers["default"] - except KeyError: - # if 'default' server is not between selected, - # use first one for initial list of pools. - deadline_url = next(iter(self.deadline_servers.values())) - # Uses function to get pool machines from the assigned deadline - # url in settings - pool_names = self.deadline_module.get_deadline_pools(deadline_url, - self.log) - maya_submit_dl = self._project_settings.get( - "deadline", {}).get( - "publish", {}).get( - "MayaSubmitDeadline", {}) - priority = maya_submit_dl.get("priority", default_priority) - self.data["priority"] = priority - - tile_priority = maya_submit_dl.get("tile_priority", - default_priority) - self.data["tile_priority"] = tile_priority - - strict_error_checking = maya_submit_dl.get("strict_error_checking", - True) - self.data["strict_error_checking"] = strict_error_checking - - # Pool attributes should be last since they will be recreated when - # the deadline server changes. 
- pool_setting = (self._project_settings["deadline"] - ["publish"] - ["CollectDeadlinePools"]) - primary_pool = pool_setting["primary_pool"] - self.data["primaryPool"] = self._set_default_pool(pool_names, - primary_pool) - # We add a string "-" to allow the user to not - # set any secondary pools - pool_names = ["-"] + pool_names - secondary_pool = pool_setting["secondary_pool"] - self.data["secondaryPool"] = self._set_default_pool(pool_names, - secondary_pool) - - if muster_enabled: - self.log.info(">>> Loading Muster credentials ...") - self._load_credentials() - self.log.info(">>> Getting pools ...") - pools = [] - try: - pools = self._get_muster_pools() - except requests.exceptions.HTTPError as e: - if e.startswith("401"): - self.log.warning("access token expired") - self._show_login() - raise RuntimeError("Access token expired") - except requests.exceptions.ConnectionError: - self.log.error("Cannot connect to Muster API endpoint.") - raise RuntimeError("Cannot connect to {}".format(muster_url)) - for pool in pools: - self.log.info(" - pool: {}".format(pool["name"])) - pool_names.append(pool["name"]) - - self.options = {"useSelection": False} # Force no content - - def _set_default_pool(self, pool_names, pool_value): - """Reorder pool names, default should come first""" - if pool_value and pool_value in pool_names: - pool_names.remove(pool_value) - pool_names = [pool_value] + pool_names - return pool_names - - def _load_credentials(self): - """Load Muster credentials. - - Load Muster credentials from file and set ``MUSTER_USER``, - ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from settings. - - Raises: - RuntimeError: If loaded credentials are invalid. - AttributeError: If ``MUSTER_REST_URL`` is not set. - - """ - app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype")) - file_name = "muster_cred.json" - fpath = os.path.join(app_dir, file_name) - file = open(fpath, "r") - muster_json = json.load(file) - self._token = muster_json.get("token", None) - if not self._token: - self._show_login() - raise RuntimeError("Invalid access token for Muster") - file.close() - self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") - if not self.MUSTER_REST_URL: - raise AttributeError("Muster REST API url not set") - - def _get_muster_pools(self): - """Get render pools from Muster. - - Raises: - Exception: If pool list cannot be obtained from Muster. - - """ - params = {"authToken": self._token} - api_entry = "/api/pools/list" - response = requests_get(self.MUSTER_REST_URL + api_entry, - params=params) - if response.status_code != 200: - if response.status_code == 401: - self.log.warning("Authentication token expired.") - self._show_login() - else: - self.log.error( - ("Cannot get pools from " - "Muster: {}").format(response.status_code) - ) - raise Exception("Cannot get pools from Muster") - try: - pools = response.json()["ResponseData"]["pools"] - except ValueError as e: - self.log.error("Invalid response from Muster server {}".format(e)) - raise Exception("Invalid response from Muster server") - - return pools - - def _show_login(self): - # authentication token expired so we need to login to Muster - # again to get it. We use Pype API call to show login window. 
- api_url = "{}/muster/show_login".format( - os.environ["OPENPYPE_WEBSERVER_URL"]) - self.log.debug(api_url) - login_response = requests_get(api_url, timeout=1) - if login_response.status_code != 200: - self.log.error("Cannot show login form to Muster") - raise Exception("Cannot show login form to Muster") - - def _requests_post(self, *args, **kwargs): - """Wrap request post method. - - Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment - variable is found. This is useful when Deadline or Muster server are - running with self-signed certificates and their certificate is not - added to trusted certificates on client machines. - - Warning: - Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. - - """ - if "verify" not in kwargs: - kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) - return requests.post(*args, **kwargs) - - def _requests_get(self, *args, **kwargs): - """Wrap request get method. - - Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment - variable is found. This is useful when Deadline or Muster server are - running with self-signed certificates and their certificate is not - added to trusted certificates on client machines. - - Warning: - Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. - - """ - if "verify" not in kwargs: - kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) - return requests.get(*args, **kwargs) + BoolDef("renderSetupIncludeLights", + label="Render Setup Include Lights", + default=self.render_settings.get("enable_all_lights", + False)) + ] diff --git a/openpype/hosts/maya/plugins/create/create_rendersetup.py b/openpype/hosts/maya/plugins/create/create_rendersetup.py index 494f90d87b..dd64a0a842 100644 --- a/openpype/hosts/maya/plugins/create/create_rendersetup.py +++ b/openpype/hosts/maya/plugins/create/create_rendersetup.py @@ -1,55 +1,31 @@ -from openpype.hosts.maya.api import ( - lib, - plugin -) -from maya import cmds +from openpype.hosts.maya.api import plugin +from openpype.pipeline import CreatorError -class CreateRenderSetup(plugin.Creator): +class CreateRenderSetup(plugin.MayaCreator): """Create rendersetup template json data""" - name = "rendersetup" + identifier = "io.openpype.creators.maya.rendersetup" label = "Render Setup Preset" family = "rendersetup" icon = "tablet" - def __init__(self, *args, **kwargs): - super(CreateRenderSetup, self).__init__(*args, **kwargs) + def get_pre_create_attr_defs(self): + # Do not show the "use_selection" setting from parent class + return [] - # here we can pre-create renderSetup layers, possibly utlizing - # settings for it. 
+ def create(self, subset_name, instance_data, pre_create_data): - # _____ - # / __\__ - # | / __\__ - # | | / \ - # | | | | - # \__| | | - # \__| | - # \_____/ + existing_instance = None + for instance in self.create_context.instances: + if instance.family == self.family: + existing_instance = instance + break - # from pype.api import get_project_settings - # import maya.app.renderSetup.model.renderSetup as renderSetup - # settings = get_project_settings(os.environ['AVALON_PROJECT']) - # layer = settings['maya']['create']['renderSetup']["layer"] + if existing_instance: + raise CreatorError("A RenderSetup instance already exists - only " + "one can be configured.") - # rs = renderSetup.instance() - # rs.createRenderLayer(layer) - - self.options = {"useSelection": False} # Force no content - - def process(self): - exists = cmds.ls(self.name) - assert len(exists) <= 1, ( - "More than one renderglobal exists, this is a bug" - ) - - if exists: - return cmds.warning("%s already exists." % exists[0]) - - with lib.undo_chunk(): - instance = super(CreateRenderSetup, self).process() - - self.data["renderSetup"] = "42" - null = cmds.sets(name="null_SET", empty=True) - cmds.sets([null], forceElement=instance) + super(CreateRenderSetup, self).create(subset_name, + instance_data, + pre_create_data) diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index 40ae99b57c..f60e2406bc 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -1,76 +1,142 @@ -import os -from collections import OrderedDict import json +from maya import cmds + from openpype.hosts.maya.api import ( lib, plugin ) -from openpype.settings import get_project_settings -from openpype.pipeline import get_current_project_name, get_current_task_name +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) +from openpype.pipeline import CreatedInstance from openpype.client import get_asset_by_name +TRANSPARENCIES = [ + "preset", + "simple", + "object sorting", + "weighted average", + "depth peeling", + "alpha cut" +] -class CreateReview(plugin.Creator): - """Single baked camera""" - name = "reviewDefault" +class CreateReview(plugin.MayaCreator): + """Playblast reviewable""" + + identifier = "io.openpype.creators.maya.review" label = "Review" family = "review" icon = "video-camera" - keepImages = False - isolate = False - imagePlane = True - Width = 0 - Height = 0 - transparency = [ - "preset", - "simple", - "object sorting", - "weighted average", - "depth peeling", - "alpha cut" - ] + useMayaTimeline = True panZoom = False - def __init__(self, *args, **kwargs): - super(CreateReview, self).__init__(*args, **kwargs) - data = OrderedDict(**self.data) + # Overriding "create" method to prefill values from settings. 
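+    # The capture preset resolved from project settings supplies the
+    # initial values for the review attributes (resolution, isolate
+    # view, image plane and pan/zoom) before the instance node is
+    # imprinted.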
+ def create(self, subset_name, instance_data, pre_create_data): - project_name = get_current_project_name() - asset_doc = get_asset_by_name(project_name, data["asset"]) - task_name = get_current_task_name() + members = list() + if pre_create_data.get("use_selection"): + members = cmds.ls(selection=True) + + project_name = self.project_name + asset_doc = get_asset_by_name(project_name, instance_data["asset"]) + task_name = instance_data["task"] preset = lib.get_capture_preset( task_name, asset_doc["data"]["tasks"][task_name]["type"], - data["subset"], - get_project_settings(project_name), + subset_name, + self.project_settings, self.log ) - if os.environ.get("OPENPYPE_DEBUG") == "1": - self.log.debug( - "Using preset: {}".format( - json.dumps(preset, indent=4, sort_keys=True) - ) + self.log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) ) + ) + + with lib.undo_chunk(): + instance_node = cmds.sets(members, name=subset_name) + instance_data["instance_node"] = instance_node + instance = CreatedInstance( + self.family, + subset_name, + instance_data, + self) + + creator_attribute_defs_by_key = { + x.key: x for x in instance.creator_attribute_defs + } + mapping = { + "review_width": preset["Resolution"]["width"], + "review_height": preset["Resolution"]["height"], + "isolate": preset["Generic"]["isolate_view"], + "imagePlane": preset["Viewport Options"]["imagePlane"], + "panZoom": preset["Generic"]["pan_zoom"] + } + for key, value in mapping.items(): + creator_attribute_defs_by_key[key].default = value + + self._add_instance_to_context(instance) + + self.imprint_instance_node(instance_node, + data=instance.data_to_store()) + return instance + + def get_instance_attr_defs(self): + + defs = lib.collect_animation_defs() # Option for using Maya or asset frame range in settings. 
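+        # Note: the asset frame range below only seeds the attribute
+        # defaults; artists can still override the values per instance.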
-        frame_range = lib.get_frame_range()
-        if self.useMayaTimeline:
-            frame_range = lib.collect_animation_data(fps=True)
-        for key, value in frame_range.items():
-            data[key] = value
+        if not self.useMayaTimeline:
+            # Update the defaults to be the asset frame range
+            frame_range = lib.get_frame_range()
+            defs_by_key = {attr_def.key: attr_def for attr_def in defs}
+            for key, value in frame_range.items():
+                if key not in defs_by_key:
+                    raise RuntimeError("Attribute definition not found to be "
+                                       "updated for key: {}".format(key))
+                attr_def = defs_by_key[key]
+                attr_def.default = value
 
-        data["fps"] = lib.collect_animation_data(fps=True)["fps"]
+        defs.extend([
+            NumberDef("review_width",
+                      label="Review width",
+                      tooltip="A value of zero will use the asset resolution.",
+                      decimals=0,
+                      minimum=0,
+                      default=0),
+            NumberDef("review_height",
+                      label="Review height",
+                      tooltip="A value of zero will use the asset resolution.",
+                      decimals=0,
+                      minimum=0,
+                      default=0),
+            BoolDef("keepImages",
+                    label="Keep Images",
+                    tooltip="Whether to also publish the image sequence "
+                            "next to the video reviewable.",
+                    default=False),
+            BoolDef("isolate",
+                    label="Isolate render members of instance",
+                    tooltip="When enabled only the members of the instance "
+                            "will be included in the playblast review.",
+                    default=False),
+            BoolDef("imagePlane",
+                    label="Show Image Plane",
+                    default=True),
+            EnumDef("transparency",
+                    label="Transparency",
+                    items=TRANSPARENCIES),
+            BoolDef("panZoom",
+                    label="Enable camera pan/zoom",
+                    default=True),
+            EnumDef("displayLights",
+                    label="Display Lights",
+                    items=lib.DISPLAY_LIGHTS_ENUM),
+        ])
 
-        data["keepImages"] = self.keepImages
-        data["transparency"] = self.transparency
-        data["review_width"] = preset["Resolution"]["width"]
-        data["review_height"] = preset["Resolution"]["height"]
-        data["isolate"] = preset["Generic"]["isolate_view"]
-        data["imagePlane"] = preset["Viewport Options"]["imagePlane"]
-        data["panZoom"] = preset["Generic"]["pan_zoom"]
-        data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS
-
-        self.data = data
+        return defs
diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py
index 8032e5fbbd..acd5c98f89 100644
--- a/openpype/hosts/maya/plugins/create/create_rig.py
+++ b/openpype/hosts/maya/plugins/create/create_rig.py
@@ -1,25 +1,32 @@
 from maya import cmds
 
-from openpype.hosts.maya.api import (
-    lib,
-    plugin
-)
+from openpype.hosts.maya.api import plugin
 
 
-class CreateRig(plugin.Creator):
+class CreateRig(plugin.MayaCreator):
     """Artist-friendly rig with controls to direct motion"""
 
-    name = "rigDefault"
+    identifier = "io.openpype.creators.maya.rig"
     label = "Rig"
     family = "rig"
    icon = "wheelchair"
 
-    def process(self):
+    def create(self, subset_name, instance_data, pre_create_data):
 
-        with lib.undo_chunk():
-            instance = super(CreateRig, self).process()
+        instance = super(CreateRig, self).create(subset_name,
+                                                 instance_data,
+                                                 pre_create_data)
 
-            self.log.info("Creating Rig instance set up ...")
-            controls = cmds.sets(name="controls_SET", empty=True)
-            pointcache = cmds.sets(name="out_SET", empty=True)
-            cmds.sets([controls, pointcache], forceElement=instance)
+        instance_node = instance.get("instance_node")
+
+        self.log.info("Setting up Rig instance ...")
+        # TODO: change name (_controls_SET -> _rigs_SET)
+        controls = cmds.sets(name=subset_name + "_controls_SET", empty=True)
+        # TODO: change name (_out_SET -> _geo_SET)
+        pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True)
+        skeleton = 
cmds.sets( + name=subset_name + "_skeletonAnim_SET", empty=True) + skeleton_mesh = cmds.sets( + name=subset_name + "_skeletonMesh_SET", empty=True) + cmds.sets([controls, pointcache, + skeleton, skeleton_mesh], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_setdress.py b/openpype/hosts/maya/plugins/create/create_setdress.py index 4246183fdb..23a706380a 100644 --- a/openpype/hosts/maya/plugins/create/create_setdress.py +++ b/openpype/hosts/maya/plugins/create/create_setdress.py @@ -1,16 +1,19 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef -class CreateSetDress(plugin.Creator): +class CreateSetDress(plugin.MayaCreator): """A grouped package of loaded content""" - name = "setdressMain" + identifier = "io.openpype.creators.maya.setdress" label = "Set Dress" family = "setdress" icon = "cubes" - defaults = ["Main", "Anim"] + default_variants = ["Main", "Anim"] - def __init__(self, *args, **kwargs): - super(CreateSetDress, self).__init__(*args, **kwargs) - - self.data["exactSetMembersOnly"] = True + def get_instance_attr_defs(self): + return [ + BoolDef("exactSetMembersOnly", + label="Exact Set Members Only", + default=True) + ] diff --git a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py index 6e72bf5324..3c9a79156a 100644 --- a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py +++ b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py @@ -1,47 +1,63 @@ # -*- coding: utf-8 -*- """Creator for Unreal Skeletal Meshes.""" from openpype.hosts.maya.api import plugin, lib -from openpype.pipeline import legacy_io +from openpype.lib import ( + BoolDef, + TextDef +) + from maya import cmds # noqa -class CreateUnrealSkeletalMesh(plugin.Creator): +class CreateUnrealSkeletalMesh(plugin.MayaCreator): """Unreal Static Meshes with collisions.""" - name = "staticMeshMain" + + identifier = "io.openpype.creators.maya.unrealskeletalmesh" label = "Unreal - Skeletal Mesh" family = "skeletalMesh" icon = "thumbs-up" dynamic_subset_keys = ["asset"] - joint_hints = [] + # Defined in settings + joint_hints = set() - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateUnrealSkeletalMesh, self).__init__(*args, **kwargs) - - @classmethod - def get_dynamic_data( - cls, variant, task_name, asset_id, project_name, host_name - ): - dynamic_data = super(CreateUnrealSkeletalMesh, cls).get_dynamic_data( - variant, task_name, asset_id, project_name, host_name + def apply_settings(self, project_settings): + """Apply project settings to creator""" + settings = ( + project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"] ) - dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET") + self.joint_hints = set(settings.get("joint_hints", [])) + + def get_dynamic_data( + self, variant, task_name, asset_doc, project_name, host_name, instance + ): + """ + The default subset name templates for Unreal include {asset} and thus + we should pass that along as dynamic data. 
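+
+        For example, an asset named "hero" with a hypothetical template
+        such as "{family}{Asset}{Variant}" would resolve to a subset name
+        like "skeletalMeshHeroMain"; the exact result depends on the
+        configured template.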
+ """ + dynamic_data = super(CreateUnrealSkeletalMesh, self).get_dynamic_data( + variant, task_name, asset_doc, project_name, host_name, instance + ) + dynamic_data["asset"] = asset_doc["name"] return dynamic_data - def process(self): - self.name = "{}_{}".format(self.family, self.name) - with lib.undo_chunk(): - instance = super(CreateUnrealSkeletalMesh, self).process() - content = cmds.sets(instance, query=True) + def create(self, subset_name, instance_data, pre_create_data): + + with lib.undo_chunk(): + instance = super(CreateUnrealSkeletalMesh, self).create( + subset_name, instance_data, pre_create_data) + instance_node = instance.get("instance_node") + + # We reorganize the geometry that was originally added into the + # set into either 'joints_SET' or 'geometry_SET' based on the + # joint_hints from project settings + members = cmds.sets(instance_node, query=True) + cmds.sets(clear=instance_node) - # empty set and process its former content - cmds.sets(content, rm=instance) geometry_set = cmds.sets(name="geometry_SET", empty=True) joints_set = cmds.sets(name="joints_SET", empty=True) - cmds.sets([geometry_set, joints_set], forceElement=instance) - members = cmds.ls(content) or [] + cmds.sets([geometry_set, joints_set], forceElement=instance_node) for node in members: if node in self.joint_hints: @@ -49,20 +65,38 @@ class CreateUnrealSkeletalMesh(plugin.Creator): else: cmds.sets(node, forceElement=geometry_set) - # Add animation data - self.data.update(lib.collect_animation_data()) + def get_instance_attr_defs(self): - # Only renderable visible shapes - self.data["renderableOnly"] = False - # only nodes that are visible - self.data["visibleOnly"] = False - # Include parent groups - self.data["includeParentHierarchy"] = False - # Default to exporting world-space - self.data["worldSpace"] = True - # Default to suspend refresh. 
- self.data["refresh"] = False + defs = lib.collect_animation_defs() - # Add options for custom attributes - self.data["attr"] = "" - self.data["attrPrefix"] = "" + defs.extend([ + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("refresh", + label="Refresh viewport during export", + default=False), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py index 44cbee0502..025b39fa55 100644 --- a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py +++ b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py @@ -1,58 +1,90 @@ # -*- coding: utf-8 -*- """Creator for Unreal Static Meshes.""" from openpype.hosts.maya.api import plugin, lib -from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io from maya import cmds # noqa -class CreateUnrealStaticMesh(plugin.Creator): +class CreateUnrealStaticMesh(plugin.MayaCreator): """Unreal Static Meshes with collisions.""" - name = "staticMeshMain" + + identifier = "io.openpype.creators.maya.unrealstaticmesh" label = "Unreal - Static Mesh" family = "staticMesh" icon = "cube" dynamic_subset_keys = ["asset"] - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs) - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + # Defined in settings + collision_prefixes = [] + + def apply_settings(self, project_settings): + """Apply project settings to creator""" + settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"] + self.collision_prefixes = settings["collision_prefixes"] - @classmethod def get_dynamic_data( - cls, variant, task_name, asset_id, project_name, host_name + self, variant, task_name, asset_doc, project_name, host_name, instance ): - dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data( - variant, task_name, asset_id, project_name, host_name + """ + The default subset name templates for Unreal include {asset} and thus + we should pass that along as dynamic data. 
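+
+        Note: the asset name is taken from `asset_doc["name"]` rather
+        than the session's AVALON_ASSET, so the subset name follows the
+        instance's asset context.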
+ """ + dynamic_data = super(CreateUnrealStaticMesh, self).get_dynamic_data( + variant, task_name, asset_doc, project_name, host_name, instance ) - dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET") + dynamic_data["asset"] = asset_doc["name"] return dynamic_data - def process(self): - self.name = "{}_{}".format(self.family, self.name) - with lib.undo_chunk(): - instance = super(CreateUnrealStaticMesh, self).process() - content = cmds.sets(instance, query=True) + def create(self, subset_name, instance_data, pre_create_data): + + with lib.undo_chunk(): + instance = super(CreateUnrealStaticMesh, self).create( + subset_name, instance_data, pre_create_data) + instance_node = instance.get("instance_node") + + # We reorganize the geometry that was originally added into the + # set into either 'collision_SET' or 'geometry_SET' based on the + # collision_prefixes from project settings + members = cmds.sets(instance_node, query=True) + cmds.sets(clear=instance_node) - # empty set and process its former content - cmds.sets(content, rm=instance) geometry_set = cmds.sets(name="geometry_SET", empty=True) collisions_set = cmds.sets(name="collisions_SET", empty=True) - cmds.sets([geometry_set, collisions_set], forceElement=instance) + cmds.sets([geometry_set, collisions_set], + forceElement=instance_node) - members = cmds.ls(content, long=True) or [] + members = cmds.ls(members, long=True) or [] children = cmds.listRelatives(members, allDescendents=True, fullPath=True) or [] - children = cmds.ls(children, type="transform") - for node in children: - if cmds.listRelatives(node, type="shape"): - if [ - n for n in self.collision_prefixes - if node.startswith(n) - ]: - cmds.sets(node, forceElement=collisions_set) - else: - cmds.sets(node, forceElement=geometry_set) + transforms = cmds.ls(members + children, type="transform") + for transform in transforms: + + if not cmds.listRelatives(transform, + type="shape", + noIntermediate=True): + # Exclude all transforms that have no direct shapes + continue + + if self.has_collision_prefix(transform): + cmds.sets(transform, forceElement=collisions_set) + else: + cmds.sets(transform, forceElement=geometry_set) + + def has_collision_prefix(self, node_path): + """Return whether node name of path matches collision prefix. + + If the node name matches the collision prefix we add it to the + `collisions_SET` instead of the `geometry_SET`. + + Args: + node_path (str): Maya node path. + + Returns: + bool: Whether the node should be considered a collision mesh. 
+ + """ + node_name = node_path.rsplit("|", 1)[-1] + for prefix in self.collision_prefixes: + if node_name.startswith(prefix): + return True + return False diff --git a/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py b/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py new file mode 100644 index 0000000000..c9f9cd9ba8 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py @@ -0,0 +1,39 @@ +from openpype.hosts.maya.api import ( + lib, + plugin +) +from openpype.lib import NumberDef + + +class CreateYetiCache(plugin.MayaCreator): + """Output for procedural plugin nodes of Yeti """ + + identifier = "io.openpype.creators.maya.unrealyeticache" + label = "Unreal - Yeti Cache" + family = "yeticacheUE" + icon = "pagelines" + + def get_instance_attr_defs(self): + + defs = [ + NumberDef("preroll", + label="Preroll", + minimum=0, + default=0, + decimals=0) + ] + + # Add animation data without step and handles + defs.extend(lib.collect_animation_defs()) + remove = {"step", "handleStart", "handleEnd"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] + + # Add samples after frame range + defs.append( + NumberDef("samples", + label="Samples", + default=3, + decimals=0) + ) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_vrayproxy.py b/openpype/hosts/maya/plugins/create/create_vrayproxy.py index d135073e82..b0a95538e1 100644 --- a/openpype/hosts/maya/plugins/create/create_vrayproxy.py +++ b/openpype/hosts/maya/plugins/create/create_vrayproxy.py @@ -1,10 +1,14 @@ -from openpype.hosts.maya.api import plugin +from openpype.hosts.maya.api import ( + plugin, + lib +) +from openpype.lib import BoolDef -class CreateVrayProxy(plugin.Creator): +class CreateVrayProxy(plugin.MayaCreator): """Alembic pointcache for animated data""" - name = "vrayproxy" + identifier = "io.openpype.creators.maya.vrayproxy" label = "VRay Proxy" family = "vrayproxy" icon = "gears" @@ -12,15 +16,35 @@ class CreateVrayProxy(plugin.Creator): vrmesh = True alembic = True - def __init__(self, *args, **kwargs): - super(CreateVrayProxy, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["animation"] = False - self.data["frameStart"] = 1 - self.data["frameEnd"] = 1 + defs = [ + BoolDef("animation", + label="Export Animation", + default=False) + ] - # Write vertex colors - self.data["vertexColors"] = False + # Add time range attributes but remove some attributes + # which this instance actually doesn't use + defs.extend(lib.collect_animation_defs()) + remove = {"handleStart", "handleEnd", "step"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] - self.data["vrmesh"] = self.vrmesh - self.data["alembic"] = self.alembic + defs.extend([ + BoolDef("vertexColors", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=False), + BoolDef("vrmesh", + label="Export VRayMesh", + tooltip="Publish a .vrmesh (VRayMesh) file for " + "this VRayProxy", + default=self.vrmesh), + BoolDef("alembic", + label="Export Alembic", + tooltip="Publish a .abc (Alembic) file for " + "this VRayProxy", + default=self.alembic), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_vrayscene.py b/openpype/hosts/maya/plugins/create/create_vrayscene.py index 59d80e6d5b..2726979d30 100644 --- a/openpype/hosts/maya/plugins/create/create_vrayscene.py +++ b/openpype/hosts/maya/plugins/create/create_vrayscene.py @@ -1,266 +1,52 @@ # -*- coding: utf-8 -*- """Create instance of 
vrayscene.""" -import os -import json -import appdirs -import requests - -from maya import cmds -import maya.app.renderSetup.model.renderSetup as renderSetup from openpype.hosts.maya.api import ( - lib, + lib_rendersettings, plugin ) -from openpype.settings import ( - get_system_settings, - get_project_settings -) - -from openpype.lib import requests_get -from openpype.pipeline import ( - CreatorError, - legacy_io, -) -from openpype.modules import ModulesManager +from openpype.pipeline import CreatorError +from openpype.lib import BoolDef -class CreateVRayScene(plugin.Creator): +class CreateVRayScene(plugin.RenderlayerCreator): """Create Vray Scene.""" - label = "VRay Scene" + identifier = "io.openpype.creators.maya.vrayscene" + family = "vrayscene" + label = "VRay Scene" icon = "cubes" - _project_settings = None + render_settings = {} + singleton_node_name = "vraysceneMain" - def __init__(self, *args, **kwargs): - """Entry.""" - super(CreateVRayScene, self).__init__(*args, **kwargs) - self._rs = renderSetup.instance() - self.data["exportOnFarm"] = False - deadline_settings = get_system_settings()["modules"]["deadline"] + @classmethod + def apply_settings(cls, project_settings): + cls.render_settings = project_settings["maya"]["RenderSettings"] - manager = ModulesManager() - self.deadline_module = manager.modules_by_name["deadline"] + def create(self, subset_name, instance_data, pre_create_data): + # Only allow a single render instance to exist + if self._get_singleton_node(): + raise CreatorError("A Render instance already exists - only " + "one can be configured.") - if not deadline_settings["enabled"]: - self.deadline_servers = {} - return - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + super(CreateVRayScene, self).create(subset_name, + instance_data, + pre_create_data) - try: - default_servers = deadline_settings["deadline_urls"] - project_servers = ( - self._project_settings["deadline"]["deadline_servers"] - ) - self.deadline_servers = { - k: default_servers[k] - for k in project_servers - if k in default_servers - } + # Apply default project render settings on create + if self.render_settings.get("apply_render_settings"): + lib_rendersettings.RenderSettings().set_default_renderer_settings() - if not self.deadline_servers: - self.deadline_servers = default_servers + def get_instance_attr_defs(self): + """Create instance settings.""" - except AttributeError: - # Handle situation were we had only one url for deadline. - # get default deadline webservice url from deadline module - self.deadline_servers = self.deadline_module.deadline_urls - - def process(self): - """Entry point.""" - exists = cmds.ls(self.name) - if exists: - return cmds.warning("%s already exists." 
% exists[0]) - - use_selection = self.options.get("useSelection") - with lib.undo_chunk(): - self._create_vray_instance_settings() - self.instance = super(CreateVRayScene, self).process() - - index = 1 - namespace_name = "_{}".format(str(self.instance)) - try: - cmds.namespace(rm=namespace_name) - except RuntimeError: - # namespace is not empty, so we leave it untouched - pass - - while(cmds.namespace(exists=namespace_name)): - namespace_name = "_{}{}".format(str(self.instance), index) - index += 1 - - namespace = cmds.namespace(add=namespace_name) - - # add Deadline server selection list - if self.deadline_servers: - cmds.scriptJob( - attributeChange=[ - "{}.deadlineServers".format(self.instance), - self._deadline_webservice_changed - ]) - - # create namespace with instance - layers = self._rs.getRenderLayers() - if use_selection: - print(">>> processing existing layers") - sets = [] - for layer in layers: - print(" - creating set for {}".format(layer.name())) - render_set = cmds.sets( - n="{}:{}".format(namespace, layer.name())) - sets.append(render_set) - cmds.sets(sets, forceElement=self.instance) - - # if no render layers are present, create default one with - # asterix selector - if not layers: - render_layer = self._rs.createRenderLayer('Main') - collection = render_layer.createCollection("defaultCollection") - collection.getSelector().setPattern('*') - - def _deadline_webservice_changed(self): - """Refresh Deadline server dependent options.""" - # get selected server - from maya import cmds - webservice = self.deadline_servers[ - self.server_aliases[ - cmds.getAttr("{}.deadlineServers".format(self.instance)) - ] + return [ + BoolDef("vraySceneMultipleFiles", + label="V-Ray Scene Multiple Files", + default=False), + BoolDef("exportOnFarm", + label="Export on farm", + default=False) ] - pools = self.deadline_module.get_deadline_pools(webservice) - cmds.deleteAttr("{}.primaryPool".format(self.instance)) - cmds.deleteAttr("{}.secondaryPool".format(self.instance)) - cmds.addAttr(self.instance, longName="primaryPool", - attributeType="enum", - enumName=":".join(pools)) - cmds.addAttr(self.instance, longName="secondaryPool", - attributeType="enum", - enumName=":".join(["-"] + pools)) - - def _create_vray_instance_settings(self): - # get pools - pools = [] - - system_settings = get_system_settings()["modules"] - - deadline_enabled = system_settings["deadline"]["enabled"] - muster_enabled = system_settings["muster"]["enabled"] - muster_url = system_settings["muster"]["MUSTER_REST_URL"] - - if deadline_enabled and muster_enabled: - self.log.error( - "Both Deadline and Muster are enabled. " "Cannot support both." - ) - raise CreatorError("Both Deadline and Muster are enabled") - - self.server_aliases = self.deadline_servers.keys() - self.data["deadlineServers"] = self.server_aliases - - if deadline_enabled: - # if default server is not between selected, use first one for - # initial list of pools. 
- try: - deadline_url = self.deadline_servers["default"] - except KeyError: - deadline_url = [ - self.deadline_servers[k] - for k in self.deadline_servers.keys() - ][0] - - pool_names = self.deadline_module.get_deadline_pools(deadline_url) - - if muster_enabled: - self.log.info(">>> Loading Muster credentials ...") - self._load_credentials() - self.log.info(">>> Getting pools ...") - try: - pools = self._get_muster_pools() - except requests.exceptions.HTTPError as e: - if e.startswith("401"): - self.log.warning("access token expired") - self._show_login() - raise CreatorError("Access token expired") - except requests.exceptions.ConnectionError: - self.log.error("Cannot connect to Muster API endpoint.") - raise CreatorError("Cannot connect to {}".format(muster_url)) - pool_names = [] - for pool in pools: - self.log.info(" - pool: {}".format(pool["name"])) - pool_names.append(pool["name"]) - - self.data["primaryPool"] = pool_names - - self.data["suspendPublishJob"] = False - self.data["priority"] = 50 - self.data["whitelist"] = False - self.data["machineList"] = "" - self.data["vraySceneMultipleFiles"] = False - self.options = {"useSelection": False} # Force no content - - def _load_credentials(self): - """Load Muster credentials. - - Load Muster credentials from file and set ``MUSTER_USER``, - ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from presets. - - Raises: - CreatorError: If loaded credentials are invalid. - AttributeError: If ``MUSTER_REST_URL`` is not set. - - """ - app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype")) - file_name = "muster_cred.json" - fpath = os.path.join(app_dir, file_name) - file = open(fpath, "r") - muster_json = json.load(file) - self._token = muster_json.get("token", None) - if not self._token: - self._show_login() - raise CreatorError("Invalid access token for Muster") - file.close() - self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") - if not self.MUSTER_REST_URL: - raise AttributeError("Muster REST API url not set") - - def _get_muster_pools(self): - """Get render pools from Muster. - - Raises: - CreatorError: If pool list cannot be obtained from Muster. - - """ - params = {"authToken": self._token} - api_entry = "/api/pools/list" - response = requests_get(self.MUSTER_REST_URL + api_entry, - params=params) - if response.status_code != 200: - if response.status_code == 401: - self.log.warning("Authentication token expired.") - self._show_login() - else: - self.log.error( - ("Cannot get pools from " - "Muster: {}").format(response.status_code) - ) - raise CreatorError("Cannot get pools from Muster") - try: - pools = response.json()["ResponseData"]["pools"] - except ValueError as e: - self.log.error("Invalid response from Muster server {}".format(e)) - raise CreatorError("Invalid response from Muster server") - - return pools - - def _show_login(self): - # authentication token expired so we need to login to Muster - # again to get it. We use Pype API call to show login window. 
- api_url = "{}/muster/show_login".format( - os.environ["OPENPYPE_WEBSERVER_URL"]) - self.log.debug(api_url) - login_response = requests_get(api_url, timeout=1) - if login_response.status_code != 200: - self.log.error("Cannot show login form to Muster") - raise CreatorError("Cannot show login form to Muster") diff --git a/openpype/hosts/maya/plugins/create/create_workfile.py b/openpype/hosts/maya/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d84753cd7f --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_workfile.py @@ -0,0 +1,88 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating workfiles.""" +from openpype.pipeline import CreatedInstance, AutoCreator +from openpype.client import get_asset_by_name +from openpype.hosts.maya.api import plugin +from maya import cmds + + +class CreateWorkfile(plugin.MayaCreatorBase, AutoCreator): + """Workfile auto-creator.""" + identifier = "io.openpype.creators.maya.workfile" + label = "Workfile" + family = "workfile" + icon = "fa5.file" + + default_variant = "Main" + + def create(self): + + variant = self.default_variant + current_instance = next( + ( + instance for instance in self.create_context.instances + if instance.creator_identifier == self.identifier + ), None) + + project_name = self.project_name + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name + + if current_instance is None: + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + data = { + "asset": asset_name, + "task": task_name, + "variant": variant + } + data.update( + self.get_dynamic_data( + variant, task_name, asset_doc, + project_name, host_name, current_instance) + ) + self.log.info("Auto-creating workfile instance...") + current_instance = CreatedInstance( + self.family, subset_name, data, self + ) + self._add_instance_to_context(current_instance) + elif ( + current_instance["asset"] != asset_name + or current_instance["task"] != task_name + ): + # Update instance context if is not the same + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + current_instance["asset"] = asset_name + current_instance["task"] = task_name + current_instance["subset"] = subset_name + + def collect_instances(self): + self.cache_subsets(self.collection_shared_data) + cached_subsets = self.collection_shared_data["maya_cached_subsets"] + for node in cached_subsets.get(self.identifier, []): + node_data = self.read_instance_node(node) + + created_instance = CreatedInstance.from_existing(node_data, self) + self._add_instance_to_context(created_instance) + + def update_instances(self, update_list): + for created_inst, _changes in update_list: + data = created_inst.data_to_store() + node = data.get("instance_node") + if not node: + node = self.create_node() + created_inst["instance_node"] = node + data = created_inst.data_to_store() + + self.imprint_instance_node(node, data) + + def create_node(self): + node = cmds.sets(empty=True, name="workfileMain") + cmds.setAttr(node + ".hiddenInOutliner", True) + return node diff --git a/openpype/hosts/maya/plugins/create/create_xgen.py b/openpype/hosts/maya/plugins/create/create_xgen.py index 70e23cf47b..eaafb0959a 100644 --- a/openpype/hosts/maya/plugins/create/create_xgen.py +++ 
b/openpype/hosts/maya/plugins/create/create_xgen.py @@ -1,10 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateXgen(plugin.Creator): +class CreateXgen(plugin.MayaCreator): """Xgen""" - name = "xgen" + identifier = "io.openpype.creators.maya.xgen" label = "Xgen" family = "xgen" icon = "pagelines" diff --git a/openpype/hosts/maya/plugins/create/create_yeti_cache.py b/openpype/hosts/maya/plugins/create/create_yeti_cache.py index e8c3203f21..ca002392d4 100644 --- a/openpype/hosts/maya/plugins/create/create_yeti_cache.py +++ b/openpype/hosts/maya/plugins/create/create_yeti_cache.py @@ -1,30 +1,39 @@ -from collections import OrderedDict - from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import NumberDef -class CreateYetiCache(plugin.Creator): +class CreateYetiCache(plugin.MayaCreator): """Output for procedural plugin nodes of Yeti """ - name = "yetiDefault" + identifier = "io.openpype.creators.maya.yeticache" label = "Yeti Cache" family = "yeticache" icon = "pagelines" - def __init__(self, *args, **kwargs): - super(CreateYetiCache, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["preroll"] = 0 + defs = [ + NumberDef("preroll", + label="Preroll", + minimum=0, + default=0, + decimals=0) + ] # Add animation data without step and handles - anim_data = lib.collect_animation_data() - anim_data.pop("step") - anim_data.pop("handleStart") - anim_data.pop("handleEnd") - self.data.update(anim_data) + defs.extend(lib.collect_animation_defs()) + remove = {"step", "handleStart", "handleEnd"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] - # Add samples - self.data["samples"] = 3 + # Add samples after frame range + defs.append( + NumberDef("samples", + label="Samples", + default=3, + decimals=0) + ) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_yeti_rig.py b/openpype/hosts/maya/plugins/create/create_yeti_rig.py index 7abe2988cd..445bcf46d8 100644 --- a/openpype/hosts/maya/plugins/create/create_yeti_rig.py +++ b/openpype/hosts/maya/plugins/create/create_yeti_rig.py @@ -6,18 +6,22 @@ from openpype.hosts.maya.api import ( ) -class CreateYetiRig(plugin.Creator): +class CreateYetiRig(plugin.MayaCreator): """Output for procedural plugin nodes ( Yeti / XGen / etc)""" + identifier = "io.openpype.creators.maya.yetirig" label = "Yeti Rig" family = "yetiRig" icon = "usb" - def process(self): + def create(self, subset_name, instance_data, pre_create_data): with lib.undo_chunk(): - instance = super(CreateYetiRig, self).process() + instance = super(CreateYetiRig, self).create(subset_name, + instance_data, + pre_create_data) + instance_node = instance.get("instance_node") self.log.info("Creating Rig instance set up ...") input_meshes = cmds.sets(name="input_SET", empty=True) - cmds.sets(input_meshes, forceElement=instance) + cmds.sets(input_meshes, forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/inventory/import_modelrender.py b/openpype/hosts/maya/plugins/inventory/import_modelrender.py index 8a7390bc8d..4db8c4f2f6 100644 --- a/openpype/hosts/maya/plugins/inventory/import_modelrender.py +++ b/openpype/hosts/maya/plugins/inventory/import_modelrender.py @@ -8,7 +8,7 @@ from openpype.client import ( from openpype.pipeline import ( InventoryAction, get_representation_context, - legacy_io, + get_current_project_name, ) from openpype.hosts.maya.api.lib import ( maintained_selection, @@ -35,7 +35,7 @@ class ImportModelRender(InventoryAction): def process(self, containers): from maya import 
cmds
 
-        project_name = legacy_io.active_project()
+        project_name = get_current_project_name()
         for container in containers:
             con_name = container["objectName"]
             nodes = []
@@ -68,7 +68,7 @@ class ImportModelRender(InventoryAction):
 
         from maya import cmds
 
-        project_name = legacy_io.active_project()
+        project_name = get_current_project_name()
         repre_docs = get_representations(
             project_name, version_ids=[version_id], fields=["_id", "name"]
         )
diff --git a/openpype/hosts/maya/plugins/inventory/import_reference.py b/openpype/hosts/maya/plugins/inventory/import_reference.py
index afb1e0e17f..3f3b85ba6c 100644
--- a/openpype/hosts/maya/plugins/inventory/import_reference.py
+++ b/openpype/hosts/maya/plugins/inventory/import_reference.py
@@ -1,7 +1,7 @@
 from maya import cmds
 
 from openpype.pipeline import InventoryAction
-from openpype.hosts.maya.api.plugin import get_reference_node
+from openpype.hosts.maya.api.lib import get_reference_node
 
 
 class ImportReference(InventoryAction):
@@ -12,7 +12,6 @@ class ImportReference(InventoryAction):
     color = "#d8d8d8"
 
     def process(self, containers):
-        references = cmds.ls(type="reference")
         for container in containers:
             if container["loader"] != "ReferenceLoader":
                 print("Not a reference, skipping")
diff --git a/openpype/hosts/maya/plugins/load/_load_animation.py b/openpype/hosts/maya/plugins/load/_load_animation.py
index 2ba5fe6b64..0781735bc4 100644
--- a/openpype/hosts/maya/plugins/load/_load_animation.py
+++ b/openpype/hosts/maya/plugins/load/_load_animation.py
@@ -1,4 +1,46 @@
 import openpype.hosts.maya.api.plugin
+import maya.cmds as cmds
+
+
+def _process_reference(file_url, name, namespace, options):
+    """Load files by referencing scene in Maya.
+
+    Args:
+        file_url (str): filepath of the objects to be loaded
+        name (str): subset name
+        namespace (str): namespace
+        options (dict): options for the reference load
+
+    Returns:
+        list: list of object nodes
+    """
+    from openpype.hosts.maya.api.lib import unique_namespace
+    # Get name from asset being loaded
+    # Assuming name is subset name from the animation, we split the number
+    # suffix from the name to ensure the namespace is unique
+    name = name.split("_")[0]
+    ext = file_url.split(".")[-1]
+    namespace = unique_namespace(
+        "{}_".format(name),
+        format="%03d",
+        suffix="_{}".format(ext)
+    )
+
+    attach_to_root = options.get("attach_to_root", True)
+    group_name = options["group_name"]
+
+    # no group shall be created
+    if not attach_to_root:
+        group_name = namespace
+
+    nodes = cmds.file(file_url,
+                      namespace=namespace,
+                      sharedReferenceFile=False,
+                      groupReference=attach_to_root,
+                      groupName=group_name,
+                      reference=True,
+                      returnNewNodes=True)
+    return nodes
 
 
 class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
@@ -16,36 +58,42 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
 
     def process_reference(self, context, name, namespace, options):
 
-        import maya.cmds as cmds
-        from openpype.hosts.maya.api.lib import unique_namespace
-
         cmds.loadPlugin("AbcImport.mll", quiet=True)
-        # Prevent identical alembic nodes from being shared
-        # Create unique namespace for the cameras
-
-        # Get name from asset being loaded
-        # Assuming name is subset name from the animation, we split the number
-        # suffix from the name to ensure the namespace is unique
-        name = name.split("_")[0]
-        namespace = unique_namespace(
-            "{}_".format(name),
-            format="%03d",
-            suffix="_abc"
-        )
-
         # hero_001 (abc)
         # asset_counter{optional}
-        file_url = self.prepare_root_value(self.fname,
+        path = self.filepath_from_context(context)
+        file_url = self.prepare_root_value(path,
                                            context["project"]["name"])
 
-        nodes = cmds.file(file_url,
-                          namespace=namespace,
-                          sharedReferenceFile=False,
-                          groupReference=True,
-                          groupName=options['group_name'],
-                          reference=True,
-                          returnNewNodes=True)
+        nodes = _process_reference(file_url, name, namespace, options)
 
         # load colorbleed ID attribute
         self[:] = nodes
 
         return nodes
+
+
+class FbxLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
+    """Loader to reference Fbx files"""
+
+    families = ["animation",
+                "camera"]
+    representations = ["fbx"]
+
+    label = "Reference animation"
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+
+    def process_reference(self, context, name, namespace, options):
+
+        cmds.loadPlugin("fbx4maya.mll", quiet=True)
+
+        path = self.filepath_from_context(context)
+        file_url = self.prepare_root_value(path,
+                                           context["project"]["name"])
+
+        nodes = _process_reference(file_url, name, namespace, options)
+
+        self[:] = nodes
+
+        return nodes
diff --git a/openpype/hosts/maya/plugins/load/actions.py b/openpype/hosts/maya/plugins/load/actions.py
index 4855f3eed0..d347ef0d08 100644
--- a/openpype/hosts/maya/plugins/load/actions.py
+++ b/openpype/hosts/maya/plugins/load/actions.py
@@ -5,8 +5,9 @@ import qargparse
 from openpype.pipeline import load
 from openpype.hosts.maya.api.lib import (
     maintained_selection,
-    unique_namespace
+    get_custom_namespace
 )
+import openpype.hosts.maya.api.plugin
 
 
 class SetFrameRangeLoader(load.LoaderPlugin):
@@ -83,7 +84,7 @@ class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
                             animationEndTime=end)
 
 
-class ImportMayaLoader(load.LoaderPlugin):
+class ImportMayaLoader(openpype.hosts.maya.api.plugin.Loader):
    """Import action for Maya (unmanaged)

    Warning:
@@ -130,22 +131,25 @@ class ImportMayaLoader(load.LoaderPlugin):
             if choice is False:
                 return
 
-        asset = context['asset']
+        custom_group_name, custom_namespace, options = \
+            self.get_custom_namespace_and_group(context, data,
+                                                "import_loader")
 
-        namespace = namespace or unique_namespace(
-            asset["name"] + "_",
-            prefix="_" if asset["name"][0].isdigit() else "",
-            suffix="_",
-        )
+        namespace = get_custom_namespace(custom_namespace)
+        if not options.get("attach_to_root", True):
+            custom_group_name = namespace
+
+        path = self.filepath_from_context(context)
 
         with maintained_selection():
-            nodes = cmds.file(self.fname,
+            nodes = cmds.file(path,
                               i=True,
                               preserveReferences=True,
                               namespace=namespace,
                               returnNewNodes=True,
-                              groupReference=True,
-                              groupName="{}:{}".format(namespace, name))
+                              groupReference=options.get("attach_to_root",
+                                                         True),
+                              groupName=custom_group_name)
 
         if data.get("clean_import", False):
             remove_attributes = ["cbId"]
diff --git a/openpype/hosts/maya/plugins/load/load_arnold_standin.py b/openpype/hosts/maya/plugins/load/load_arnold_standin.py
index 29215bc5c2..2e1329f201 100644
--- a/openpype/hosts/maya/plugins/load/load_arnold_standin.py
+++ b/openpype/hosts/maya/plugins/load/load_arnold_standin.py
@@ -17,6 +17,7 @@ from openpype.hosts.maya.api.lib import (
 )
 from openpype.hosts.maya.api.pipeline import containerise
 
+
 def is_sequence(files):
     sequence = False
     collections, remainder = clique.assemble(files, minimum_items=1)
@@ -29,11 +30,12 @@ def get_current_session_fps():
     session_fps = float(legacy_io.Session.get('AVALON_FPS', 25))
     return convert_to_maya_fps(session_fps)
 
+
 class ArnoldStandinLoader(load.LoaderPlugin):
     """Load as Arnold standin"""
 
-    families = ["ass", "animation", "model", "proxyAbc", "pointcache"]
-    representations = ["ass", "abc"]
+    families = ["ass", "animation", "model", "proxyAbc", "pointcache", "usd"]
+    representations = ["ass", "abc", "usda", "usdc", "usd"]
 
     label = "Load as Arnold standin"
     order = -5
@@ -89,11 +91,12 @@ class ArnoldStandinLoader(load.LoaderPlugin):
             cmds.parent(standin, root)
 
         # Set the standin filepath
+        repre_path = self.filepath_from_context(context)
         path, operator = self._setup_proxy(
-            standin_shape, self.fname, namespace
+            standin_shape, repre_path, namespace
         )
         cmds.setAttr(standin_shape + ".dso", path, type="string")
-        sequence = is_sequence(os.listdir(os.path.dirname(self.fname)))
+        sequence = is_sequence(os.listdir(os.path.dirname(repre_path)))
         cmds.setAttr(standin_shape + ".useFrameExtension", sequence)
 
         fps = float(version["data"].get("fps")) or get_current_session_fps()
diff --git a/openpype/hosts/maya/plugins/load/load_assembly.py b/openpype/hosts/maya/plugins/load/load_assembly.py
index 275f21be5d..0a2733e03c 100644
--- a/openpype/hosts/maya/plugins/load/load_assembly.py
+++ b/openpype/hosts/maya/plugins/load/load_assembly.py
@@ -30,7 +30,7 @@ class AssemblyLoader(load.LoaderPlugin):
         )
 
         containers = setdress.load_package(
-            filepath=self.fname,
+            filepath=self.filepath_from_context(context),
             name=name,
             namespace=namespace
         )
diff --git a/openpype/hosts/maya/plugins/load/load_audio.py b/openpype/hosts/maya/plugins/load/load_audio.py
index 9e7fd96bdb..90cadb31b1 100644
--- a/openpype/hosts/maya/plugins/load/load_audio.py
+++ b/openpype/hosts/maya/plugins/load/load_audio.py
@@ -1,12 +1,6 @@
 from maya import cmds, mel
 
-from openpype.client import (
-    get_asset_by_id,
-    get_subset_by_id,
-    get_version_by_id,
-)
 from openpype.pipeline import (
-    legacy_io,
     load,
     get_representation_path,
 )
@@ -18,7 +12,7 @@ class AudioLoader(load.LoaderPlugin):
     """Specific loader of audio."""
 
     families = ["audio"]
-    label = "Import audio"
+    label = "Load audio"
     representations = ["wav"]
     icon = "volume-up"
     color = "orange"
@@ -27,10 +21,10 @@ class AudioLoader(load.LoaderPlugin):
         start_frame = cmds.playbackOptions(query=True, min=True)
         sound_node = cmds.sound(
-            file=context["representation"]["data"]["path"], offset=start_frame
+            file=self.filepath_from_context(context), offset=start_frame
         )
         cmds.timeControl(
-            mel.eval("$tmpVar=$gPlayBackSlider"),
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
             edit=True,
             sound=sound_node,
             displaySound=True
         )
@@ -59,32 +53,50 @@ class AudioLoader(load.LoaderPlugin):
         assert audio_nodes is not None, "Audio node not found."
         audio_node = audio_nodes[0]
 
+        current_sound = cmds.timeControl(
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
+            query=True,
+            sound=True
+        )
+        activate_sound = current_sound == audio_node
+
         path = get_representation_path(representation)
-        cmds.setAttr("{}.filename".format(audio_node), path, type="string")
+
+        cmds.sound(
+            audio_node,
+            edit=True,
+            file=path
+        )
+
+        # The source start + end does not automatically update itself to the
+        # length of the new audio file, even though maya does do that when
+        # creating a new audio node. So to update we compute it manually.
+        # This would however override any source start and source end a user
+        # might have done on the original audio node after load.
+ audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node)) + audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node)) + duration_in_seconds = audio_frame_count / audio_sample_rate + fps = mel.eval('currentTimeUnitToFPS()') # workfile FPS + source_start = 0 + source_end = (duration_in_seconds * fps) + cmds.setAttr("{}.sourceStart".format(audio_node), source_start) + cmds.setAttr("{}.sourceEnd".format(audio_node), source_end) + + if activate_sound: + # maya by default deactivates it from timeline on file change + cmds.timeControl( + mel.eval("$gPlayBackSlider=$gPlayBackSlider"), + edit=True, + sound=audio_node, + displaySound=True + ) + cmds.setAttr( container["objectName"] + ".representation", str(representation["_id"]), type="string" ) - # Set frame range. - project_name = legacy_io.active_project() - version = get_version_by_id( - project_name, representation["parent"], fields=["parent"] - ) - subset = get_subset_by_id( - project_name, version["parent"], fields=["parent"] - ) - asset = get_asset_by_id( - project_name, subset["parent"], fields=["parent"] - ) - - source_start = 1 - asset["data"]["frameStart"] - source_end = asset["data"]["frameEnd"] - - cmds.setAttr("{}.sourceStart".format(audio_node), source_start) - cmds.setAttr("{}.sourceEnd".format(audio_node), source_end) - def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/maya/plugins/load/load_gpucache.py b/openpype/hosts/maya/plugins/load/load_gpucache.py index 794b21eb5d..344f2fd060 100644 --- a/openpype/hosts/maya/plugins/load/load_gpucache.py +++ b/openpype/hosts/maya/plugins/load/load_gpucache.py @@ -37,7 +37,8 @@ class GpuCacheLoader(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get('model') if c is not None: @@ -56,7 +57,8 @@ class GpuCacheLoader(load.LoaderPlugin): name="{0}Shape".format(transform_name)) # Set the cache filepath - cmds.setAttr(cache + '.cacheFileName', self.fname, type="string") + path = self.filepath_from_context(context) + cmds.setAttr(cache + '.cacheFileName', path, type="string") cmds.setAttr(cache + '.cacheGeomPath', "|", type="string") # root # Lock parenting of the transform and cache diff --git a/openpype/hosts/maya/plugins/load/load_image.py b/openpype/hosts/maya/plugins/load/load_image.py index 552bcc33af..3b1f5442ce 100644 --- a/openpype/hosts/maya/plugins/load/load_image.py +++ b/openpype/hosts/maya/plugins/load/load_image.py @@ -4,7 +4,8 @@ import copy from openpype.lib import EnumDef from openpype.pipeline import ( load, - get_representation_context + get_representation_context, + get_current_host_name, ) from openpype.pipeline.load.utils import get_representation_path_from_context from openpype.pipeline.colorspace import ( @@ -266,7 +267,7 @@ class FileNodeLoader(load.LoaderPlugin): # Assume colorspace from filepath based on project settings project_name = context["project"]["name"] - host_name = os.environ.get("AVALON_APP") + host_name = get_current_host_name() project_settings = get_project_settings(project_name) config_data = get_imageio_config( diff --git a/openpype/hosts/maya/plugins/load/load_image_plane.py b/openpype/hosts/maya/plugins/load/load_image_plane.py index bf13708e9b..117f4f4202 100644 --- 
a/openpype/hosts/maya/plugins/load/load_image_plane.py +++ b/openpype/hosts/maya/plugins/load/load_image_plane.py @@ -6,9 +6,9 @@ from openpype.client import ( get_version_by_id, ) from openpype.pipeline import ( - legacy_io, load, - get_representation_path + get_representation_path, + get_current_project_name, ) from openpype.hosts.maya.api.pipeline import containerise from openpype.hosts.maya.api.lib import ( @@ -221,7 +221,7 @@ class ImagePlaneLoader(load.LoaderPlugin): type="string") # Set frame range. - project_name = legacy_io.active_project() + project_name = get_current_project_name() version = get_version_by_id( project_name, representation["parent"], fields=["parent"] ) diff --git a/openpype/hosts/maya/plugins/load/load_look.py b/openpype/hosts/maya/plugins/load/load_look.py index 8f3e017658..20617c77bf 100644 --- a/openpype/hosts/maya/plugins/load/load_look.py +++ b/openpype/hosts/maya/plugins/load/load_look.py @@ -7,14 +7,14 @@ from qtpy import QtWidgets from openpype.client import get_representation_by_name from openpype.pipeline import ( - legacy_io, + get_current_project_name, get_representation_path, ) import openpype.hosts.maya.api.plugin from openpype.hosts.maya.api import lib from openpype.widgets.message_window import ScrollMessageBox -from openpype.hosts.maya.api.plugin import get_reference_node +from openpype.hosts.maya.api.lib import get_reference_node class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): @@ -29,11 +29,13 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): color = "orange" def process_reference(self, context, name, namespace, options): - import maya.cmds as cmds + from maya import cmds with lib.maintained_selection(): - file_url = self.prepare_root_value(self.fname, - context["project"]["name"]) + file_url = self.prepare_root_value( + file_url=self.filepath_from_context(context), + project_name=context["project"]["name"] + ) nodes = cmds.file(file_url, namespace=namespace, reference=True, @@ -76,7 +78,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): shader_nodes = cmds.ls(members, type='shadingEngine') nodes = set(self._get_nodes_with_shader(shader_nodes)) - project_name = legacy_io.active_project() + project_name = get_current_project_name() json_representation = get_representation_by_name( project_name, "json", representation["parent"] ) @@ -113,8 +115,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): # region compute lookup nodes_by_id = defaultdict(list) - for n in nodes: - nodes_by_id[lib.get_id(n)].append(n) + for node in nodes: + nodes_by_id[lib.get_id(node)].append(node) lib.apply_attributes(attributes, nodes_by_id) def _get_nodes_with_shader(self, shader_nodes): @@ -125,14 +127,16 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): Returns node names """ - import maya.cmds as cmds + from maya import cmds - nodes_list = [] for shader in shader_nodes: - connections = cmds.listConnections(cmds.listHistory(shader, f=1), + future = cmds.listHistory(shader, future=True) + connections = cmds.listConnections(future, type='mesh') if connections: - for connection in connections: - nodes_list.extend(cmds.listRelatives(connection, - shapes=True)) - return nodes_list + # Ensure unique entries only to optimize query and results + connections = list(set(connections)) + return cmds.listRelatives(connections, + shapes=True, + fullPath=True) or [] + return [] diff --git a/openpype/hosts/maya/plugins/load/load_matchmove.py 
b/openpype/hosts/maya/plugins/load/load_matchmove.py index ee3332bd09..46d1be8300 100644 --- a/openpype/hosts/maya/plugins/load/load_matchmove.py +++ b/openpype/hosts/maya/plugins/load/load_matchmove.py @@ -1,7 +1,6 @@ from maya import mel from openpype.pipeline import load - class MatchmoveLoader(load.LoaderPlugin): """ This will run matchmove script to create track in scene. @@ -18,11 +17,12 @@ class MatchmoveLoader(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - if self.fname.lower().endswith(".py"): - exec(open(self.fname).read()) + path = self.filepath_from_context(context) + if path.lower().endswith(".py"): + exec(open(path).read()) - elif self.fname.lower().endswith(".mel"): - mel.eval('source "{}"'.format(self.fname)) + elif path.lower().endswith(".mel"): + mel.eval('source "{}"'.format(path)) else: self.log.error("Unsupported script type") diff --git a/openpype/hosts/maya/plugins/load/load_maya_usd.py b/openpype/hosts/maya/plugins/load/load_maya_usd.py new file mode 100644 index 0000000000..2fb1a625a5 --- /dev/null +++ b/openpype/hosts/maya/plugins/load/load_maya_usd.py @@ -0,0 +1,108 @@ +# -*- coding: utf-8 -*- +import maya.cmds as cmds + +from openpype.pipeline import ( + load, + get_representation_path, +) +from openpype.pipeline.load import get_representation_path_from_context +from openpype.hosts.maya.api.lib import ( + namespaced, + unique_namespace +) +from openpype.hosts.maya.api.pipeline import containerise + + +class MayaUsdLoader(load.LoaderPlugin): + """Read USD data in a Maya USD Proxy""" + + families = ["model", "usd", "pointcache", "animation"] + representations = ["usd", "usda", "usdc", "usdz", "abc"] + + label = "Load USD to Maya Proxy" + order = -1 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, options=None): + asset = context['asset']['name'] + namespace = namespace or unique_namespace( + asset + "_", + prefix="_" if asset[0].isdigit() else "", + suffix="_", + ) + + # Make sure we can load the plugin + cmds.loadPlugin("mayaUsdPlugin", quiet=True) + + path = get_representation_path_from_context(context) + + # Create the shape + cmds.namespace(addNamespace=namespace) + with namespaced(namespace, new=False): + transform = cmds.createNode("transform", + name=name, + skipSelect=True) + proxy = cmds.createNode('mayaUsdProxyShape', + name="{}Shape".format(name), + parent=transform, + skipSelect=True) + + cmds.connectAttr("time1.outTime", "{}.time".format(proxy)) + cmds.setAttr("{}.filePath".format(proxy), path, type="string") + + # By default, we force the proxy to not use a shared stage because + # when doing so Maya will quite easily allow to save into the + # loaded usd file. Since we are loading published files we want to + # avoid altering them. Unshared stages also save their edits into + # the workfile as an artist might expect it to do. 
+ cmds.setAttr("{}.shareStage".format(proxy), False) + # cmds.setAttr("{}.shareStage".format(proxy), lock=True) + + nodes = [transform, proxy] + self[:] = nodes + + return containerise( + name=name, + namespace=namespace, + nodes=nodes, + context=context, + loader=self.__class__.__name__) + + def update(self, container, representation): + # type: (dict, dict) -> None + """Update container with specified representation.""" + node = container['objectName'] + assert cmds.objExists(node), "Missing container" + + members = cmds.sets(node, query=True) or [] + shapes = cmds.ls(members, type="mayaUsdProxyShape") + + path = get_representation_path(representation) + for shape in shapes: + cmds.setAttr("{}.filePath".format(shape), path, type="string") + + cmds.setAttr("{}.representation".format(node), + str(representation["_id"]), + type="string") + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + # type: (dict) -> None + """Remove loaded container.""" + # Delete container and its contents + if cmds.objExists(container['objectName']): + members = cmds.sets(container['objectName'], query=True) or [] + cmds.delete([container['objectName']] + members) + + # Remove the namespace, if empty + namespace = container['namespace'] + if cmds.namespace(exists=namespace): + members = cmds.namespaceInfo(namespace, listNamespace=True) + if not members: + cmds.namespace(removeNamespace=namespace) + else: + self.log.warning("Namespace not deleted because it " + "still has members: %s", namespace) diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py index 9e0d38df46..cad42b55f9 100644 --- a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py @@ -36,17 +36,17 @@ class MultiverseUsdLoader(load.LoaderPlugin): suffix="_", ) + path = self.filepath_from_context(context) + # Make sure we can load the plugin cmds.loadPlugin("MultiverseForMaya", quiet=True) import multiverse # Create the shape - shape = None - transform = None with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - shape = multiverse.CreateUsdCompound(self.fname) + shape = multiverse.CreateUsdCompound(path) transform = cmds.listRelatives( shape, parent=True, fullPath=True)[0] diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py index 8a25508ac2..be8d78607b 100644 --- a/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py @@ -50,9 +50,10 @@ class MultiverseUsdOverLoader(load.LoaderPlugin): cmds.loadPlugin("MultiverseForMaya", quiet=True) import multiverse + path = self.filepath_from_context(context) nodes = current_usd with maintained_selection(): - multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname) + multiverse.AddUsdCompoundAssetPath(current_usd[0], path) namespace = current_usd[0].split("|")[1].split(":")[0] diff --git a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py index c288e23ded..b3fbfb2ed9 100644 --- a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py @@ -46,18 +46,19 @@ class RedshiftProxyLoader(load.LoaderPlugin): # Ensure Redshift for Maya is loaded. 
cmds.loadPlugin("redshift4maya", quiet=True) + path = self.filepath_from_context(context) with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - nodes, group_node = self.create_rs_proxy( - name, self.fname) + nodes, group_node = self.create_rs_proxy(name, path) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index 74ca27ff3c..4b704fa706 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -101,7 +101,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): "camerarig", "staticMesh", "skeletalMesh", - "mvLook"] + "mvLook", + "matchmove"] representations = ["ma", "abc", "fbx", "mb"] @@ -118,15 +119,20 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): except ValueError: family = "model" + project_name = context["project"]["name"] # True by default to keep legacy behaviours attach_to_root = options.get("attach_to_root", True) group_name = options["group_name"] + # no group shall be created + if not attach_to_root: + group_name = namespace + + path = self.filepath_from_context(context) with maintained_selection(): cmds.loadPlugin("AbcImport.mll", quiet=True) - file_url = self.prepare_root_value(self.fname, - context["project"]["name"]) + file_url = self.prepare_root_value(path, project_name) nodes = cmds.file(file_url, namespace=namespace, sharedReferenceFile=False, @@ -147,11 +153,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): if current_namespace != ":": group_name = current_namespace + ":" + group_name - group_name = "|" + group_name - self[:] = new_nodes if attach_to_root: + group_name = "|" + group_name roots = cmds.listRelatives(group_name, children=True, fullPath=True) or [] @@ -162,7 +167,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): with parent_nodes(roots, parent=None): cmds.xform(group_name, zeroTransformPivots=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + settings = get_project_settings(project_name) display_handle = settings['maya']['load'].get( 'reference_loader', {} @@ -201,9 +206,14 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): cmds.setAttr("{}.selectHandleZ".format(group_name), cz) if family == "rig": - self._post_process_rig(name, namespace, context, options) + self._post_process_rig(namespace, context, options) else: if "translate" in options: + if not attach_to_root and new_nodes: + root_nodes = cmds.ls(new_nodes, assemblies=True, + long=True) + # we assume only a single root is ever loaded + group_name = root_nodes[0] cmds.setAttr("{}.translate".format(group_name), *options["translate"]) return new_nodes @@ -220,7 +230,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): members = get_container_members(container) self._lock_camera_transforms(members) - def _post_process_rig(self, name, namespace, context, options): + def _post_process_rig(self, namespace, context, options): + nodes = self[:] create_rig_animation_instance( nodes, context, namespace, options=options, log=self.log diff --git 
a/openpype/hosts/maya/plugins/load/load_rendersetup.py b/openpype/hosts/maya/plugins/load/load_rendersetup.py index 7a2d8b1002..8b85f11958 100644 --- a/openpype/hosts/maya/plugins/load/load_rendersetup.py +++ b/openpype/hosts/maya/plugins/load/load_rendersetup.py @@ -43,8 +43,9 @@ class RenderSetupLoader(load.LoaderPlugin): prefix="_" if asset[0].isdigit() else "", suffix="_", ) - self.log.info(">>> loading json [ {} ]".format(self.fname)) - with open(self.fname, "r") as file: + path = self.filepath_from_context(context) + self.log.info(">>> loading json [ {} ]".format(path)) + with open(path, "r") as file: renderSetup.instance().decode( json.load(file), renderSetup.DECODE_AND_OVERWRITE, None) diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py index 8a386cecfd..0f674a69c4 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py @@ -48,7 +48,8 @@ class LoadVDBtoArnold(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -65,8 +66,9 @@ class LoadVDBtoArnold(load.LoaderPlugin): name="{}Shape".format(root), parent=root) + path = self.filepath_from_context(context) self._set_path(grid_node, - path=self.fname, + path=path, representation=context["representation"]) # Lock the shape node so the user can't delete the transform/shape diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py index 1f02321dc8..28cfdc7129 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py @@ -67,7 +67,8 @@ class LoadVDBtoRedShift(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.createNode("transform", name=label) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -85,7 +86,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin): parent=root) self._set_path(volume_node, - path=self.fname, + path=self.filepath_from_context(context), representation=context["representation"]) nodes = [root, volume_node] diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py index 9267c59c02..46f2dd674d 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py @@ -88,8 +88,9 @@ class LoadVDBtoVRay(load.LoaderPlugin): from openpype.hosts.maya.api.lib import unique_namespace from openpype.hosts.maya.api.pipeline import containerise - assert os.path.exists(self.fname), ( - "Path does not exist: %s" % self.fname + path = self.filepath_from_context(context) + assert os.path.exists(path), ( + "Path does not exist: %s" % path ) try: @@ -126,7 +127,8 @@ class LoadVDBtoVRay(load.LoaderPlugin): label = "{}:{}_VDB".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = 
settings['maya']['load']['colors'] c = colors.get(family) @@ -146,7 +148,7 @@ class LoadVDBtoVRay(load.LoaderPlugin): cmds.connectAttr("time1.outTime", grid_node + ".currentTime") # Set path - self._set_path(grid_node, self.fname, show_preset_popup=True) + self._set_path(grid_node, path, show_preset_popup=True) # Lock the shape node so the user can't delete the transform/shape # as if it was referenced diff --git a/openpype/hosts/maya/plugins/load/load_vrayproxy.py b/openpype/hosts/maya/plugins/load/load_vrayproxy.py index 64184f9e7b..9d926a33ed 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayproxy.py +++ b/openpype/hosts/maya/plugins/load/load_vrayproxy.py @@ -12,9 +12,9 @@ import maya.cmds as cmds from openpype.client import get_representation_by_name from openpype.settings import get_project_settings from openpype.pipeline import ( - legacy_io, load, - get_representation_path + get_current_project_name, + get_representation_path, ) from openpype.hosts.maya.api.lib import ( maintained_selection, @@ -53,7 +53,9 @@ class VRayProxyLoader(load.LoaderPlugin): family = "vrayproxy" # get all representations for this version - self.fname = self._get_abc(context["version"]["_id"]) or self.fname + filename = self._get_abc(context["version"]["_id"]) + if not filename: + filename = self.filepath_from_context(context) asset_name = context['asset']["name"] namespace = namespace or unique_namespace( @@ -69,14 +71,15 @@ class VRayProxyLoader(load.LoaderPlugin): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): nodes, group_node = self.create_vray_proxy( - name, filename=self.fname) + name, filename=filename) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: @@ -185,12 +188,12 @@ class VRayProxyLoader(load.LoaderPlugin): """ self.log.debug( "Looking for abc in published representations of this version.") - project_name = legacy_io.active_project() + project_name = get_current_project_name() abc_rep = get_representation_by_name(project_name, "abc", version_id) if abc_rep: self.log.debug("Found, we'll link alembic to vray proxy.") file_name = get_representation_path(abc_rep) - self.log.debug("File: {}".format(self.fname)) + self.log.debug("File: {}".format(file_name)) return file_name return "" diff --git a/openpype/hosts/maya/plugins/load/load_vrayscene.py b/openpype/hosts/maya/plugins/load/load_vrayscene.py index d87992f9a7..3a2c3a47f2 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayscene.py +++ b/openpype/hosts/maya/plugins/load/load_vrayscene.py @@ -46,15 +46,18 @@ class VRaySceneLoader(load.LoaderPlugin): with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - nodes, root_node = self.create_vray_scene(name, - filename=self.fname) + nodes, root_node = self.create_vray_scene( + name, + filename=self.filepath_from_context(context) + ) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: diff --git a/openpype/hosts/maya/plugins/load/load_xgen.py b/openpype/hosts/maya/plugins/load/load_xgen.py index 16f2e8e842..2ad6ad55bc 
100644 --- a/openpype/hosts/maya/plugins/load/load_xgen.py +++ b/openpype/hosts/maya/plugins/load/load_xgen.py @@ -48,12 +48,11 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): return maya_filepath = self.prepare_root_value( - self.fname, context["project"]["name"] + file_url=self.filepath_from_context(context), + project_name=context["project"]["name"] ) # Reference xgen. Xgen does not like being referenced in under a group. - new_nodes = [] - with maintained_selection(): nodes = cmds.file( maya_filepath, diff --git a/openpype/hosts/maya/plugins/load/load_yeti_cache.py b/openpype/hosts/maya/plugins/load/load_yeti_cache.py index 5ba381050a..4a11ea9a2c 100644 --- a/openpype/hosts/maya/plugins/load/load_yeti_cache.py +++ b/openpype/hosts/maya/plugins/load/load_yeti_cache.py @@ -15,6 +15,16 @@ from openpype.hosts.maya.api import lib from openpype.hosts.maya.api.pipeline import containerise +# Do not reset these values on update but only apply on first load +# to preserve any potential local overrides +SKIP_UPDATE_ATTRS = { + "displayOutput", + "viewportDensity", + "viewportWidth", + "viewportLength", +} + + def set_attribute(node, attr, value): """Wrapper of set attribute which ignores None values""" if value is None: @@ -60,15 +70,17 @@ class YetiCacheLoader(load.LoaderPlugin): cmds.loadPlugin("pgYetiMaya", quiet=True) # Create Yeti cache nodes according to settings - settings = self.read_settings(self.fname) + path = self.filepath_from_context(context) + settings = self.read_settings(path) nodes = [] for node in settings["nodes"]: nodes.extend(self.create_node(namespace, node)) group_name = "{}:{}".format(namespace, name) group_node = cmds.group(nodes, name=group_name) + project_name = context["project"]["name"] - settings = get_project_settings(os.environ['AVALON_PROJECT']) + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -203,6 +215,8 @@ class YetiCacheLoader(load.LoaderPlugin): yeti_node = yeti_nodes[0] for attr, value in node_settings["attrs"].items(): + if attr in SKIP_UPDATE_ATTRS: + continue set_attribute(attr, value, yeti_node) cmds.setAttr("{}.representation".format(container_node), @@ -309,7 +323,6 @@ class YetiCacheLoader(load.LoaderPlugin): # Update attributes with defaults attributes = node_settings["attrs"] attributes.update({ - "viewportDensity": 0.1, "verbosity": 2, "fileMode": 1, @@ -319,6 +332,9 @@ class YetiCacheLoader(load.LoaderPlugin): "visibleInRefractions": True }) + if "viewportDensity" not in attributes: + attributes["viewportDensity"] = 0.1 + # Apply attributes to pgYetiMaya node for attr, value in attributes.items(): set_attribute(attr, value, yeti_node) diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py index b8066871b0..6cfcffe27d 100644 --- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py +++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py @@ -19,17 +19,25 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): def process_reference( self, context, name=None, namespace=None, options=None ): - group_name = options['group_name'] + path = self.filepath_from_context(context) + + attach_to_root = options.get("attach_to_root", True) + group_name = options["group_name"] + + # no group shall be created + if not attach_to_root: + group_name = namespace + with lib.maintained_selection(): file_url = self.prepare_root_value( - self.fname, context["project"]["name"] + path, context["project"]["name"] 
) nodes = cmds.file( file_url, namespace=namespace, reference=True, returnNewNodes=True, - groupReference=True, + groupReference=attach_to_root, groupName=group_name ) diff --git a/openpype/hosts/maya/plugins/publish/collect_animation.py b/openpype/hosts/maya/plugins/publish/collect_animation.py index 8f523f770b..26a0a01c8b 100644 --- a/openpype/hosts/maya/plugins/publish/collect_animation.py +++ b/openpype/hosts/maya/plugins/publish/collect_animation.py @@ -58,17 +58,3 @@ class CollectAnimationOutputGeometry(pyblish.api.InstancePlugin): if instance.data.get("farm"): instance.data["families"].append("publish.farm") - # Collect user defined attributes. - if not instance.data.get("includeUserDefinedAttributes", False): - return - - user_defined_attributes = set() - for node in hierarchy: - attrs = cmds.listAttr(node, userDefined=True) or list() - shapes = cmds.listRelatives(node, shapes=True) or list() - for shape in shapes: - attrs.extend(cmds.listAttr(shape, userDefined=True) or list()) - - user_defined_attributes.update(attrs) - - instance.data["userDefinedAttributes"] = list(user_defined_attributes) diff --git a/openpype/hosts/maya/plugins/publish/collect_assembly.py b/openpype/hosts/maya/plugins/publish/collect_assembly.py index 2aef9ab908..f64d6bee44 100644 --- a/openpype/hosts/maya/plugins/publish/collect_assembly.py +++ b/openpype/hosts/maya/plugins/publish/collect_assembly.py @@ -35,14 +35,11 @@ class CollectAssembly(pyblish.api.InstancePlugin): # Get all content from the instance instance_lookup = set(cmds.ls(instance, type="transform", long=True)) data = defaultdict(list) - self.log.info(instance_lookup) hierarchy_nodes = [] for container in containers: - self.log.info(container) root = lib.get_container_transforms(container, root=True) - self.log.info(root) if not root or root not in instance_lookup: continue diff --git a/openpype/hosts/maya/plugins/publish/collect_current_file.py b/openpype/hosts/maya/plugins/publish/collect_current_file.py new file mode 100644 index 0000000000..c7105a7f3c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_current_file.py @@ -0,0 +1,16 @@ + +import pyblish.api + +from maya import cmds + + +class CollectCurrentFile(pyblish.api.ContextPlugin): + """Inject the current working file.""" + + order = pyblish.api.CollectorOrder - 0.4 + label = "Maya Current File" + hosts = ['maya'] + + def process(self, context): + """Inject the current working file""" + context.data['currentFile'] = cmds.file(query=True, sceneName=True) diff --git a/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py b/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py new file mode 100644 index 0000000000..aef8765e9c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +from maya import cmds # noqa +import pyblish.api +from openpype.pipeline import OptionalPyblishPluginMixin + + +class CollectFbxAnimation(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Collect Animated Rig Data for FBX Extractor.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Fbx Animation" + hosts = ["maya"] + families = ["animation"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + skeleton_sets = [ + i for i in instance + if i.endswith("skeletonAnim_SET") + ] + if not skeleton_sets: + return + + instance.data["families"].append("animation.fbx") + instance.data["animated_skeleton"] = [] + for skeleton_set in skeleton_sets: + 
skeleton_content = cmds.sets(skeleton_set, query=True) + self.log.debug( + "Collected animated skeleton data: {}".format( + skeleton_content + )) + if skeleton_content: + instance.data["animated_skeleton"] = skeleton_content diff --git a/openpype/hosts/maya/plugins/publish/collect_history.py b/openpype/hosts/maya/plugins/publish/collect_history.py index 71f0169971..d4e8c6298b 100644 --- a/openpype/hosts/maya/plugins/publish/collect_history.py +++ b/openpype/hosts/maya/plugins/publish/collect_history.py @@ -18,7 +18,6 @@ class CollectMayaHistory(pyblish.api.InstancePlugin): hosts = ["maya"] label = "Maya History" families = ["rig"] - verbose = False def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/collect_inputs.py b/openpype/hosts/maya/plugins/publish/collect_inputs.py index 895c92762b..30ed21da9c 100644 --- a/openpype/hosts/maya/plugins/publish/collect_inputs.py +++ b/openpype/hosts/maya/plugins/publish/collect_inputs.py @@ -172,7 +172,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin): """Collects inputs from nodes in renderlayer, incl. shaders + camera""" # Get the renderlayer - renderlayer = instance.data.get("setMembers") + renderlayer = instance.data.get("renderlayer") if renderlayer == "defaultRenderLayer": # Assume all loaded containers in the scene are inputs diff --git a/openpype/hosts/maya/plugins/publish/collect_instances.py b/openpype/hosts/maya/plugins/publish/collect_instances.py index 74bdc11a2c..5058da3d01 100644 --- a/openpype/hosts/maya/plugins/publish/collect_instances.py +++ b/openpype/hosts/maya/plugins/publish/collect_instances.py @@ -1,12 +1,11 @@ from maya import cmds import pyblish.api -import json from openpype.hosts.maya.api.lib import get_all_children -class CollectInstances(pyblish.api.ContextPlugin): - """Gather instances by objectSet and pre-defined attribute +class CollectNewInstances(pyblish.api.InstancePlugin): + """Gather members for instances and pre-defined attribute This collector takes into account assets that are associated with an objectSet and marked with a unique identifier; @@ -25,134 +24,72 @@ class CollectInstances(pyblish.api.ContextPlugin): """ - label = "Collect Instances" + label = "Collect New Instance Data" order = pyblish.api.CollectorOrder hosts = ["maya"] - def process(self, context): + valid_empty_families = {"workfile", "renderlayer"} - objectset = cmds.ls("*.id", long=True, type="objectSet", - recursive=True, objectsOnly=True) + def process(self, instance): - context.data['objectsets'] = objectset - for objset in objectset: + objset = instance.data.get("instance_node") + if not objset: + self.log.debug("Instance has no `instance_node` data") - if not cmds.attributeQuery("id", node=objset, exists=True): - continue - - id_attr = "{}.id".format(objset) - if cmds.getAttr(id_attr) != "pyblish.avalon.instance": - continue - - # The developer is responsible for specifying - # the family of each instance. 
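Editor's note on the collector rewrite around this hunk: the legacy context-wide scan for `*.id` object sets gives way to a per-instance collector that only resolves members for instances the new publisher's Creator already made. A minimal sketch of that shape, assuming only `maya.cmds` and `pyblish.api` (the class name and label are illustrative, not part of this change):

```python
from maya import cmds
import pyblish.api


class CollectMembersSketch(pyblish.api.InstancePlugin):
    """Illustrative only: fill in members for a pre-created instance."""

    order = pyblish.api.CollectorOrder
    hosts = ["maya"]
    label = "Collect Members (sketch)"

    def process(self, instance):
        # The Creator stored the objectSet name on the instance, so the
        # collector no longer scans the whole scene for "*.id" sets.
        objset = instance.data.get("instance_node")
        if not objset:
            return

        # Long names keep later membership checks unambiguous.
        members = cmds.ls(cmds.sets(objset, query=True) or [], long=True)
        instance[:] = members
        instance.data["setMembers"] = members
```

The real plugin below additionally merges creator attributes and normalizes frame data; the sketch keeps only the member resolution.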
- has_family = cmds.attributeQuery("family", - node=objset, - exists=True) - assert has_family, "\"%s\" was missing a family" % objset - - members = cmds.sets(objset, query=True) - if members is None: - self.log.warning("Skipped empty instance: \"%s\" " % objset) - continue - - self.log.info("Creating instance for {}".format(objset)) - - data = dict() - - # Apply each user defined attribute as data - for attr in cmds.listAttr(objset, userDefined=True) or list(): - try: - value = cmds.getAttr("%s.%s" % (objset, attr)) - except Exception: - # Some attributes cannot be read directly, - # such as mesh and color attributes. These - # are considered non-essential to this - # particular publishing pipeline. - value = None - data[attr] = value - - # temporarily translation of `active` to `publish` till issue has - # been resolved, https://github.com/pyblish/pyblish-base/issues/307 - if "active" in data: - data["publish"] = data["active"] + # TODO: We might not want to do this in the future + # Merge creator attributes into instance.data just so backwards compatible + # code still runs as expected + creator_attributes = instance.data.get("creator_attributes", {}) + if creator_attributes: + instance.data.update(creator_attributes) + members = cmds.sets(objset, query=True) or [] + if members: # Collect members members = cmds.ls(members, long=True) or [] dag_members = cmds.ls(members, type="dagNode", long=True) children = get_all_children(dag_members) children = cmds.ls(children, noIntermediate=True, long=True) - - parents = [] - if data.get("includeParentHierarchy", True): - # If `includeParentHierarchy` then include the parents - # so they will also be picked up in the instance by validators - parents = self.get_all_parents(members) + parents = ( + self.get_all_parents(members) + if creator_attributes.get("includeParentHierarchy", True) + else [] + ) members_hierarchy = list(set(members + children + parents)) - if 'families' not in data: - data['families'] = [data.get('family')] - - # Create the instance - instance = context.create_instance(objset) instance[:] = members_hierarchy - instance.data["objset"] = objset - # Store the exact members of the object set - instance.data["setMembers"] = members + elif instance.data["family"] not in self.valid_empty_families: + self.log.warning("Empty instance: \"%s\" " % objset) + # Store the exact members of the object set + instance.data["setMembers"] = members - # Define nice label - name = cmds.ls(objset, long=False)[0] # use short name - label = "{0} ({1})".format(name, data["asset"]) + # TODO: This might make more sense as a separate collector + # Convert frame values to integers + for attr_name in ( + "handleStart", "handleEnd", "frameStart", "frameEnd", + ): + value = instance.data.get(attr_name) + if value is not None: + instance.data[attr_name] = int(value) - # Convert frame values to integers - for attr_name in ( - "handleStart", "handleEnd", "frameStart", "frameEnd", - ): - value = data.get(attr_name) - if value is not None: - data[attr_name] = int(value) + # Append start frame and end frame to label if present + if "frameStart" in instance.data and "frameEnd" in instance.data: + # Take handles from context if not set locally on the instance + for key in ["handleStart", "handleEnd"]: + if key not in instance.data: + value = instance.context.data[key] + if value is not None: + value = int(value) + instance.data[key] = value - # Append start frame and end frame to label if present - if "frameStart" in data and "frameEnd" in data: - # Take handles from
context if not set locally on the instance - for key in ["handleStart", "handleEnd"]: - if key not in data: - value = context.data[key] - if value is not None: - value = int(value) - data[key] = value - - data["frameStartHandle"] = int( - data["frameStart"] - data["handleStart"] - ) - data["frameEndHandle"] = int( - data["frameEnd"] + data["handleEnd"] - ) - - label += " [{0}-{1}]".format( - data["frameStartHandle"], data["frameEndHandle"] - ) - - instance.data["label"] = label - instance.data.update(data) - self.log.debug("{}".format(instance.data)) - - # Produce diagnostic message for any graphical - # user interface interested in visualising it. - self.log.info("Found: \"%s\" " % instance.data["name"]) - self.log.debug( - "DATA: {} ".format(json.dumps(instance.data, indent=4))) - - def sort_by_family(instance): - """Sort by family""" - return instance.data.get("families", instance.data.get("family")) - - # Sort/grouped by family (preserving local index) - context[:] = sorted(context, key=sort_by_family) - - return context + instance.data["frameStartHandle"] = int( + instance.data["frameStart"] - instance.data["handleStart"] + ) + instance.data["frameEndHandle"] = int( + instance.data["frameEnd"] + instance.data["handleEnd"] + ) def get_all_parents(self, nodes): """Get all parents by using string operations (optimization) diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py index 287ddc228b..db042963c6 100644 --- a/openpype/hosts/maya/plugins/publish/collect_look.py +++ b/openpype/hosts/maya/plugins/publish/collect_look.py @@ -17,11 +17,6 @@ SHAPE_ATTRS = ["castsShadows", "visibleInRefractions", "doubleSided", "opposite"] - -RENDERER_NODE_TYPES = [ - # redshift - "RedshiftMeshParameters" -] SHAPE_ATTRS = set(SHAPE_ATTRS) @@ -36,12 +31,13 @@ def get_pxr_multitexture_file_attrs(node): FILE_NODES = { + # maya "file": "fileTextureName", - + # arnold (mtoa) "aiImage": "filename", - + # redshift "RedshiftNormalMap": "tex0", - + # renderman "PxrBump": "filename", "PxrNormalMap": "filename", "PxrMultiTexture": get_pxr_multitexture_file_attrs, @@ -49,6 +45,22 @@ FILE_NODES = { "PxrTexture": "filename" } +# Keep only node types that actually exist +all_node_types = set(cmds.allNodeTypes()) +for node_type in list(FILE_NODES.keys()): + if node_type not in all_node_types: + FILE_NODES.pop(node_type) +del all_node_types + +# Cache pixar dependency node types so we can perform a type lookup against it +PXR_NODES = set() +if cmds.pluginInfo("RenderMan_for_Maya", query=True, loaded=True): + PXR_NODES = set( + cmds.pluginInfo("RenderMan_for_Maya", + query=True, + dependNode=True) + ) + def get_attributes(dictionary, attr, node=None): # type: (dict, str, str) -> list @@ -232,20 +244,17 @@ def get_file_node_files(node): """ paths = get_file_node_paths(node) - sequences = [] - replaces = [] + + # For sequences get all files and filter to only existing files + result = [] for index, path in enumerate(paths): if node_uses_image_sequence(node, path): glob_pattern = seq_to_glob(path) - sequences.extend(glob.glob(glob_pattern)) - replaces.append(index) + result.extend(glob.glob(glob_pattern)) + elif os.path.exists(path): + result.append(path) - for index in replaces: - paths.pop(index) - - paths.extend(sequences) - - return [p for p in paths if os.path.exists(p)] + return result class CollectLook(pyblish.api.InstancePlugin): @@ -260,7 +269,7 @@ class CollectLook(pyblish.api.InstancePlugin): membership relations. 
Collects: - lookAttribtutes (list): Nodes in instance with their altered attributes + lookAttributes (list): Nodes in instance with their altered attributes lookSetRelations (list): Sets and their memberships lookSets (list): List of set names included in the look @@ -285,82 +294,41 @@ class CollectLook(pyblish.api.InstancePlugin): instance: Instance to collect. """ - self.log.info("Looking for look associations " - "for %s" % instance.data['name']) + self.log.debug("Looking for look associations " + "for %s" % instance.data['name']) + + # Lookup set (optimization) + instance_lookup = set(cmds.ls(instance, long=True)) # Discover related object sets - self.log.info("Gathering sets ...") + self.log.debug("Gathering sets ...") sets = self.collect_sets(instance) # Lookup set (optimization) instance_lookup = set(cmds.ls(instance, long=True)) - self.log.info("Gathering set relations ...") - # Ensure iteration happen in a list so we can remove keys from the + self.log.debug("Gathering set relations ...") + # Ensure iteration happen in a list to allow removing keys from the # dict within the loop - - # skipped types of attribute on render specific nodes - disabled_types = ["message", "TdataCompound"] - for obj_set in list(sets): self.log.debug("From {}".format(obj_set)) - - # if node is specified as renderer node type, it will be - # serialized with its attributes. - if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES: - self.log.info("- {} is {}".format( - obj_set, cmds.nodeType(obj_set))) - - node_attrs = [] - - # serialize its attributes so they can be recreated on look - # load. - for attr in cmds.listAttr(obj_set): - # skip publishedNodeInfo attributes as they break - # getAttr() and we don't need them anyway - if attr.startswith("publishedNodeInfo"): - continue - - # skip attributes types defined in 'disabled_type' list - if cmds.getAttr("{}.{}".format(obj_set, attr), type=True) in disabled_types: # noqa - continue - - node_attrs.append(( - attr, - cmds.getAttr("{}.{}".format(obj_set, attr)), - cmds.getAttr( - "{}.{}".format(obj_set, attr), type=True) - )) - - for member in cmds.ls( - cmds.sets(obj_set, query=True), long=True): - member_data = self.collect_member_data(member, - instance_lookup) - if not member_data: - continue - - # Add information of the node to the members list - sets[obj_set]["members"].append(member_data) - # Get all nodes of the current objectSet (shadingEngine) for member in cmds.ls(cmds.sets(obj_set, query=True), long=True): member_data = self.collect_member_data(member, instance_lookup) - if not member_data: - continue - - # Add information of the node to the members list - sets[obj_set]["members"].append(member_data) + if member_data: + # Add information of the node to the members list + sets[obj_set]["members"].append(member_data) # Remove sets that didn't have any members assigned in the end # Thus the data will be limited to only what we need. 
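The `instance_lookup` built above is the usual constant-time membership trick, and the pruning that follows keeps only sets with members inside the instance. Boiled down to a standalone sketch (the actual collector stores richer member dicts via `collect_member_data`; plain node names stand in here):

```python
from maya import cmds


def prune_empty_sets(sets, instance_members):
    """Drop object sets that have no members inside the instance.

    `sets` maps objectSet name -> {"members": [...]};
    `instance_members` is any iterable of node names in the instance.
    """
    # A set gives O(1) membership tests versus O(n) list scans.
    instance_lookup = set(cmds.ls(list(instance_members), long=True))
    for obj_set in list(sets):  # list() so we can pop while iterating
        members = cmds.ls(cmds.sets(obj_set, query=True) or [], long=True)
        sets[obj_set]["members"] = [
            member for member in members if member in instance_lookup
        ]
        if not sets[obj_set]["members"]:
            sets.pop(obj_set, None)
    return sets
```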
- self.log.info("obj_set {}".format(sets[obj_set])) if not sets[obj_set]["members"]: - self.log.info( - "Removing redundant set information: {}".format(obj_set)) + self.log.debug( + "Removing redundant set information: {}".format(obj_set) + ) sets.pop(obj_set, None) - self.log.info("Gathering attribute changes to instance members..") + self.log.debug("Gathering attribute changes to instance members..") attributes = self.collect_attributes_changed(instance) # Store data on the instance @@ -382,35 +350,28 @@ class CollectLook(pyblish.api.InstancePlugin): "rman__displacement" ] if look_sets: - materials = [] + self.log.debug("Found look sets: {}".format(look_sets)) + # Get all material attrs for all look sets to retrieve their inputs + existing_attrs = [] for look in look_sets: - for at in shader_attrs: - try: - con = cmds.listConnections("{}.{}".format(look, at)) - except ValueError: - # skip attributes that are invalid in current - # context. For example in the case where - # Arnold is not enabled. - continue - if con: - materials.extend(con) + for attr in shader_attrs: + if cmds.attributeQuery(attr, node=look, exists=True): + existing_attrs.append("{}.{}".format(look, attr)) + materials = cmds.listConnections(existing_attrs, + source=True, + destination=False) or [] - self.log.info("Found materials:\n{}".format(materials)) + self.log.debug("Found materials:\n{}".format(materials)) - self.log.info("Found the following sets:\n{}".format(look_sets)) + self.log.debug("Found the following sets:\n{}".format(look_sets)) # Get the entire node chain of the look sets - # history = cmds.listHistory(look_sets) - history = [] - for material in materials: - history.extend(cmds.listHistory(material, ac=True)) - - # handle VrayPluginNodeMtl node - see #1397 - vray_plugin_nodes = cmds.ls( - history, type="VRayPluginNodeMtl", long=True) - for vray_node in vray_plugin_nodes: - history.extend(cmds.listHistory(vray_node, ac=True)) + # history = cmds.listHistory(look_sets, allConnections=True) + history = cmds.listHistory(materials, allConnections=True) + # Since we retrieved history only of the connected materials + # connected to the look sets above we now add direct history + # for some of the look sets directly # handling render attribute sets render_set_types = [ "VRayDisplacement", @@ -428,20 +389,26 @@ class CollectLook(pyblish.api.InstancePlugin): or [] ) - all_supported_nodes = FILE_NODES.keys() - files = [] - for node_type in all_supported_nodes: - files.extend(cmds.ls(history, type=node_type, long=True)) + # Ensure unique entries only + history = list(set(history)) - self.log.info("Collected file nodes:\n{}".format(files)) + files = cmds.ls(history, + # It's important only node types are passed that + # exist (e.g. for loaded plugins) because otherwise + # the result will turn back empty + type=list(FILE_NODES.keys()), + long=True) + + # Sort for log readability + files.sort() + + self.log.debug("Collected file nodes:\n{}".format(files)) # Collect textures if any file nodes are found - instance.data["resources"] = [] - for n in files: - for res in self.collect_resources(n): - instance.data["resources"].append(res) - - self.log.info("Collected resources: {}".format( - instance.data["resources"])) + resources = [] + for node in files: # sort for log readability + resources.extend(self.collect_resources(node)) + instance.data["resources"] = resources + self.log.debug("Collected resources: {}".format(resources)) # Log warning when no relevant sets were retrieved for the look. 
if ( @@ -456,7 +423,7 @@ class CollectLook(pyblish.api.InstancePlugin): instance.extend(shader for shader in look_sets if shader not in instance_lookup) - self.log.info("Collected look for %s" % instance) + self.log.debug("Collected look for %s" % instance) def collect_sets(self, instance): """Collect all objectSets which are of importance for publishing @@ -536,14 +503,14 @@ class CollectLook(pyblish.api.InstancePlugin): # Collect changes to "custom" attributes node_attrs = get_look_attrs(node) - self.log.info( - "Node \"{0}\" attributes: {1}".format(node, node_attrs) - ) - # Only include if there are any properties we care about if not node_attrs: continue + self.log.debug( + "Node \"{0}\" attributes: {1}".format(node, node_attrs) + ) + node_attributes = {} for attr in node_attrs: if not cmds.attributeQuery(attr, node=node, exists=True): @@ -574,14 +541,14 @@ class CollectLook(pyblish.api.InstancePlugin): Returns: dict """ - self.log.debug("processing: {}".format(node)) - all_supported_nodes = FILE_NODES.keys() - if cmds.nodeType(node) not in all_supported_nodes: + if cmds.nodeType(node) not in FILE_NODES: self.log.error( "Unsupported file node: {}".format(cmds.nodeType(node))) raise AssertionError("Unsupported file node") - self.log.debug(" - got {}".format(cmds.nodeType(node))) + self.log.debug( + "Collecting resource: {} ({})".format(node, cmds.nodeType(node)) + ) attributes = get_attributes(FILE_NODES, cmds.nodeType(node), node) for attribute in attributes: @@ -589,39 +556,34 @@ class CollectLook(pyblish.api.InstancePlugin): node, attribute )) - computed_attribute = "{}.{}".format(node, attribute) - if attribute == "fileTextureName": - computed_attribute = node + ".computedFileTextureNamePattern" - self.log.info(" - file source: {}".format(source)) + self.log.debug(" - file source: {}".format(source)) color_space_attr = "{}.colorSpace".format(node) try: color_space = cmds.getAttr(color_space_attr) except ValueError: # node doesn't have colorspace attribute color_space = "Raw" + # Compare with the computed file path, e.g. the one with # the pattern in it, to generate some logging information # about this difference - computed_source = cmds.getAttr(computed_attribute) - if source != computed_source: - self.log.debug("Detected computed file pattern difference " - "from original pattern: {0} " - "({1} -> {2})".format(node, - source, - computed_source)) + # Only for file nodes with `fileTextureName` attribute + if attribute == "fileTextureName": + computed_source = cmds.getAttr( + "{}.computedFileTextureNamePattern".format(node) + ) + if source != computed_source: + self.log.debug("Detected computed file pattern difference " + "from original pattern: {0} " + "({1} -> {2})".format(node, + source, + computed_source)) # renderman allows nodes to have filename attribute empty while # you can have another incoming connection from different node. 
- pxr_nodes = set() - if cmds.pluginInfo("RenderMan_for_Maya", query=True, loaded=True): - pxr_nodes = set( - cmds.pluginInfo("RenderMan_for_Maya", - query=True, - dependNode=True) - ) - if not source and cmds.nodeType(node) in pxr_nodes: - self.log.info("Renderman: source is empty, skipping...") + if not source and cmds.nodeType(node) in PXR_NODES: + self.log.debug("Renderman: source is empty, skipping...") continue # We replace backslashes with forward slashes because V-Ray # can't handle the UDIM files with the backslashes in the @@ -630,14 +592,14 @@ class CollectLook(pyblish.api.InstancePlugin): files = get_file_node_files(node) if len(files) == 0: - self.log.error("No valid files found from node `%s`" % node) + self.log.debug("No valid files found from node `%s`" % node) - self.log.info("collection of resource done:") - self.log.info(" - node: {}".format(node)) - self.log.info(" - attribute: {}".format(attribute)) - self.log.info(" - source: {}".format(source)) - self.log.info(" - file: {}".format(files)) - self.log.info(" - color space: {}".format(color_space)) + self.log.debug("collection of resource done:") + self.log.debug(" - node: {}".format(node)) + self.log.debug(" - attribute: {}".format(attribute)) + self.log.debug(" - source: {}".format(source)) + self.log.debug(" - file: {}".format(files)) + self.log.debug(" - color space: {}".format(color_space)) # Define the resource yield { diff --git a/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py b/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py index 33fc7a025f..bcb979edfc 100644 --- a/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py +++ b/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py @@ -268,7 +268,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin): cmds.loadPlugin("MultiverseForMaya", quiet=True) import multiverse - self.log.info("Processing mvLook for '{}'".format(instance)) + self.log.debug("Processing mvLook for '{}'".format(instance)) nodes = set() for node in instance: @@ -281,13 +281,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin): long=True) nodes.update(nodes_of_interest) - files = [] sets = {} instance.data["resources"] = [] publishMipMap = instance.data["publishMipMap"] for node in nodes: - self.log.info("Getting resources for '{}'".format(node)) + self.log.debug("Getting resources for '{}'".format(node)) # We know what nodes need to be collected, now we need to # extract the materials overrides. 
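The Multiverse collector above expands each instance member into its DAG sub-hierarchy before looking for overrides. The exact `cmds.ls` flags sit outside the hunk context, so the following is a general sketch of that expansion rather than the elided call:

```python
from maya import cmds


def expand_to_hierarchy(members):
    """Return the given nodes plus all DAG descendants, as long names.

    General pattern only; the collector likely layers type filters
    on top of this.
    """
    nodes = set()
    for node in members:
        # With object arguments, `dag=True` lists the node together
        # with everything below it in the DAG hierarchy.
        nodes.update(cmds.ls(node, dag=True, long=True) or [])
    return nodes
```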
@@ -380,12 +379,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin): if len(files) == 0: self.log.error("No valid files found from node `%s`" % node) - self.log.info("collection of resource done:") - self.log.info(" - node: {}".format(node)) - self.log.info(" - attribute: {}".format(fname_attrib)) - self.log.info(" - source: {}".format(source)) - self.log.info(" - file: {}".format(files)) - self.log.info(" - color space: {}".format(color_space)) + self.log.debug("collection of resource done:") + self.log.debug(" - node: {}".format(node)) + self.log.debug(" - attribute: {}".format(fname_attrib)) + self.log.debug(" - source: {}".format(source)) + self.log.debug(" - file: {}".format(files)) + self.log.debug(" - color space: {}".format(color_space)) # Define the resource resource = {"node": node, @@ -406,14 +405,14 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin): extra_files = [] self.log.debug("Expecting MipMaps, going to look for them.") for fname in files: - self.log.info("Checking '{}' for mipmaps".format(fname)) + self.log.debug("Checking '{}' for mipmaps".format(fname)) if is_mipmap(fname): self.log.debug(" - file is already MipMap, skipping.") continue mipmap = get_mipmap(fname) if mipmap: - self.log.info(" mipmap found for '{}'".format(fname)) + self.log.debug(" mipmap found for '{}'".format(fname)) extra_files.append(mipmap) else: self.log.warning(" no mipmap found for '{}'".format(fname)) diff --git a/openpype/hosts/maya/plugins/publish/collect_pointcache.py b/openpype/hosts/maya/plugins/publish/collect_pointcache.py index d0430c5612..5578a57f31 100644 --- a/openpype/hosts/maya/plugins/publish/collect_pointcache.py +++ b/openpype/hosts/maya/plugins/publish/collect_pointcache.py @@ -16,14 +16,16 @@ class CollectPointcache(pyblish.api.InstancePlugin): instance.data["families"].append("publish.farm") proxy_set = None - for node in instance.data["setMembers"]: - if cmds.nodeType(node) != "objectSet": - continue - members = cmds.sets(node, query=True) - if members is None: - self.log.warning("Skipped empty objectset: \"%s\" " % node) - continue + for node in cmds.ls(instance.data["setMembers"], + exactType="objectSet"): + # Find proxy_SET objectSet in the instance for proxy meshes if node.endswith("proxy_SET"): + members = cmds.sets(node, query=True) + if members is None: + self.log.debug("Skipped empty proxy_SET: \"%s\" " % node) + continue + self.log.debug("Found proxy set: {}".format(node)) + proxy_set = node instance.data["proxy"] = [] instance.data["proxyRoots"] = [] @@ -36,24 +38,10 @@ class CollectPointcache(pyblish.api.InstancePlugin): cmds.listRelatives(member, shapes=True, fullPath=True) ) self.log.debug( - "proxy members: {}".format(instance.data["proxy"]) + "Found proxy members: {}".format(instance.data["proxy"]) ) + break if proxy_set: instance.remove(proxy_set) instance.data["setMembers"].remove(proxy_set) - - # Collect user defined attributes. 
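The pointcache change above narrows the old objectSet loop to the one thing it cared about, a nested set named `*proxy_SET`. Extracted to its simplest form (helper name is illustrative):

```python
from maya import cmds


def find_proxy_members(set_members):
    """Return (proxy_set, roots, shapes) for a nested *proxy_SET, if any."""
    for node in cmds.ls(set_members, exactType="objectSet"):
        if not node.endswith("proxy_SET"):
            continue
        members = cmds.sets(node, query=True) or []
        roots = cmds.ls(members, long=True)
        shapes = cmds.listRelatives(
            roots, shapes=True, fullPath=True) or []
        return node, roots, shapes
    return None, [], []
```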
- if not instance.data.get("includeUserDefinedAttributes", False): - return - - user_defined_attributes = set() - for node in instance: - attrs = cmds.listAttr(node, userDefined=True) or list() - shapes = cmds.listRelatives(node, shapes=True) or list() - for shape in shapes: - attrs.extend(cmds.listAttr(shape, userDefined=True) or list()) - - user_defined_attributes.update(attrs) - - instance.data["userDefinedAttributes"] = list(user_defined_attributes) diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index babd494758..886c2b4caa 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -39,27 +39,29 @@ Provides: instance -> pixelAspect """ -import re import os import platform import json from maya import cmds -import maya.app.renderSetup.model.renderSetup as renderSetup import pyblish.api +from openpype.pipeline import KnownPublishError from openpype.lib import get_formatted_current_time -from openpype.pipeline import legacy_io -from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501 +from openpype.hosts.maya.api.lib_renderproducts import ( + get as get_layer_render_products, + UnsupportedRendererException +) from openpype.hosts.maya.api import lib -class CollectMayaRender(pyblish.api.ContextPlugin): +class CollectMayaRender(pyblish.api.InstancePlugin): """Gather all publishable render layers from renderSetup.""" order = pyblish.api.CollectorOrder + 0.01 hosts = ["maya"] + families = ["renderlayer"] label = "Collect Render Layers" sync_workfile_version = False @@ -69,388 +71,257 @@ class CollectMayaRender(pyblish.api.ContextPlugin): "underscore": "_" } - def process(self, context): - """Entry point to collector.""" - render_instance = None + def process(self, instance): - for instance in context: - if "rendering" in instance.data["families"]: - render_instance = instance - render_instance.data["remove"] = True + # TODO: Re-add force enable of workfile instance? + # TODO: Re-add legacy layer support with LAYER_ prefix but in Creator + # TODO: Set and collect active state of RenderLayer in Creator using + # renderlayer.isRenderable() + context = instance.context - # make sure workfile instance publishing is enabled - if "workfile" in instance.data["families"]: - instance.data["publish"] = True - - if not render_instance: - self.log.info( - "No render instance found, skipping render " - "layer collection." 
- ) - return - - render_globals = render_instance - collected_render_layers = render_instance.data["setMembers"] + layer = instance.data["transientData"]["layer"] + objset = instance.data.get("instance_node") filepath = context.data["currentFile"].replace("\\", "/") - asset = legacy_io.Session["AVALON_ASSET"] workspace = context.data["workspaceDir"] - # Retrieve render setup layers - rs = renderSetup.instance() - maya_render_layers = { - layer.name(): layer for layer in rs.getRenderLayers() + # check if layer is renderable + if not layer.isRenderable(): + msg = "Render layer [ {} ] is not " "renderable".format( + layer.name() + ) + self.log.warning(msg) + + # detect if there are sets (subsets) to attach render to + sets = cmds.sets(objset, query=True) or [] + attach_to = [] + for s in sets: + if not cmds.attributeQuery("family", node=s, exists=True): + continue + + attach_to.append( + { + "version": None, # we need integrator for that + "subset": s, + "family": cmds.getAttr("{}.family".format(s)), + } + ) + self.log.debug(" -> attach render to: {}".format(s)) + + layer_name = layer.name() + + # collect all frames we are expecting to be rendered + # return all expected files for all cameras and aovs in given + # frame range + try: + layer_render_products = get_layer_render_products(layer.name()) + except UnsupportedRendererException as exc: + raise KnownPublishError(exc) + render_products = layer_render_products.layer_data.products + assert render_products, "no render products generated" + expected_files = [] + multipart = False + for product in render_products: + if product.multipart: + multipart = True + product_name = product.productName + if product.camera and layer_render_products.has_camera_token(): + product_name = "{}{}".format( + product.camera, + "_{}".format(product_name) if product_name else "") + expected_files.append( + { + product_name: layer_render_products.get_files( + product) + }) + + has_cameras = any(product.camera for product in render_products) + assert has_cameras, "No render cameras found." + + self.log.debug("multipart: {}".format( + multipart)) + assert expected_files, "no file names were generated, this is a bug" + self.log.debug( + "expected files: {}".format( + json.dumps(expected_files, indent=4, sort_keys=True) + ) + ) + + # if we want to attach render to subset, check if we have AOV's + # in expectedFiles. If so, raise error as we cannot attach AOV + # (considered to be subset on its own) to another subset + if attach_to: + assert isinstance(expected_files, list), ( + "attaching multiple AOVs or renderable cameras to " + "subset is not supported" + ) + + # append full path + aov_dict = {} + image_directory = os.path.join( + cmds.workspace(query=True, rootDirectory=True), + cmds.workspace(fileRuleEntry="images") + ) + # replace relative paths with absolute. Render products are + # returned as list of dictionaries. + publish_meta_path = None + for aov in expected_files: + full_paths = [] + aov_first_key = list(aov.keys())[0] + for file in aov[aov_first_key]: + full_path = os.path.join(image_directory, file) + full_path = full_path.replace("\\", "/") + full_paths.append(full_path) + publish_meta_path = os.path.dirname(full_path) + aov_dict[aov_first_key] = full_paths + full_exp_files = [aov_dict] + self.log.debug(full_exp_files) + + if publish_meta_path is None: + raise KnownPublishError("Unable to detect any expected output " + "images for: {}. Make sure you have a " + "renderable camera and a valid frame " + "range set for your renderlayer." 
+ "".format(instance.name)) + + frame_start_render = int(self.get_render_attribute( + "startFrame", layer=layer_name)) + frame_end_render = int(self.get_render_attribute( + "endFrame", layer=layer_name)) + + if (int(context.data["frameStartHandle"]) == frame_start_render + and int(context.data["frameEndHandle"]) == frame_end_render): # noqa: W503, E501 + + handle_start = context.data["handleStart"] + handle_end = context.data["handleEnd"] + frame_start = context.data["frameStart"] + frame_end = context.data["frameEnd"] + frame_start_handle = context.data["frameStartHandle"] + frame_end_handle = context.data["frameEndHandle"] + else: + handle_start = 0 + handle_end = 0 + frame_start = frame_start_render + frame_end = frame_end_render + frame_start_handle = frame_start_render + frame_end_handle = frame_end_render + + # find common path to store metadata + # so if image prefix is branching to many directories + # metadata file will be located in top-most common + # directory. + # TODO: use `os.path.commonpath()` after switch to Python 3 + publish_meta_path = os.path.normpath(publish_meta_path) + common_publish_meta_path = os.path.splitdrive( + publish_meta_path)[0] + if common_publish_meta_path: + common_publish_meta_path += os.path.sep + for part in publish_meta_path.replace( + common_publish_meta_path, "").split(os.path.sep): + common_publish_meta_path = os.path.join( + common_publish_meta_path, part) + if part == layer_name: + break + + # TODO: replace this terrible linux hotfix with real solution :) + if platform.system().lower() in ["linux", "darwin"]: + common_publish_meta_path = "/" + common_publish_meta_path + + self.log.debug( + "Publish meta path: {}".format(common_publish_meta_path)) + + # Get layer specific settings, might be overrides + colorspace_data = lib.get_color_management_preferences() + data = { + "farm": True, + "attachTo": attach_to, + + "multipartExr": multipart, + "review": instance.data.get("review") or False, + + # Frame range + "handleStart": handle_start, + "handleEnd": handle_end, + "frameStart": frame_start, + "frameEnd": frame_end, + "frameStartHandle": frame_start_handle, + "frameEndHandle": frame_end_handle, + "byFrameStep": int( + self.get_render_attribute("byFrameStep", + layer=layer_name)), + + # Renderlayer + "renderer": self.get_render_attribute( + "currentRenderer", layer=layer_name).lower(), + "setMembers": layer._getLegacyNodeName(), # legacy renderlayer + "renderlayer": layer_name, + + # todo: is `time` and `author` still needed? 
+ "time": get_formatted_current_time(), + "author": context.data["user"], + + # Add source to allow tracing back to the scene from + # which was submitted originally + "source": filepath, + "expectedFiles": full_exp_files, + "publishRenderMetadataFolder": common_publish_meta_path, + "renderProducts": layer_render_products, + "resolutionWidth": lib.get_attr_in_layer( + "defaultResolution.width", layer=layer_name + ), + "resolutionHeight": lib.get_attr_in_layer( + "defaultResolution.height", layer=layer_name + ), + "pixelAspect": lib.get_attr_in_layer( + "defaultResolution.pixelAspect", layer=layer_name + ), + + # todo: Following are likely not needed due to collecting from the + # instance itself if they are attribute definitions + "tileRendering": instance.data.get("tileRendering") or False, # noqa: E501 + "tilesX": instance.data.get("tilesX") or 2, + "tilesY": instance.data.get("tilesY") or 2, + "convertToScanline": instance.data.get( + "convertToScanline") or False, + "useReferencedAovs": instance.data.get( + "useReferencedAovs") or instance.data.get( + "vrayUseReferencedAovs") or False, + "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501 + "renderSetupIncludeLights": instance.data.get( + "renderSetupIncludeLights" + ), + "colorspaceConfig": colorspace_data["config"], + "colorspaceDisplay": colorspace_data["display"], + "colorspaceView": colorspace_data["view"], } - for layer in collected_render_layers: - if layer.startswith("LAYER_"): - # this is support for legacy mode where render layers - # started with `LAYER_` prefix. - layer_name_pattern = r"^LAYER_(.*)" - else: - # new way is to prefix render layer name with instance - # namespace. - layer_name_pattern = r"^.+:(.*)" + rr_settings = ( + context.data["system_settings"]["modules"]["royalrender"] + ) + if rr_settings["enabled"]: + data["rrPathName"] = instance.data.get("rrPathName") + self.log.debug(data["rrPathName"]) - # todo: We should have a more explicit way to link the renderlayer - match = re.match(layer_name_pattern, layer) - if not match: - msg = "Invalid layer name in set [ {} ]".format(layer) - self.log.warning(msg) - continue + if self.sync_workfile_version: + data["version"] = context.data["version"] + for _instance in context: + if _instance.data['family'] == "workfile": + _instance.data["version"] = context.data["version"] - expected_layer_name = match.group(1) - self.log.info("Processing '{}' as layer [ {} ]" - "".format(layer, expected_layer_name)) - - # check if layer is part of renderSetup - if expected_layer_name not in maya_render_layers: - msg = "Render layer [ {} ] is not in " "Render Setup".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # check if layer is renderable - if not maya_render_layers[expected_layer_name].isRenderable(): - msg = "Render layer [ {} ] is not " "renderable".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # detect if there are sets (subsets) to attach render to - sets = cmds.sets(layer, query=True) or [] - attach_to = [] - for s in sets: - if not cmds.attributeQuery("family", node=s, exists=True): - continue - - attach_to.append( - { - "version": None, # we need integrator for that - "subset": s, - "family": cmds.getAttr("{}.family".format(s)), - } - ) - self.log.info(" -> attach render to: {}".format(s)) - - layer_name = "rs_{}".format(expected_layer_name) - - # collect all frames we are expecting to be rendered - # return all expected files for all cameras and aovs in given - # frame range - 
layer_render_products = get_layer_render_products(layer_name) - render_products = layer_render_products.layer_data.products - assert render_products, "no render products generated" - exp_files = [] - multipart = False - for product in render_products: - if product.multipart: - multipart = True - product_name = product.productName - if product.camera and layer_render_products.has_camera_token(): - product_name = "{}{}".format( - product.camera, - "_" + product_name if product_name else "") - exp_files.append( - { - product_name: layer_render_products.get_files( - product) - }) - - has_cameras = any(product.camera for product in render_products) - assert has_cameras, "No render cameras found." - - self.log.info("multipart: {}".format( - multipart)) - assert exp_files, "no file names were generated, this is bug" - self.log.info( - "expected files: {}".format( - json.dumps(exp_files, indent=4, sort_keys=True) - ) - ) - - # if we want to attach render to subset, check if we have AOV's - # in expectedFiles. If so, raise error as we cannot attach AOV - # (considered to be subset on its own) to another subset - if attach_to: - assert isinstance(exp_files, list), ( - "attaching multiple AOVs or renderable cameras to " - "subset is not supported" - ) - - # append full path - aov_dict = {} - default_render_file = context.data.get('project_settings')\ - .get('maya')\ - .get('RenderSettings')\ - .get('default_render_image_folder') or "" - # replace relative paths with absolute. Render products are - # returned as list of dictionaries. - publish_meta_path = None - for aov in exp_files: - full_paths = [] - aov_first_key = list(aov.keys())[0] - for file in aov[aov_first_key]: - full_path = os.path.join(workspace, default_render_file, - file) - full_path = full_path.replace("\\", "/") - full_paths.append(full_path) - publish_meta_path = os.path.dirname(full_path) - aov_dict[aov_first_key] = full_paths - full_exp_files = [aov_dict] - - frame_start_render = int(self.get_render_attribute( - "startFrame", layer=layer_name)) - frame_end_render = int(self.get_render_attribute( - "endFrame", layer=layer_name)) - - if (int(context.data['frameStartHandle']) == frame_start_render - and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - - handle_start = context.data['handleStart'] - handle_end = context.data['handleEnd'] - frame_start = context.data['frameStart'] - frame_end = context.data['frameEnd'] - frame_start_handle = context.data['frameStartHandle'] - frame_end_handle = context.data['frameEndHandle'] - else: - handle_start = 0 - handle_end = 0 - frame_start = frame_start_render - frame_end = frame_end_render - frame_start_handle = frame_start_render - frame_end_handle = frame_end_render - - # find common path to store metadata - # so if image prefix is branching to many directories - # metadata file will be located in top-most common - # directory. 
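The TODO on the next line (and its twin in the replacement code earlier) points at the same simplification: on Python 3 the manual drive-split walk can become `os.path.commonpath`. A hedged sketch of that suggestion, not part of this change:

```python
import os


def common_meta_folder(full_paths):
    """Top-most directory shared by all expected render product paths.

    Sketch of the `os.path.commonpath()` TODO; commonpath raises
    ValueError for an empty sequence and for mixed absolute/relative
    paths, so callers should guard against both.
    """
    return os.path.commonpath([os.path.normpath(p) for p in full_paths])
```

Note the collector deliberately stops at the render layer folder, so a plain commonpath over all product paths only matches it when every product lives under that folder.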
- # TODO: use `os.path.commonpath()` after switch to Python 3 - publish_meta_path = os.path.normpath(publish_meta_path) - common_publish_meta_path = os.path.splitdrive( - publish_meta_path)[0] - if common_publish_meta_path: - common_publish_meta_path += os.path.sep - for part in publish_meta_path.replace( - common_publish_meta_path, "").split(os.path.sep): - common_publish_meta_path = os.path.join( - common_publish_meta_path, part) - if part == expected_layer_name: - break - - # TODO: replace this terrible linux hotfix with real solution :) - if platform.system().lower() in ["linux", "darwin"]: - common_publish_meta_path = "/" + common_publish_meta_path - - self.log.info( - "Publish meta path: {}".format(common_publish_meta_path)) - - self.log.info(full_exp_files) - self.log.info("collecting layer: {}".format(layer_name)) - # Get layer specific settings, might be overrides - colorspace_data = lib.get_color_management_preferences() - data = { - "subset": expected_layer_name, - "attachTo": attach_to, - "setMembers": layer_name, - "multipartExr": multipart, - "review": render_instance.data.get("review") or False, - "publish": True, - - "handleStart": handle_start, - "handleEnd": handle_end, - "frameStart": frame_start, - "frameEnd": frame_end, - "frameStartHandle": frame_start_handle, - "frameEndHandle": frame_end_handle, - "byFrameStep": int( - self.get_render_attribute("byFrameStep", - layer=layer_name)), - "renderer": self.get_render_attribute( - "currentRenderer", layer=layer_name).lower(), - # instance subset - "family": "renderlayer", - "families": ["renderlayer"], - "asset": asset, - "time": get_formatted_current_time(), - "author": context.data["user"], - # Add source to allow tracing back to the scene from - # which was submitted originally - "source": filepath, - "expectedFiles": full_exp_files, - "publishRenderMetadataFolder": common_publish_meta_path, - "renderProducts": layer_render_products, - "resolutionWidth": lib.get_attr_in_layer( - "defaultResolution.width", layer=layer_name - ), - "resolutionHeight": lib.get_attr_in_layer( - "defaultResolution.height", layer=layer_name - ), - "pixelAspect": lib.get_attr_in_layer( - "defaultResolution.pixelAspect", layer=layer_name - ), - "tileRendering": render_instance.data.get("tileRendering") or False, # noqa: E501 - "tilesX": render_instance.data.get("tilesX") or 2, - "tilesY": render_instance.data.get("tilesY") or 2, - "priority": render_instance.data.get("priority"), - "convertToScanline": render_instance.data.get( - "convertToScanline") or False, - "useReferencedAovs": render_instance.data.get( - "useReferencedAovs") or render_instance.data.get( - "vrayUseReferencedAovs") or False, - "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501 - "renderSetupIncludeLights": render_instance.data.get( - "renderSetupIncludeLights" - ), - "colorspaceConfig": colorspace_data["config"], - "colorspaceDisplay": colorspace_data["display"], - "colorspaceView": colorspace_data["view"], - "strict_error_checking": render_instance.data.get( - "strict_error_checking", True - ) - } - - # Collect Deadline url if Deadline module is enabled - deadline_settings = ( - context.data["system_settings"]["modules"]["deadline"] - ) - if deadline_settings["enabled"]: - data["deadlineUrl"] = render_instance.data["deadlineUrl"] - - if self.sync_workfile_version: - data["version"] = context.data["version"] - - for instance in context: - if instance.data['family'] == "workfile": - instance.data["version"] = context.data["version"] - - # handle 
standalone renderers - if render_instance.data.get("vrayScene") is True: - data["families"].append("vrayscene_render") - - if render_instance.data.get("assScene") is True: - data["families"].append("assscene_render") - - # Include (optional) global settings - # Get global overrides and translate to Deadline values - overrides = self.parse_options(str(render_globals)) - data.update(**overrides) - - # get string values for pools - primary_pool = overrides["renderGlobals"]["Pool"] - secondary_pool = overrides["renderGlobals"].get("SecondaryPool") - data["primaryPool"] = primary_pool - data["secondaryPool"] = secondary_pool - - # Define nice label - label = "{0} ({1})".format(expected_layer_name, data["asset"]) - label += " [{0}-{1}]".format( - int(data["frameStartHandle"]), int(data["frameEndHandle"]) - ) - - instance = context.create_instance(expected_layer_name) - instance.data["label"] = label - instance.data["farm"] = True - instance.data.update(data) - - def parse_options(self, render_globals): - """Get all overrides with a value, skip those without. - - Here's the kicker. These globals override defaults in the submission - integrator, but an empty value means no overriding is made. - Otherwise, Frames would override the default frames set under globals. - - Args: - render_globals (str): collection of render globals - - Returns: - dict: only overrides with values - - """ - attributes = lib.read(render_globals) - - options = {"renderGlobals": {}} - options["renderGlobals"]["Priority"] = attributes["priority"] - - # Check for specific pools - pool_a, pool_b = self._discover_pools(attributes) - options["renderGlobals"].update({"Pool": pool_a}) - if pool_b: - options["renderGlobals"].update({"SecondaryPool": pool_b}) - - # Machine list - machine_list = attributes["machineList"] - if machine_list: - key = "Whitelist" if attributes["whitelist"] else "Blacklist" - options["renderGlobals"][key] = machine_list - - # Suspend publish job - state = "Suspended" if attributes["suspendPublishJob"] else "Active" - options["publishJobState"] = state - - chunksize = attributes.get("framesPerTask", 1) - options["renderGlobals"]["ChunkSize"] = chunksize + # Define nice label + label = "{0} ({1})".format(layer_name, instance.data["asset"]) + label += " [{0}-{1}]".format( + int(data["frameStartHandle"]), int(data["frameEndHandle"]) + ) + data["label"] = label # Override frames should be False if extendFrames is False. 
This is # to ensure it doesn't go off doing crazy unpredictable things - override_frames = False - extend_frames = attributes.get("extendFrames", False) - if extend_frames: - override_frames = attributes.get("overrideExistingFrame", False) + extend_frames = instance.data.get("extendFrames", False) + if not extend_frames: + instance.data["overrideExistingFrame"] = False - options["extendFrames"] = extend_frames - options["overrideExistingFrame"] = override_frames - - maya_render_plugin = "MayaBatch" - - options["mayaRenderPlugin"] = maya_render_plugin - - return options - - def _discover_pools(self, attributes): - - pool_a = None - pool_b = None - - # Check for specific pools - pool_b = [] - if "primaryPool" in attributes: - pool_a = attributes["primaryPool"] - if "secondaryPool" in attributes: - pool_b = attributes["secondaryPool"] - - else: - # Backwards compatibility - pool_str = attributes.get("pools", None) - if pool_str: - pool_a, pool_b = pool_str.split(";") - - # Ensure empty entry token is caught - if pool_b == "-": - pool_b = None - - return pool_a, pool_b + # Update the instance + instance.data.update(data) @staticmethod def get_render_attribute(attr, layer): diff --git a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py index 9666499c42..035c531a9b 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py +++ b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py @@ -37,7 +37,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin): # Get renderer renderer = instance.data["renderer"] - self.log.info("Renderer found: {}".format(renderer)) + self.log.debug("Renderer found: {}".format(renderer)) rp_node_types = {"vray": ["VRayRenderElement", "VRayRenderElementSet"], "arnold": ["aiAOV"], @@ -50,7 +50,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin): result = [] # Collect all AOVs / Render Elements - layer = instance.data["setMembers"] + layer = instance.data["renderlayer"] node_type = rp_node_types[renderer] render_elements = cmds.ls(type=node_type) @@ -66,8 +66,8 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin): result.append(render_pass) - self.log.info("Found {} render elements / AOVs for " - "'{}'".format(len(result), instance.data["subset"])) + self.log.debug("Found {} render elements / AOVs for " + "'{}'".format(len(result), instance.data["subset"])) instance.data["renderPasses"] = result diff --git a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py index 93a37d8693..4443e2e0db 100644 --- a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py +++ b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py @@ -19,13 +19,14 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin): if "vrayscene_layer" in instance.data.get("families", []): layer = instance.data.get("layer") else: - layer = instance.data["setMembers"] + layer = instance.data["renderlayer"] - self.log.info("layer: {}".format(layer)) cameras = cmds.ls(type="camera", long=True) - renderable = [c for c in cameras if - get_attr_in_layer("%s.renderable" % c, layer)] + renderable = [cam for cam in cameras if + get_attr_in_layer("{}.renderable".format(cam), layer)] - self.log.info("Found cameras %s: %s" % (len(renderable), renderable)) + self.log.debug( + "Found renderable cameras %s: %s", len(renderable), renderable + ) instance.data["cameras"] = renderable diff --git
a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index 5c190a4a7b..586939a3b8 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -18,14 +18,10 @@ class CollectReview(pyblish.api.InstancePlugin): def process(self, instance): - self.log.debug('instance: {}'.format(instance)) - - task = legacy_io.Session["AVALON_TASK"] - # Get panel. instance.data["panel"] = cmds.playblast( activeEditor=True - ).split("|")[-1] + ).rsplit("|", 1)[-1] # get cameras members = instance.data['setMembers'] @@ -34,11 +30,12 @@ class CollectReview(pyblish.api.InstancePlugin): camera = cameras[0] if cameras else None context = instance.context - objectset = context.data['objectsets'] + objectset = { + i.data.get("instance_node") for i in context + } - # Convert enum attribute index to string for Display Lights. - index = instance.data.get("displayLights", 0) - display_lights = lib.DISPLAY_LIGHTS_VALUES[index] + # Collect display lights. + display_lights = instance.data.get("displayLights", "default") if display_lights == "project_settings": settings = instance.context.data["project_settings"] settings = settings["maya"]["publish"]["ExtractPlayblast"] @@ -60,7 +57,7 @@ class CollectReview(pyblish.api.InstancePlugin): burninDataMembers["focalLength"] = focal_length # Account for nested instances like model. - reviewable_subsets = list(set(members) & set(objectset)) + reviewable_subsets = list(set(members) & objectset) if reviewable_subsets: if len(reviewable_subsets) > 1: raise KnownPublishError( @@ -97,7 +94,11 @@ class CollectReview(pyblish.api.InstancePlugin): data["frameStart"] = instance.data["frameStart"] data["frameEnd"] = instance.data["frameEnd"] data['step'] = instance.data['step'] - data['fps'] = instance.data['fps'] + # this (with other time related data) should be set on + # representations. Once plugins like Extract Review start + # using representations, this should be removed from here + # as Extract Playblast is already adding fps to representation. + data['fps'] = context.data['fps'] data['review_width'] = instance.data['review_width'] data['review_height'] = instance.data['review_height'] data["isolate"] = instance.data["isolate"] @@ -106,12 +107,16 @@ class CollectReview(pyblish.api.InstancePlugin): data["displayLights"] = display_lights data["burninDataMembers"] = burninDataMembers + for key, value in instance.data["publish_attributes"].items(): + data["publish_attributes"][key] = value + # The review instance must be active cmds.setAttr(str(instance) + '.active', 1) instance.data['remove'] = True else: + task = legacy_io.Session["AVALON_TASK"] legacy_subset_name = task + 'Review' asset_doc = instance.context.data['assetEntity'] project_name = legacy_io.active_project() @@ -133,6 +138,11 @@ class CollectReview(pyblish.api.InstancePlugin): instance.data["frameEndHandle"] instance.data["displayLights"] = display_lights instance.data["burninDataMembers"] = burninDataMembers + # this (with other time related data) should be set on + # representations. Once plugins like Extract Review start + # using representations, this should be removed from here + # as Extract Playblast is already adding fps to representation. 
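Context for the comment above: extractors attach time data per representation, so once downstream plugins read representations this instance-level `fps` becomes redundant. For reference, a representation dict of the usual shape carries it like this (all values are placeholders, not taken from this change):

```python
# Hypothetical representation entry; keys follow the common
# OpenPype layout, values are placeholders only.
representation = {
    "name": "mov",
    "ext": "mov",
    "files": "review.mov",
    "stagingDir": "/tmp/staging",
    "fps": 25.0,  # per-representation time data
}
```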
+ instance.data["fps"] = instance.context.data["fps"] # make ftrack publishable instance.data.setdefault("families", []).append('ftrack') diff --git a/openpype/hosts/maya/plugins/publish/collect_rig_sets.py b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py new file mode 100644 index 0000000000..34ff26a8b8 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py @@ -0,0 +1,40 @@ +import pyblish.api +from maya import cmds + + +class CollectRigSets(pyblish.api.InstancePlugin): + """Ensure rig contains pipeline-critical content + + Every rig must contain at least two object sets: + "controls_SET" - Set of all animatable controls + "out_SET" - Set of all cacheable meshes + + """ + + order = pyblish.api.CollectorOrder + 0.05 + label = "Collect Rig Sets" + hosts = ["maya"] + families = ["rig"] + + accepted_output = ["mesh", "transform"] + accepted_controllers = ["transform"] + + def process(self, instance): + + # Find required sets by suffix + searching = {"controls_SET", "out_SET", + "skeletonAnim_SET", "skeletonMesh_SET"} + found = {} + for node in cmds.ls(instance, exactType="objectSet"): + for suffix in searching: + if node.endswith(suffix): + found[suffix] = node + searching.remove(suffix) + break + if not searching: + break + + self.log.debug("Found sets: {}".format(found)) + rig_sets = instance.data.setdefault("rig_sets", {}) + for name, objset in found.items(): + rig_sets[name] = objset diff --git a/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py b/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py new file mode 100644 index 0000000000..31f0eca88c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +from maya import cmds # noqa +import pyblish.api + + +class CollectSkeletonMesh(pyblish.api.InstancePlugin): + """Collect Static Rig Data for FBX Extractor.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Skeleton Mesh" + hosts = ["maya"] + families = ["rig"] + + def process(self, instance): + skeleton_mesh_set = instance.data["rig_sets"].get( + "skeletonMesh_SET") + if not skeleton_mesh_set: + self.log.debug( + "No skeletonMesh_SET found. " + "Skipping collecting of skeleton mesh..." + ) + return + + # Store current frame to ensure single frame export + frame = cmds.currentTime(query=True) + instance.data["frameStart"] = frame + instance.data["frameEnd"] = frame + + instance.data["skeleton_mesh"] = [] + + skeleton_mesh_content = cmds.sets( + skeleton_mesh_set, query=True) or [] + if not skeleton_mesh_content: + self.log.debug( + "No object nodes in skeletonMesh_SET. " + "Skipping collecting of skeleton mesh..." 
+ ) + return + instance.data["families"] += ["rig.fbx"] + instance.data["skeleton_mesh"] = skeleton_mesh_content + self.log.debug( + "Collected skeletonMesh_SET members: {}".format( + skeleton_mesh_content + )) diff --git a/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py b/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py index 79d0856fa0..03b6c4a188 100644 --- a/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py +++ b/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py @@ -19,7 +19,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin): instance.data["geometryMembers"] = cmds.sets( geometry_set, query=True) - self.log.info("geometry: {}".format( + self.log.debug("geometry: {}".format( pformat(instance.data.get("geometryMembers")))) collision_set = [ @@ -29,7 +29,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin): instance.data["collisionMembers"] = cmds.sets( collision_set, query=True) - self.log.info("collisions: {}".format( + self.log.debug("collisions: {}".format( pformat(instance.data.get("collisionMembers")))) frame = cmds.currentTime(query=True) diff --git a/openpype/hosts/maya/plugins/publish/collect_user_defined_attributes.py b/openpype/hosts/maya/plugins/publish/collect_user_defined_attributes.py new file mode 100644 index 0000000000..16fef2e168 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_user_defined_attributes.py @@ -0,0 +1,39 @@ +from maya import cmds + +import pyblish.api + + +class CollectUserDefinedAttributes(pyblish.api.InstancePlugin): + """Collect user defined attributes for nodes in instance.""" + + order = pyblish.api.CollectorOrder + 0.45 + families = ["pointcache", "animation", "usd"] + label = "Collect User Defined Attributes" + hosts = ["maya"] + + def process(self, instance): + + # Collect user defined attributes. 
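+ # Illustrative example (hypothetical scene): a transform "pCube1"
+ # with user defined attribute "myAttr" and its shape "pCube1Shape"
+ # with "shapeAttr" would yield
+ #     instance.data["userDefinedAttributes"] == ["myAttr", "shapeAttr"]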
+ if not instance.data.get("includeUserDefinedAttributes", False): + return + + if "out_hierarchy" in instance.data: + # animation family + nodes = instance.data["out_hierarchy"] + else: + nodes = instance[:] + if not nodes: + return + + shapes = cmds.listRelatives(nodes, shapes=True, fullPath=True) or [] + nodes = set(nodes).union(shapes) + + attrs = cmds.listAttr(list(nodes), userDefined=True) or [] + user_defined_attributes = list(sorted(set(attrs))) + instance.data["userDefinedAttributes"] = user_defined_attributes + + self.log.debug( + "Collected user defined attributes: {}".format( + ", ".join(user_defined_attributes) + ) + ) diff --git a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py index 0bae9656f3..b9181337a9 100644 --- a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py +++ b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py @@ -24,129 +24,91 @@ class CollectVrayScene(pyblish.api.InstancePlugin): def process(self, instance): """Collector entry point.""" - collected_render_layers = instance.data["setMembers"] - instance.data["remove"] = True + context = instance.context - _rs = renderSetup.instance() - # current_layer = _rs.getVisibleRenderLayer() + layer = instance.data["transientData"]["layer"] + layer_name = layer.name() + + renderer = self.get_render_attribute("currentRenderer", + layer=layer_name) + if renderer != "vray": + self.log.warning("Layer '{}' renderer is not set to V-Ray".format( + layer_name + )) # collect all frames we are expecting to be rendered - renderer = cmds.getAttr( - "defaultRenderGlobals.currentRenderer" - ).lower() + frame_start_render = int(self.get_render_attribute( + "startFrame", layer=layer_name)) + frame_end_render = int(self.get_render_attribute( + "endFrame", layer=layer_name)) - if renderer != "vray": - raise AssertionError("Vray is not enabled.") + if (int(context.data['frameStartHandle']) == frame_start_render + and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - maya_render_layers = { - layer.name(): layer for layer in _rs.getRenderLayers() + handle_start = context.data['handleStart'] + handle_end = context.data['handleEnd'] + frame_start = context.data['frameStart'] + frame_end = context.data['frameEnd'] + frame_start_handle = context.data['frameStartHandle'] + frame_end_handle = context.data['frameEndHandle'] + else: + handle_start = 0 + handle_end = 0 + frame_start = frame_start_render + frame_end = frame_end_render + frame_start_handle = frame_start_render + frame_end_handle = frame_end_render + + # Get layer specific settings, might be overrides + data = { + "subset": layer_name, + "layer": layer_name, + # TODO: This likely needs fixing now + # Before refactor: cmds.sets(layer, q=True) or ["*"] + "setMembers": ["*"], + "review": False, + "publish": True, + "handleStart": handle_start, + "handleEnd": handle_end, + "frameStart": frame_start, + "frameEnd": frame_end, + "frameStartHandle": frame_start_handle, + "frameEndHandle": frame_end_handle, + "byFrameStep": int( + self.get_render_attribute("byFrameStep", + layer=layer_name)), + "renderer": renderer, + # instance subset + "family": "vrayscene_layer", + "families": ["vrayscene_layer"], + "time": get_formatted_current_time(), + "author": context.data["user"], + # Add source to allow tracing back to the scene from + # which was submitted originally + "source": context.data["currentFile"].replace("\\", "/"), + "resolutionWidth": lib.get_attr_in_layer( + 
"defaultResolution.height", layer=layer_name + ), + "resolutionHeight": lib.get_attr_in_layer( + "defaultResolution.width", layer=layer_name + ), + "pixelAspect": lib.get_attr_in_layer( + "defaultResolution.pixelAspect", layer=layer_name + ), + "priority": instance.data.get("priority"), + "useMultipleSceneFiles": instance.data.get( + "vraySceneMultipleFiles") } - layer_list = [] - for layer in collected_render_layers: - # every layer in set should start with `LAYER_` prefix - try: - expected_layer_name = re.search(r"^.+:(.*)", layer).group(1) - except IndexError: - msg = "Invalid layer name in set [ {} ]".format(layer) - self.log.warning(msg) - continue + instance.data.update(data) - self.log.info("processing %s" % layer) - # check if layer is part of renderSetup - if expected_layer_name not in maya_render_layers: - msg = "Render layer [ {} ] is not in " "Render Setup".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # check if layer is renderable - if not maya_render_layers[expected_layer_name].isRenderable(): - msg = "Render layer [ {} ] is not " "renderable".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - layer_name = "rs_{}".format(expected_layer_name) - - self.log.debug(expected_layer_name) - layer_list.append(expected_layer_name) - - frame_start_render = int(self.get_render_attribute( - "startFrame", layer=layer_name)) - frame_end_render = int(self.get_render_attribute( - "endFrame", layer=layer_name)) - - if (int(context.data['frameStartHandle']) == frame_start_render - and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - - handle_start = context.data['handleStart'] - handle_end = context.data['handleEnd'] - frame_start = context.data['frameStart'] - frame_end = context.data['frameEnd'] - frame_start_handle = context.data['frameStartHandle'] - frame_end_handle = context.data['frameEndHandle'] - else: - handle_start = 0 - handle_end = 0 - frame_start = frame_start_render - frame_end = frame_end_render - frame_start_handle = frame_start_render - frame_end_handle = frame_end_render - - # Get layer specific settings, might be overrides - data = { - "subset": expected_layer_name, - "layer": layer_name, - "setMembers": cmds.sets(layer, q=True) or ["*"], - "review": False, - "publish": True, - "handleStart": handle_start, - "handleEnd": handle_end, - "frameStart": frame_start, - "frameEnd": frame_end, - "frameStartHandle": frame_start_handle, - "frameEndHandle": frame_end_handle, - "byFrameStep": int( - self.get_render_attribute("byFrameStep", - layer=layer_name)), - "renderer": self.get_render_attribute("currentRenderer", - layer=layer_name), - # instance subset - "family": "vrayscene_layer", - "families": ["vrayscene_layer"], - "asset": legacy_io.Session["AVALON_ASSET"], - "time": get_formatted_current_time(), - "author": context.data["user"], - # Add source to allow tracing back to the scene from - # which was submitted originally - "source": context.data["currentFile"].replace("\\", "/"), - "resolutionWidth": lib.get_attr_in_layer( - "defaultResolution.height", layer=layer_name - ), - "resolutionHeight": lib.get_attr_in_layer( - "defaultResolution.width", layer=layer_name - ), - "pixelAspect": lib.get_attr_in_layer( - "defaultResolution.pixelAspect", layer=layer_name - ), - "priority": instance.data.get("priority"), - "useMultipleSceneFiles": instance.data.get( - "vraySceneMultipleFiles") - } - - # Define nice label - label = "{0} ({1})".format(expected_layer_name, data["asset"]) - label += " 
[{0}-{1}]".format( - int(data["frameStartHandle"]), int(data["frameEndHandle"]) - ) - - instance = context.create_instance(expected_layer_name) - instance.data["label"] = label - instance.data.update(data) + # Define nice label + label = "{0} ({1})".format(layer_name, instance.data["asset"]) + label += " [{0}-{1}]".format( + int(data["frameStartHandle"]), int(data["frameEndHandle"]) + ) + instance.data["label"] = label def get_render_attribute(self, attr, layer): """Get attribute from render options. diff --git a/openpype/hosts/maya/plugins/publish/collect_workfile.py b/openpype/hosts/maya/plugins/publish/collect_workfile.py index 12d86869ea..e2b64f1ebd 100644 --- a/openpype/hosts/maya/plugins/publish/collect_workfile.py +++ b/openpype/hosts/maya/plugins/publish/collect_workfile.py @@ -1,46 +1,30 @@ import os import pyblish.api -from maya import cmds -from openpype.pipeline import legacy_io - -class CollectWorkfile(pyblish.api.ContextPlugin): - """Inject the current working file into context""" +class CollectWorkfileData(pyblish.api.InstancePlugin): + """Inject data into Workfile instance""" order = pyblish.api.CollectorOrder - 0.01 label = "Maya Workfile" hosts = ['maya'] + families = ["workfile"] - def process(self, context): + def process(self, instance): """Inject the current working file""" - current_file = cmds.file(query=True, sceneName=True) - context.data['currentFile'] = current_file + context = instance.context + current_file = instance.context.data['currentFile'] folder, file = os.path.split(current_file) filename, ext = os.path.splitext(file) - task = legacy_io.Session["AVALON_TASK"] - - data = {} - - # create instance - instance = context.create_instance(name=filename) - subset = 'workfile' + task.capitalize() - - data.update({ - "subset": subset, - "asset": os.getenv("AVALON_ASSET", None), - "label": subset, - "publish": True, - "family": 'workfile', - "families": ['workfile'], + data = { # noqa "setMembers": [current_file], "frameStart": context.data['frameStart'], "frameEnd": context.data['frameEnd'], "handleStart": context.data['handleStart'], "handleEnd": context.data['handleEnd'] - }) + } data['representations'] = [{ 'name': ext.lstrip("."), @@ -50,8 +34,3 @@ class CollectWorkfile(pyblish.api.ContextPlugin): }] instance.data.update(data) - - self.log.info('Collected instance: {}'.format(file)) - self.log.info('Scene path: {}'.format(current_file)) - self.log.info('staging Dir: {}'.format(folder)) - self.log.info('subset: {}'.format(subset)) diff --git a/openpype/hosts/maya/plugins/publish/collect_xgen.py b/openpype/hosts/maya/plugins/publish/collect_xgen.py index 46968f7d1a..45648e1776 100644 --- a/openpype/hosts/maya/plugins/publish/collect_xgen.py +++ b/openpype/hosts/maya/plugins/publish/collect_xgen.py @@ -67,5 +67,5 @@ class CollectXgen(pyblish.api.InstancePlugin): data["transfers"] = transfers - self.log.info(data) + self.log.debug(data) instance.data.update(data) diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py index e6b5ca4260..7a1516997a 100644 --- a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py +++ b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py @@ -4,12 +4,23 @@ import pyblish.api from openpype.hosts.maya.api import lib -SETTINGS = {"renderDensity", - "renderWidth", - "renderLength", - "increaseRenderBounds", - "imageSearchPath", - "cbId"} + +SETTINGS = { + # Preview + "displayOutput", + "colorR", "colorG", "colorB", + "viewportDensity", + 
"viewportWidth", + "viewportLength", + # Render attributes + "renderDensity", + "renderWidth", + "renderLength", + "increaseRenderBounds", + "imageSearchPath", + # Pipeline specific + "cbId" +} class CollectYetiCache(pyblish.api.InstancePlugin): @@ -28,7 +39,7 @@ class CollectYetiCache(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.45 label = "Collect Yeti Cache" - families = ["yetiRig", "yeticache"] + families = ["yetiRig", "yeticache", "yeticacheUE"] hosts = ["maya"] def process(self, instance): @@ -39,10 +50,6 @@ class CollectYetiCache(pyblish.api.InstancePlugin): # Get yeti nodes and their transforms yeti_shapes = cmds.ls(instance, type="pgYetiMaya") for shape in yeti_shapes: - shape_data = {"transform": None, - "name": shape, - "cbId": lib.get_id(shape), - "attrs": None} # Get specific node attributes attr_data = {} @@ -58,9 +65,12 @@ class CollectYetiCache(pyblish.api.InstancePlugin): parent = cmds.listRelatives(shape, parent=True)[0] transform_data = {"name": parent, "cbId": lib.get_id(parent)} - # Store collected data - shape_data["attrs"] = attr_data - shape_data["transform"] = transform_data + shape_data = { + "transform": transform_data, + "name": shape, + "cbId": lib.get_id(shape), + "attrs": attr_data, + } settings["nodes"].append(shape_data) diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py index bc15edd9e0..df761cde13 100644 --- a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py +++ b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py @@ -119,7 +119,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin): texture_filenames = [] if image_search_paths: - # TODO: Somehow this uses OS environment path separator, `:` vs `;` # Later on check whether this is pipeline OS cross-compatible. image_search_paths = [p for p in @@ -130,13 +129,13 @@ class CollectYetiRig(pyblish.api.InstancePlugin): # List all related textures texture_filenames = cmds.pgYetiCommand(node, listTextures=True) - self.log.info("Found %i texture(s)" % len(texture_filenames)) + self.log.debug("Found %i texture(s)" % len(texture_filenames)) # Get all reference nodes reference_nodes = cmds.pgYetiGraph(node, listNodes=True, type="reference") - self.log.info("Found %i reference node(s)" % len(reference_nodes)) + self.log.debug("Found %i reference node(s)" % len(reference_nodes)) if texture_filenames and not image_search_paths: raise ValueError("pgYetiMaya node '%s' is missing the path to the " diff --git a/openpype/hosts/maya/plugins/publish/extract_active_view_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_active_view_thumbnail.py new file mode 100644 index 0000000000..483ae6d9d3 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_active_view_thumbnail.py @@ -0,0 +1,60 @@ +import maya.api.OpenMaya as om +import maya.api.OpenMayaUI as omui + +import pyblish.api +import tempfile + +from openpype.hosts.maya.api.lib import IS_HEADLESS + + +class ExtractActiveViewThumbnail(pyblish.api.InstancePlugin): + """Set instance thumbnail to a screengrab of current active viewport. + + This makes it so that if an instance does not have a thumbnail set yet that + it will get a thumbnail of the currently active view at the time of + publishing as a fallback. 
+ + """ + order = pyblish.api.ExtractorOrder + 0.49 + label = "Active View Thumbnail" + families = ["workfile"] + hosts = ["maya"] + + def process(self, instance): + if IS_HEADLESS: + self.log.debug( + "Skip extraction of active view thumbnail, due to being in" + "headless mode." + ) + return + + thumbnail = instance.data.get("thumbnailPath") + if not thumbnail: + view_thumbnail = self.get_view_thumbnail(instance) + if not view_thumbnail: + return + + self.log.debug("Setting instance thumbnail path to: {}".format( + view_thumbnail + )) + instance.data["thumbnailPath"] = view_thumbnail + + def get_view_thumbnail(self, instance): + cache_key = "__maya_view_thumbnail" + context = instance.context + + if cache_key not in context.data: + # Generate only a single thumbnail, even for multiple instances + with tempfile.NamedTemporaryFile(suffix="_thumbnail.jpg", + delete=False) as f: + path = f.name + + view = omui.M3dView.active3dView() + image = om.MImage() + view.readColorBuffer(image, True) + image.writeToFile(path, "jpg") + self.log.debug("Generated thumbnail: {}".format(path)) + + context.data["cleanupFullPaths"].append(path) + context.data[cache_key] = path + return context.data[cache_key] diff --git a/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py b/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py index 102f0e46a2..46cc9090bb 100644 --- a/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py +++ b/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py @@ -100,7 +100,7 @@ class ExtractArnoldSceneSource(publish.Extractor): instance.data["representations"].append(representation) - self.log.info( + self.log.debug( "Extracted instance {} to: {}".format(instance.name, staging_dir) ) @@ -126,7 +126,7 @@ class ExtractArnoldSceneSource(publish.Extractor): instance.data["representations"].append(representation) def _extract(self, nodes, attribute_data, kwargs): - self.log.info( + self.log.debug( "Writing {} with:\n{}".format(kwargs["filename"], kwargs) ) filenames = [] @@ -180,12 +180,12 @@ class ExtractArnoldSceneSource(publish.Extractor): with lib.attribute_values(attribute_data): with lib.maintained_selection(): - self.log.info( + self.log.debug( "Writing: {}".format(duplicate_nodes) ) cmds.select(duplicate_nodes, noExpand=True) - self.log.info( + self.log.debug( "Extracting ass sequence with: {}".format(kwargs) ) @@ -194,6 +194,6 @@ class ExtractArnoldSceneSource(publish.Extractor): for file in exported_files: filenames.append(os.path.split(file)[1]) - self.log.info("Exported: {}".format(filenames)) + self.log.debug("Exported: {}".format(filenames)) return filenames, nodes_by_id diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py index 35932003ee..86ffdcef24 100644 --- a/openpype/hosts/maya/plugins/publish/extract_assembly.py +++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py @@ -27,11 +27,11 @@ class ExtractAssembly(publish.Extractor): json_filename = "{}.json".format(instance.name) json_path = os.path.join(staging_dir, json_filename) - self.log.info("Dumping scene data for debugging ..") + self.log.debug("Dumping scene data for debugging ..") with open(json_path, "w") as filepath: json.dump(instance.data["scenedata"], filepath, ensure_ascii=False) - self.log.info("Extracting point cache ..") + self.log.debug("Extracting pointcache ..") cmds.select(instance.data["nodesHierarchy"]) # Run basic alembic exporter diff --git 
a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py index aa445a0387..43803743bc 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py @@ -6,17 +6,21 @@ from openpype.pipeline import publish from openpype.hosts.maya.api import lib -class ExtractCameraAlembic(publish.Extractor): +class ExtractCameraAlembic(publish.Extractor, + publish.OptionalPyblishPluginMixin): """Extract a Camera as Alembic. - The cameras gets baked to world space by default. Only when the instance's + The camera gets baked to world space by default. Only when the instance's `bakeToWorldSpace` is set to False it will include its full hierarchy. + 'camera' family expects only single camera, if multiple cameras are needed, + 'matchmove' is better choice. + """ - label = "Camera (Alembic)" + label = "Extract Camera (Alembic)" hosts = ["maya"] - families = ["camera"] + families = ["camera", "matchmove"] bake_attributes = [] def process(self, instance): @@ -35,10 +39,11 @@ class ExtractCameraAlembic(publish.Extractor): # validate required settings assert isinstance(step, float), "Step must be a float value" - camera = cameras[0] # Define extract output file path dir_path = self.staging_dir(instance) + if not os.path.exists(dir_path): + os.makedirs(dir_path) filename = "{0}.abc".format(instance.name) path = os.path.join(dir_path, filename) @@ -64,9 +69,10 @@ class ExtractCameraAlembic(publish.Extractor): # if baked, drop the camera hierarchy to maintain # clean output and backwards compatibility - camera_root = cmds.listRelatives( - camera, parent=True, fullPath=True)[0] - job_str += ' -root {0}'.format(camera_root) + camera_roots = cmds.listRelatives( + cameras, parent=True, fullPath=True) + for camera_root in camera_roots: + job_str += ' -root {0}'.format(camera_root) for member in members: descendants = cmds.listRelatives(member, @@ -94,7 +100,7 @@ class ExtractCameraAlembic(publish.Extractor): "Attributes to bake must be specified as a list" ) for attr in self.bake_attributes: - self.log.info("Adding {} attribute".format(attr)) + self.log.debug("Adding {} attribute".format(attr)) job_str += " -attr {0}".format(attr) with lib.evaluation("off"): @@ -112,5 +118,5 @@ class ExtractCameraAlembic(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py index 7467fa027d..38cf00bbdd 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py @@ -2,11 +2,15 @@ """Extract camera as Maya Scene.""" import os import itertools +import contextlib from maya import cmds from openpype.pipeline import publish from openpype.hosts.maya.api import lib +from openpype.lib import ( + BoolDef +) def massage_ma_file(path): @@ -78,7 +82,8 @@ def unlock(plug): cmds.disconnectAttr(source, destination) -class ExtractCameraMayaScene(publish.Extractor): +class ExtractCameraMayaScene(publish.Extractor, + publish.OptionalPyblishPluginMixin): """Extract a Camera as Maya Scene. 
This will create a duplicate of the camera that will be baked *with* @@ -88,17 +93,22 @@ class ExtractCameraMayaScene(publish.Extractor): The cameras gets baked to world space by default. Only when the instance's `bakeToWorldSpace` is set to False it will include its full hierarchy. + 'camera' family expects only single camera, if multiple cameras are needed, + 'matchmove' is better choice. + Note: The extracted Maya ascii file gets "massaged" removing the uuid values so they are valid for older versions of Fusion (e.g. 6.4) """ - label = "Camera (Maya Scene)" + label = "Extract Camera (Maya Scene)" hosts = ["maya"] - families = ["camera"] + families = ["camera", "matchmove"] scene_type = "ma" + keep_image_planes = True + def process(self, instance): """Plugin entry point.""" # get settings @@ -106,12 +116,12 @@ class ExtractCameraMayaScene(publish.Extractor): instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break except KeyError: @@ -131,15 +141,15 @@ class ExtractCameraMayaScene(publish.Extractor): "bake to world space is ignored...") # get cameras - members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True, - long=True, dag=True) - cameras = cmds.ls(members, leaf=True, shapes=True, long=True, - dag=True, type="camera") + members = set(cmds.ls(instance.data['setMembers'], leaf=True, + shapes=True, long=True, dag=True)) + cameras = set(cmds.ls(members, leaf=True, shapes=True, long=True, + dag=True, type="camera")) # validate required settings assert isinstance(step, float), "Step must be a float value" - camera = cameras[0] - transform = cmds.listRelatives(camera, parent=True, fullPath=True) + transforms = cmds.listRelatives(list(cameras), + parent=True, fullPath=True) # Define extract output file path dir_path = self.staging_dir(instance) @@ -151,23 +161,21 @@ class ExtractCameraMayaScene(publish.Extractor): with lib.evaluation("off"): with lib.suspended_refresh(): if bake_to_worldspace: - self.log.info( - "Performing camera bakes: {}".format(transform)) baked = lib.bake_to_world_space( - transform, + transforms, frame_range=[start, end], step=step ) - baked_camera_shapes = cmds.ls(baked, - type="camera", - dag=True, - shapes=True, - long=True) + baked_camera_shapes = set(cmds.ls(baked, + type="camera", + dag=True, + shapes=True, + long=True)) - members = members + baked_camera_shapes - members.remove(camera) + members.update(baked_camera_shapes) + members.difference_update(cameras) else: - baked_camera_shapes = cmds.ls(cameras, + baked_camera_shapes = cmds.ls(list(cameras), type="camera", dag=True, shapes=True, @@ -186,19 +194,28 @@ class ExtractCameraMayaScene(publish.Extractor): unlock(plug) cmds.setAttr(plug, value) - self.log.info("Performing extraction..") - cmds.select(cmds.ls(members, dag=True, - shapes=True, long=True), noExpand=True) - cmds.file(path, - force=True, - typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 - exportSelected=True, - preserveReferences=False, - constructionHistory=False, - channels=True, # allow animation - constraints=False, - shader=False, - expressions=False) + attr_values = self.get_attr_values_from_data( + instance.data) + keep_image_planes = 
attr_values.get("keep_image_planes") + + with transfer_image_planes(sorted(cameras), + sorted(baked_camera_shapes), + keep_image_planes): + + self.log.info("Performing extraction..") + cmds.select(cmds.ls(list(members), dag=True, + shapes=True, long=True), + noExpand=True) + cmds.file(path, + force=True, + typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 + exportSelected=True, + preserveReferences=False, + constructionHistory=False, + channels=True, # allow animation + constraints=False, + shader=False, + expressions=False) # Delete the baked hierarchy if bake_to_worldspace: @@ -217,5 +234,64 @@ class ExtractCameraMayaScene(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, path)) + + @classmethod + def get_attribute_defs(cls): + defs = super(ExtractCameraMayaScene, cls).get_attribute_defs() + + defs.extend([ + BoolDef("keep_image_planes", + label="Keep Image Planes", + tooltip="Preserving connected image planes on camera", + default=cls.keep_image_planes), + + ]) + + return defs + + +@contextlib.contextmanager +def transfer_image_planes(source_cameras, target_cameras, + keep_input_connections): + """Reattaches image planes to baked or original cameras. + + Baked cameras are duplicates of original ones. + This attaches it to duplicated camera properly and after + export it reattaches it back to original to keep image plane in workfile. + """ + originals = {} + try: + for source_camera, target_camera in zip(source_cameras, + target_cameras): + image_planes = cmds.listConnections(source_camera, + type="imagePlane") or [] + + # Split of the parent path they are attached - we want + # the image plane node name. + # TODO: Does this still mean the image plane name is unique? 
+ image_planes = [x.split("->", 1)[1] for x in image_planes] + + if not image_planes: + continue + + originals[source_camera] = [] + for image_plane in image_planes: + if keep_input_connections: + if source_camera == target_camera: + continue + _attach_image_plane(target_camera, image_plane) + else: # explicitly dettaching image planes + cmds.imagePlane(image_plane, edit=True, detach=True) + originals[source_camera].append(image_plane) + yield + finally: + for camera, image_planes in originals.items(): + for image_plane in image_planes: + _attach_image_plane(camera, image_plane) + + +def _attach_image_plane(camera, image_plane): + cmds.imagePlane(image_plane, edit=True, detach=True) + cmds.imagePlane(image_plane, edit=True, camera=camera) diff --git a/openpype/hosts/maya/plugins/publish/extract_fbx.py b/openpype/hosts/maya/plugins/publish/extract_fbx.py index 9af3acef65..4f7eaf57bf 100644 --- a/openpype/hosts/maya/plugins/publish/extract_fbx.py +++ b/openpype/hosts/maya/plugins/publish/extract_fbx.py @@ -33,11 +33,11 @@ class ExtractFBX(publish.Extractor): # to format it into a string in a mel expression path = path.replace('\\', '/') - self.log.info("Extracting FBX to: {0}".format(path)) + self.log.debug("Extracting FBX to: {0}".format(path)) members = instance.data["setMembers"] - self.log.info("Members: {0}".format(members)) - self.log.info("Instance: {0}".format(instance[:])) + self.log.debug("Members: {0}".format(members)) + self.log.debug("Instance: {0}".format(instance[:])) fbx_exporter.set_options_from_instance(instance) @@ -58,4 +58,4 @@ class ExtractFBX(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract FBX successful to: {0}".format(path)) + self.log.debug("Extract FBX successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py b/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py new file mode 100644 index 0000000000..8288bc9329 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- +import os + +from maya import cmds # noqa +import pyblish.api + +from openpype.pipeline import publish +from openpype.hosts.maya.api import fbx +from openpype.hosts.maya.api.lib import ( + namespaced, get_namespace, strip_namespace +) + + +class ExtractFBXAnimation(publish.Extractor): + """Extract Rig in FBX format from Maya. + + This extracts the rig in fbx with the constraints + and referenced asset content included. + This also optionally extract animated rig in fbx with + geometries included. 
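+
+ The skeleton members come from the instance's "animated_skeleton"
+ data and are exported from within the rig's namespace, so node
+ names in the FBX are stored without the namespace.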
+ + """ + order = pyblish.api.ExtractorOrder + label = "Extract Animation (FBX)" + hosts = ["maya"] + families = ["animation.fbx"] + + def process(self, instance): + # Define output path + staging_dir = self.staging_dir(instance) + filename = "{0}.fbx".format(instance.name) + path = os.path.join(staging_dir, filename) + path = path.replace("\\", "/") + + fbx_exporter = fbx.FBXExtractor(log=self.log) + out_members = instance.data.get("animated_skeleton", []) + # Export + instance.data["constraints"] = True + instance.data["skeletonDefinitions"] = True + instance.data["referencedAssetsContent"] = True + fbx_exporter.set_options_from_instance(instance) + # Export from the rig's namespace so that the exported + # FBX does not include the namespace but preserves the node + # names as existing in the rig workfile + namespace = get_namespace(out_members[0]) + relative_out_members = [ + strip_namespace(node, namespace) for node in out_members + ] + with namespaced( + ":" + namespace, + new=False, + relative_names=True + ) as namespace: + fbx_exporter.export(relative_out_members, path) + + representations = instance.data.setdefault("representations", []) + representations.append({ + 'name': 'fbx', + 'ext': 'fbx', + 'files': filename, + "stagingDir": staging_dir + }) + + self.log.debug( + "Extracted FBX animation to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_gltf.py b/openpype/hosts/maya/plugins/publish/extract_gltf.py index ac258ffb3d..6d72d28525 100644 --- a/openpype/hosts/maya/plugins/publish/extract_gltf.py +++ b/openpype/hosts/maya/plugins/publish/extract_gltf.py @@ -20,14 +20,10 @@ class ExtractGLB(publish.Extractor): filename = "{0}.glb".format(instance.name) path = os.path.join(staging_dir, filename) - self.log.info("Extracting GLB to: {}".format(path)) - cmds.loadPlugin("maya2glTF", quiet=True) nodes = instance[:] - self.log.info("Instance: {0}".format(nodes)) - start_frame = instance.data('frameStart') or \ int(cmds.playbackOptions(query=True, animationStartTime=True))# noqa @@ -48,6 +44,7 @@ class ExtractGLB(publish.Extractor): "vno": True # visibleNodeOnly } + self.log.debug("Extracting GLB to: {}".format(path)) with lib.maintained_selection(): cmds.select(nodes, hi=True, noExpand=True) extract_gltf(staging_dir, @@ -65,4 +62,4 @@ class ExtractGLB(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract GLB successful to: {0}".format(path)) + self.log.debug("Extract GLB successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py b/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py index 422f5ad019..16436c6fe4 100644 --- a/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py +++ b/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py @@ -60,6 +60,6 @@ class ExtractGPUCache(publish.Extractor): instance.data["representations"].append(representation) - self.log.info( + self.log.debug( "Extracted instance {} to: {}".format(instance.name, staging_dir) ) diff --git a/openpype/hosts/maya/plugins/publish/extract_import_reference.py b/openpype/hosts/maya/plugins/publish/extract_import_reference.py index 51c82dde92..1fdee28d0c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_import_reference.py +++ b/openpype/hosts/maya/plugins/publish/extract_import_reference.py @@ -8,10 +8,12 @@ import tempfile from openpype.lib import run_subprocess from openpype.pipeline import publish +from openpype.pipeline.publish import OptionalPyblishPluginMixin from 
openpype.hosts.maya.api import lib -class ExtractImportReference(publish.Extractor): +class ExtractImportReference(publish.Extractor, + OptionalPyblishPluginMixin): """ Extract the scene with imported reference. @@ -28,20 +30,23 @@ class ExtractImportReference(publish.Extractor): tmp_format = "_tmp" @classmethod - def apply_settings(cls, project_setting, system_settings): - cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa + def apply_settings(cls, project_settings): + cls.active = project_settings["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa def process(self, instance): + if not self.is_active(instance.data): + return + ext_mapping = ( instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break @@ -64,7 +69,7 @@ class ExtractImportReference(publish.Extractor): reference_path = os.path.join(dir_path, ref_scene_name) tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name - self.log.info("Performing extraction..") + self.log.debug("Performing extraction..") # This generates script for mayapy to take care of reference # importing outside current session. It is passing current scene @@ -106,7 +111,7 @@ print("*** Done") # process until handles are closed by context manager. with tempfile.TemporaryDirectory() as tmp_dir_name: tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py") - self.log.info("Using script file: {}".format(tmp_script_path)) + self.log.debug("Using script file: {}".format(tmp_script_path)) with open(tmp_script_path, "wt") as tmp: tmp.write(script) @@ -144,9 +149,9 @@ print("*** Done") "stagingDir": os.path.dirname(current_name), "outputName": "imported" } - self.log.info("%s" % ref_representation) + self.log.debug(ref_representation) instance.data["representations"].append(ref_representation) - self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name, - reference_path)) + self.log.debug("Extracted instance '%s' to : '%s'" % (ref_scene_name, + reference_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_layout.py b/openpype/hosts/maya/plugins/publish/extract_layout.py index 7921fca069..75920b44a2 100644 --- a/openpype/hosts/maya/plugins/publish/extract_layout.py +++ b/openpype/hosts/maya/plugins/publish/extract_layout.py @@ -6,7 +6,7 @@ from maya import cmds from maya.api import OpenMaya as om from openpype.client import get_representation_by_id -from openpype.pipeline import legacy_io, publish +from openpype.pipeline import publish class ExtractLayout(publish.Extractor): @@ -23,14 +23,14 @@ class ExtractLayout(publish.Extractor): stagingdir = self.staging_dir(instance) # Perform extraction - self.log.info("Performing extraction..") + self.log.debug("Performing extraction..") if "representations" not in instance.data: instance.data["representations"] = [] json_data = [] # TODO representation queries can be refactored to be faster - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] for asset in cmds.sets(str(instance), query=True): # Find the container @@ -64,7 +64,7 @@ class ExtractLayout(publish.Extractor): fields=["parent", "context.family"] ) - 
self.log.info(representation) + self.log.debug(representation) version_id = representation.get("parent") family = representation.get("context").get("family") @@ -159,5 +159,5 @@ class ExtractLayout(publish.Extractor): } instance.data["representations"].append(json_representation) - self.log.info("Extracted instance '%s' to: %s", - instance.name, json_representation) + self.log.debug("Extracted instance '%s' to: %s", + instance.name, json_representation) diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index 3cc95a0b2e..635c2c425c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -1,12 +1,12 @@ # -*- coding: utf-8 -*- """Maya look extractor.""" +import sys from abc import ABCMeta, abstractmethod from collections import OrderedDict import contextlib import json import logging import os -import platform import tempfile import six import attr @@ -15,10 +15,17 @@ import pyblish.api from maya import cmds # noqa -from openpype.lib.vendor_bin_utils import find_executable -from openpype.lib import source_hash, run_subprocess, get_oiio_tools_path +from openpype.lib import ( + find_executable, + source_hash, + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) + from openpype.pipeline import legacy_io, publish, KnownPublishError from openpype.hosts.maya.api import lib +from openpype import AYON_SERVER_ENABLED # Modes for transfer COPY = 1 @@ -51,6 +58,12 @@ def find_paths_by_hash(texture_hash): str: path to texture if found. """ + if AYON_SERVER_ENABLED: + raise KnownPublishError( + "This is a bug. \"find_paths_by_hash\" is not compatible with " + "AYON." + ) + key = "data.sourceHashes.{0}".format(texture_hash) return legacy_io.distinct(key, {"type": "version"}) @@ -164,6 +177,24 @@ class MakeRSTexBin(TextureProcessor): source ] + # if color management is enabled we pass color space information + if color_management["enabled"]: + config_path = color_management["config"] + if not os.path.exists(config_path): + raise RuntimeError("OCIO config not found at: " + "{}".format(config_path)) + + if not os.getenv("OCIO"): + self.log.debug( + "OCIO environment variable not set." + "Setting it with OCIO config from Maya." + ) + os.environ["OCIO"] = config_path + + self.log.debug("converting colorspace {0} to redshift render " + "colorspace".format(colorspace)) + subprocess_args.extend(["-cs", colorspace]) + hash_args = ["rstex"] texture_hash = source_hash(source, *hash_args) @@ -174,11 +205,11 @@ class MakeRSTexBin(TextureProcessor): self.log.debug(" ".join(subprocess_args)) try: - run_subprocess(subprocess_args) + run_subprocess(subprocess_args, logger=self.log) except Exception: self.log.error("Texture .rstexbin conversion failed", exc_info=True) - raise + six.reraise(*sys.exc_info()) return TextureResult( path=destination, @@ -267,12 +298,11 @@ class MakeTX(TextureProcessor): """ - maketx_path = get_oiio_tools_path("maketx") - - if not maketx_path: - raise AssertionError( - "OIIO 'maketx' tool not found. 
Result: {}".format(maketx_path) - ) + try: + maketx_args = get_oiio_tool_args("maketx") + except ToolNotFoundError: + raise KnownPublishError( + "OpenImageIO is not available on the machine") # Define .tx filepath in staging if source file is not .tx fname, ext = os.path.splitext(os.path.basename(source)) @@ -302,7 +332,7 @@ class MakeTX(TextureProcessor): render_colorspace = color_management["rendering_space"] - self.log.info("tx: converting colorspace {0} " + self.log.debug("tx: converting colorspace {0} " "-> {1}".format(colorspace, render_colorspace)) args.extend(["--colorconvert", colorspace, render_colorspace]) @@ -326,10 +356,9 @@ class MakeTX(TextureProcessor): if not os.path.exists(resources_dir): os.makedirs(resources_dir) - self.log.info("Generating .tx file for %s .." % source) + self.log.debug("Generating .tx file for %s .." % source) - subprocess_args = [ - maketx_path, + subprocess_args = maketx_args + [ "-v", # verbose "-u", # update mode # --checknan doesn't influence the output file but aborts the @@ -412,12 +441,12 @@ class ExtractLook(publish.Extractor): instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break except KeyError: @@ -444,12 +473,12 @@ class ExtractLook(publish.Extractor): # Remove all members of the sets so they are not included in the # exported file by accident - self.log.info("Processing sets..") + self.log.debug("Processing sets..") lookdata = instance.data["lookData"] relationships = lookdata["relationships"] sets = list(relationships.keys()) if not sets: - self.log.info("No sets found") + self.log.debug("No sets found for the look") return # Specify texture processing executables to activate @@ -462,7 +491,7 @@ class ExtractLook(publish.Extractor): "rstex": MakeRSTexBin }.items(): if instance.data.get(key, False): - processor = Processor() + processor = Processor(log=self.log) processor.apply_settings(context.data["system_settings"], context.data["project_settings"]) processors.append(processor) @@ -481,7 +510,7 @@ class ExtractLook(publish.Extractor): remap = results["attrRemap"] # Extract in correct render layer - self.log.info("Extracting look maya scene file: {}".format(maya_path)) + self.log.debug("Extracting look maya scene file: {}".format(maya_path)) layer = instance.data.get("renderlayer", "defaultRenderLayer") with lib.renderlayer(layer): # TODO: Ensure membership edits don't become renderlayer overrides @@ -507,12 +536,12 @@ class ExtractLook(publish.Extractor): ) # Write the JSON data - self.log.info("Extract json..") data = { "attributes": lookdata["attributes"], "relationships": relationships } + self.log.debug("Extracting json file: {}".format(json_path)) with open(json_path, "w") as f: json.dump(data, f) @@ -553,8 +582,8 @@ class ExtractLook(publish.Extractor): # Source hash for the textures instance.data["sourceHashes"] = hashes - self.log.info("Extracted instance '%s' to: %s" % (instance.name, - maya_path)) + self.log.debug("Extracted instance '%s' to: %s" % (instance.name, + maya_path)) def _set_resource_result_colorspace(self, resource, colorspace): """Update resource resulting colorspace after texture processing""" @@ -585,14 +614,19 @@ class ExtractLook(publish.Extractor): resources 
= instance.data["resources"] color_management = lib.get_color_management_preferences() - # Temporary fix to NOT create hardlinks on windows machines - if platform.system().lower() == "windows": - self.log.info( + # TODO: Temporary disable all hardlinking, due to the feature not being + # used or properly working. + self.log.info( + "Forcing copy instead of hardlink." + ) + force_copy = True + + if not force_copy and platform.system().lower() == "windows": + # Temporary fix to NOT create hardlinks on windows machines + self.log.warning( "Forcing copy instead of hardlink due to issues on Windows..." ) force_copy = True - else: - force_copy = instance.data.get("forceCopy", False) destinations_cache = {} @@ -667,11 +701,11 @@ class ExtractLook(publish.Extractor): destination = get_resource_destination_cached(source) if force_copy or texture_result.transfer_mode == COPY: transfers.append((source, destination)) - self.log.info('file will be copied {} -> {}'.format( + self.log.debug('file will be copied {} -> {}'.format( source, destination)) elif texture_result.transfer_mode == HARDLINK: hardlinks.append((source, destination)) - self.log.info('file will be hardlinked {} -> {}'.format( + self.log.debug('file will be hardlinked {} -> {}'.format( source, destination)) # Store the hashes from hash to destination to include in the @@ -703,7 +737,7 @@ class ExtractLook(publish.Extractor): color_space_attr = "{}.colorSpace".format(node) remap[color_space_attr] = resource["result_color_space"] - self.log.info("Finished remapping destinations ...") + self.log.debug("Finished remapping destinations ...") return { "fileTransfers": transfers, @@ -811,8 +845,8 @@ class ExtractLook(publish.Extractor): if not processed_result: raise RuntimeError("Texture Processor {} returned " "no result.".format(processor)) - self.log.info("Generated processed " - "texture: {}".format(processed_result.path)) + self.log.debug("Generated processed " + "texture: {}".format(processed_result.path)) # TODO: Currently all processors force copy instead of allowing # hardlinks using source hashes. 
This should be refactored @@ -823,7 +857,7 @@ class ExtractLook(publish.Extractor): if not force_copy: existing = self._get_existing_hashed_texture(filepath) if existing: - self.log.info("Found hash in database, preparing hardlink..") + self.log.debug("Found hash in database, preparing hardlink..") return TextureResult( path=filepath, file_hash=texture_hash, diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py index c2411ca651..ab170fe48c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py +++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py @@ -29,12 +29,12 @@ class ExtractMayaSceneRaw(publish.Extractor): instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break except KeyError: @@ -63,7 +63,7 @@ class ExtractMayaSceneRaw(publish.Extractor): selection += self._get_loaded_containers(members) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): cmds.select(selection, noExpand=True) cmds.file(path, @@ -87,7 +87,8 @@ class ExtractMayaSceneRaw(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, path)) + self.log.debug("Extracted instance '%s' to: %s" % (instance.name, + path)) @staticmethod def _get_loaded_containers(members): diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_usd.py b/openpype/hosts/maya/plugins/publish/extract_maya_usd.py new file mode 100644 index 0000000000..8c32ac1e39 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_maya_usd.py @@ -0,0 +1,293 @@ +import os +import six +import json +import contextlib + +from maya import cmds + +import pyblish.api +from openpype.pipeline import publish +from openpype.hosts.maya.api.lib import maintained_selection + + +@contextlib.contextmanager +def usd_export_attributes(nodes, attrs=None, attr_prefixes=None, mapping=None): + """Define attributes for the given nodes that should be exported. + + MayaUSDExport will export custom attributes if the Maya node has a + string attribute `USD_UserExportedAttributesJson` that provides an + export mapping for the maya attributes. This context manager will try + to autogenerate such an attribute during the export to include attributes + for the export. + + Arguments: + nodes (List[str]): Nodes to process. + attrs (Optional[List[str]]): Full name of attributes to include. + attr_prefixes (Optional[List[str]]): Prefixes of attributes to include. + mapping (Optional[Dict[Dict]]): A mapping per attribute name for the + conversion to a USD attribute, including renaming, defining type, + converting attribute precision, etc. This match the usual + `USD_UserExportedAttributesJson` json mapping of `mayaUSDExport`. + When no mapping provided for an attribute it will use `{}` as + value. 
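+
+ The mapping is written as JSON to a temporary
+ "USD_UserExportedAttributesJson" string attribute on each node for
+ the duration of the export and reverted afterwards; any pre-existing
+ mapping on a node takes precedence over the generated entries.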
+ + Examples: + >>> with usd_export_attributes( + >>> ["pCube1"], attrs="myDoubleAttributeAsFloat", mapping={ + >>> "myDoubleAttributeAsFloat": { + >>> "usdAttrName": "my:namespace:attrib", + >>> "translateMayaDoubleToUsdSinglePrecision": True, + >>> } + >>> }) + + """ + # todo: this might be better done with a custom export chaser + # see `chaser` argument for `mayaUSDExport` + + import maya.api.OpenMaya as om + + if not attrs and not attr_prefixes: + # context manager does nothing + yield + return + + if attrs is None: + attrs = [] + if attr_prefixes is None: + attr_prefixes = [] + if mapping is None: + mapping = {} + + usd_json_attr = "USD_UserExportedAttributesJson" + strings = attrs + ["{}*".format(prefix) for prefix in attr_prefixes] + context_state = {} + for node in set(nodes): + node_attrs = cmds.listAttr(node, st=strings) + if not node_attrs: + # Nothing to do for this node + continue + + node_attr_data = {} + for node_attr in set(node_attrs): + node_attr_data[node_attr] = mapping.get(node_attr, {}) + + if cmds.attributeQuery(usd_json_attr, node=node, exists=True): + existing_node_attr_value = cmds.getAttr( + "{}.{}".format(node, usd_json_attr) + ) + if existing_node_attr_value and existing_node_attr_value != "{}": + # Any existing attribute mappings in an existing + # `USD_UserExportedAttributesJson` attribute always take + # precedence over what this function tries to imprint + existing_node_attr_data = json.loads(existing_node_attr_value) + node_attr_data.update(existing_node_attr_data) + + context_state[node] = json.dumps(node_attr_data) + + sel = om.MSelectionList() + dg_mod = om.MDGModifier() + fn_string = om.MFnStringData() + fn_typed = om.MFnTypedAttribute() + try: + for node, value in context_state.items(): + data = fn_string.create(value) + sel.clear() + if cmds.attributeQuery(usd_json_attr, node=node, exists=True): + # Set the attribute value + sel.add("{}.{}".format(node, usd_json_attr)) + plug = sel.getPlug(0) + dg_mod.newPlugValue(plug, data) + else: + # Create attribute with the value as default value + sel.add(node) + node_obj = sel.getDependNode(0) + attr_obj = fn_typed.create(usd_json_attr, + usd_json_attr, + om.MFnData.kString, + data) + dg_mod.addAttribute(node_obj, attr_obj) + dg_mod.doIt() + yield + finally: + dg_mod.undoIt() + + +class ExtractMayaUsd(publish.Extractor): + """Extractor for Maya USD Asset data. + + Upon publish a .usd (or .usdz) asset file will typically be written. + """ + + label = "Extract Maya USD Asset" + hosts = ["maya"] + families = ["mayaUsd"] + + @property + def options(self): + """Overridable options for Maya USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. 
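+
+ e.g. {"defaultUSDFormat": str} allows an instance to override the
+ "defaultUSDFormat" option with a string value.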
+ + """ + + # TODO: Support more `mayaUSDExport` parameters + return { + "defaultUSDFormat": str, + "stripNamespaces": bool, + "mergeTransformAndShape": bool, + "exportDisplayColor": bool, + "exportColorSets": bool, + "exportInstances": bool, + "exportUVs": bool, + "exportVisibility": bool, + "exportComponentTags": bool, + "exportRefsAsInstanceable": bool, + "eulerFilter": bool, + "renderableOnly": bool, + "jobContext": (list, None) # optional list + # "worldspace": bool, + } + + @property + def default_options(self): + """The default options for Maya USD Export.""" + + # TODO: Support more `mayaUSDExport` parameters + return { + "defaultUSDFormat": "usdc", + "stripNamespaces": False, + "mergeTransformAndShape": False, + "exportDisplayColor": False, + "exportColorSets": True, + "exportInstances": True, + "exportUVs": True, + "exportVisibility": True, + "exportComponentTags": True, + "exportRefsAsInstanceable": False, + "eulerFilter": True, + "renderableOnly": False, + "jobContext": None + # "worldspace": False + } + + def parse_overrides(self, instance, options): + """Inspect data of instance to determine overridden options""" + + for key in instance.data: + if key not in self.options: + continue + + # Ensure the data is of correct type + value = instance.data[key] + if isinstance(value, six.text_type): + value = str(value) + if not isinstance(value, self.options[key]): + self.log.warning( + "Overridden attribute {key} was of " + "the wrong type: {invalid_type} " + "- should have been {valid_type}".format( + key=key, + invalid_type=type(value).__name__, + valid_type=self.options[key].__name__)) + continue + + options[key] = value + + return options + + def filter_members(self, members): + # Can be overridden by inherited classes + return members + + def process(self, instance): + + # Load plugin first + cmds.loadPlugin("mayaUsdPlugin", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{0}.usd".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace('\\', '/') + + # Parse export options + options = self.default_options + options = self.parse_overrides(instance, options) + self.log.debug("Export options: {0}".format(options)) + + # Perform extraction + self.log.debug("Performing extraction ...") + + members = instance.data("setMembers") + self.log.debug('Collected objects: {}'.format(members)) + members = self.filter_members(members) + if not members: + self.log.error('No members!') + return + + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + + def parse_attr_str(attr_str): + result = list() + for attr in attr_str.split(","): + attr = attr.strip() + if not attr: + continue + result.append(attr) + return result + + attrs = parse_attr_str(instance.data.get("attr", "")) + attrs += instance.data.get("userDefinedAttributes", []) + attrs += ["cbId"] + attr_prefixes = parse_attr_str(instance.data.get("attrPrefix", "")) + + self.log.debug('Exporting USD: {} / {}'.format(file_path, members)) + with maintained_selection(): + with usd_export_attributes(instance[:], + attrs=attrs, + attr_prefixes=attr_prefixes): + cmds.mayaUSDExport(file=file_path, + frameRange=(start, end), + frameStride=instance.data.get("step", 1.0), + exportRoots=members, + **options) + + representation = { + 'name': "usd", + 'ext': "usd", + 'files': file_name, + 'stagingDir': staging_dir + } + instance.data.setdefault("representations", []).append(representation) + + self.log.debug( + "Extracted 
instance {} to {}".format(instance.name, file_path) + ) + + +class ExtractMayaUsdAnim(ExtractMayaUsd): + """Extractor for Maya USD Animation Sparse Cache data. + + This will extract the sparse cache data from the scene and generate a + USD file with all the animation data. + + Upon publish a .usd sparse cache will be written. + """ + label = "Extract Maya USD Animation Sparse Cache" + families = ["animation", "mayaUsd"] + match = pyblish.api.Subset + + def filter_members(self, members): + out_set = next((i for i in members if i.endswith("out_SET")), None) + + if out_set is None: + self.log.warning("Expecting out_SET") + return None + + members = cmds.ls(cmds.sets(out_set, query=True), long=True) + return members diff --git a/openpype/hosts/maya/plugins/publish/extract_model.py b/openpype/hosts/maya/plugins/publish/extract_model.py index 7c8c3a2981..29c952ebbc 100644 --- a/openpype/hosts/maya/plugins/publish/extract_model.py +++ b/openpype/hosts/maya/plugins/publish/extract_model.py @@ -8,7 +8,8 @@ from openpype.pipeline import publish from openpype.hosts.maya.api import lib -class ExtractModel(publish.Extractor): +class ExtractModel(publish.Extractor, + publish.OptionalPyblishPluginMixin): """Extract as Model (Maya Scene). Only extracts contents based on the original "setMembers" data to ensure @@ -31,16 +32,19 @@ class ExtractModel(publish.Extractor): def process(self, instance): """Plugin entry point.""" + if not self.is_active(instance.data): + return + ext_mapping = ( instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break except KeyError: @@ -52,7 +56,7 @@ class ExtractModel(publish.Extractor): path = os.path.join(stagingdir, filename) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") # Get only the shape contents we need in such a way that we avoid # taking along intermediateObjects @@ -98,4 +102,5 @@ class ExtractModel(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, path)) + self.log.debug("Extracted instance '%s' to: %s" % (instance.name, + path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py index 6fe7cf0d55..c2bebeaee6 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py @@ -101,10 +101,10 @@ class ExtractMultiverseLook(publish.Extractor): # Parse export options options = self.default_options - self.log.info("Export options: {0}".format(options)) + self.log.debug("Export options: {0}".format(options)) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): members = instance.data("setMembers") @@ -114,7 +114,7 @@ class ExtractMultiverseLook(publish.Extractor): type="mvUsdCompoundShape", noIntermediate=True, long=True) - self.log.info('Collected object {}'.format(members)) + self.log.debug('Collected object {}'.format(members)) if len(members) > 1: self.log.error('More than one member: {}'.format(members)) @@ 
-153,5 +153,5 @@ class ExtractMultiverseLook(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance {} to {}".format( + self.log.debug("Extracted instance {} to {}".format( instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py index 4399eacda1..60185bb152 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py @@ -28,7 +28,7 @@ class ExtractMultiverseUsd(publish.Extractor): label = "Extract Multiverse USD Asset" hosts = ["maya"] - families = ["usd"] + families = ["mvUsd"] scene_type = "usd" file_formats = ["usd", "usda", "usdz"] @@ -150,7 +150,6 @@ class ExtractMultiverseUsd(publish.Extractor): return options def get_default_options(self): - self.log.info("ExtractMultiverseUsd get_default_options") return self.default_options def filter_members(self, members): @@ -173,19 +172,19 @@ class ExtractMultiverseUsd(publish.Extractor): # Parse export options options = self.get_default_options() options = self.parse_overrides(instance, options) - self.log.info("Export options: {0}".format(options)) + self.log.debug("Export options: {0}".format(options)) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): members = instance.data("setMembers") - self.log.info('Collected objects: {}'.format(members)) + self.log.debug('Collected objects: {}'.format(members)) members = self.filter_members(members) if not members: self.log.error('No members!') return - self.log.info(' - filtered: {}'.format(members)) + self.log.debug(' - filtered: {}'.format(members)) import multiverse @@ -229,7 +228,7 @@ class ExtractMultiverseUsd(publish.Extractor): self.log.debug(" - {}={}".format(key, value)) setattr(asset_write_opts, key, value) - self.log.info('WriteAsset: {} / {}'.format(file_path, members)) + self.log.debug('WriteAsset: {} / {}'.format(file_path, members)) multiverse.WriteAsset(file_path, members, asset_write_opts) if "representations" not in instance.data: @@ -243,7 +242,7 @@ class ExtractMultiverseUsd(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance {} to {}".format( + self.log.debug("Extracted instance {} to {}".format( instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py index a62729c198..7966c4fa93 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py @@ -105,14 +105,14 @@ class ExtractMultiverseUsdComposition(publish.Extractor): # Parse export options options = self.default_options options = self.parse_overrides(instance, options) - self.log.info("Export options: {0}".format(options)) + self.log.debug("Export options: {0}".format(options)) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): members = instance.data("setMembers") - self.log.info('Collected object {}'.format(members)) + self.log.debug('Collected object {}'.format(members)) import multiverse @@ -175,5 +175,5 @@ class ExtractMultiverseUsdComposition(publish.Extractor): } instance.data["representations"].append(representation) - 
self.log.info("Extracted instance {} to {}".format( - instance.name, file_path)) + self.log.debug("Extracted instance {} to {}".format(instance.name, + file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py index cf610ac6b4..e4a97db6e4 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py @@ -87,10 +87,10 @@ class ExtractMultiverseUsdOverride(publish.Extractor): # Parse export options options = self.default_options - self.log.info("Export options: {0}".format(options)) + self.log.debug("Export options: {0}".format(options)) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): members = instance.data("setMembers") @@ -100,7 +100,7 @@ class ExtractMultiverseUsdOverride(publish.Extractor): type="mvUsdCompoundShape", noIntermediate=True, long=True) - self.log.info("Collected object {}".format(members)) + self.log.debug("Collected object {}".format(members)) # TODO: Deal with asset, composition, override with options. import multiverse @@ -153,5 +153,5 @@ class ExtractMultiverseUsdOverride(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance {} to {}".format( + self.log.debug("Extracted instance {} to {}".format( instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_obj.py b/openpype/hosts/maya/plugins/publish/extract_obj.py index 518b0f0ff8..ca94130d09 100644 --- a/openpype/hosts/maya/plugins/publish/extract_obj.py +++ b/openpype/hosts/maya/plugins/publish/extract_obj.py @@ -30,7 +30,7 @@ class ExtractObj(publish.Extractor): # The export requires forward slashes because we need to # format it into a string in a mel expression - self.log.info("Extracting OBJ to: {0}".format(path)) + self.log.debug("Extracting OBJ to: {0}".format(path)) members = instance.data("setMembers") members = cmds.ls(members, @@ -39,8 +39,8 @@ class ExtractObj(publish.Extractor): type=("mesh", "nurbsCurve"), noIntermediate=True, long=True) - self.log.info("Members: {0}".format(members)) - self.log.info("Instance: {0}".format(instance[:])) + self.log.debug("Members: {0}".format(members)) + self.log.debug("Instance: {0}".format(instance[:])) if not cmds.pluginInfo('objExport', query=True, loaded=True): cmds.loadPlugin('objExport') @@ -74,4 +74,4 @@ class ExtractObj(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract OBJ successful to: {0}".format(path)) + self.log.debug("Extract OBJ successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 9580c13841..cfab239da3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -48,7 +48,7 @@ class ExtractPlayblast(publish.Extractor): self.log.debug("playblast path {}".format(path)) def process(self, instance): - self.log.info("Extracting capture..") + self.log.debug("Extracting capture..") # get scene fps fps = instance.data.get("fps") or instance.context.data.get("fps") @@ -62,7 +62,7 @@ class ExtractPlayblast(publish.Extractor): if end is None: end = cmds.playbackOptions(query=True, animationEndTime=True) - self.log.info("start: {}, end: {}".format(start, end)) + 
self.log.debug("start: {}, end: {}".format(start, end)) # get cameras camera = instance.data["review_camera"] @@ -119,7 +119,7 @@ class ExtractPlayblast(publish.Extractor): filename = "{0}".format(instance.name) path = os.path.join(stagingdir, filename) - self.log.info("Outputting images to %s" % path) + self.log.debug("Outputting images to %s" % path) preset["filename"] = path preset["overwrite"] = True @@ -237,7 +237,7 @@ class ExtractPlayblast(publish.Extractor): self.log.debug("collection head {}".format(filebase)) if filebase in filename: frame_collection = collection - self.log.info( + self.log.debug( "we found collection of interest {}".format( str(frame_collection))) diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py index f44c13767c..0cc802fa7a 100644 --- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py +++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py @@ -45,7 +45,7 @@ class ExtractAlembic(publish.Extractor): attr_prefixes = instance.data.get("attrPrefix", "").split(";") attr_prefixes = [value for value in attr_prefixes if value.strip()] - self.log.info("Extracting pointcache..") + self.log.debug("Extracting pointcache..") dirname = self.staging_dir(instance) parent_dir = self.staging_dir(instance) @@ -86,7 +86,6 @@ class ExtractAlembic(publish.Extractor): end=end)) suspend = not instance.data.get("refresh", False) - self.log.info(nodes) with suspended_refresh(suspend=suspend): with maintained_selection(): cmds.select(nodes, noExpand=True) @@ -108,13 +107,14 @@ class ExtractAlembic(publish.Extractor): } instance.data["representations"].append(representation) - instance.context.data["cleanupFullPaths"].append(path) + if not instance.data.get("stagingDir_persistent", False): + instance.context.data["cleanupFullPaths"].append(path) - self.log.info("Extracted {} to {}".format(instance, dirname)) + self.log.debug("Extracted {} to {}".format(instance, dirname)) # Extract proxy. if not instance.data.get("proxy"): - self.log.info("No proxy nodes found. Skipping proxy extraction.") + self.log.debug("No proxy nodes found. 
Skipping proxy extraction.") return path = path.replace(".abc", "_proxy.abc") diff --git a/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py index cf6351fdca..d9bec87cfd 100644 --- a/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py +++ b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py @@ -32,7 +32,7 @@ class ExtractProxyAlembic(publish.Extractor): attr_prefixes = instance.data.get("attrPrefix", "").split(";") attr_prefixes = [value for value in attr_prefixes if value.strip()] - self.log.info("Extracting Proxy Alembic..") + self.log.debug("Extracting Proxy Alembic..") dirname = self.staging_dir(instance) filename = "{name}.abc".format(**instance.data) @@ -80,9 +80,10 @@ class ExtractProxyAlembic(publish.Extractor): } instance.data["representations"].append(representation) - instance.context.data["cleanupFullPaths"].append(path) + if not instance.data.get("stagingDir_persistent", False): + instance.context.data["cleanupFullPaths"].append(path) - self.log.info("Extracted {} to {}".format(instance, dirname)) + self.log.debug("Extracted {} to {}".format(instance, dirname)) # remove the bounding box bbox_master = cmds.ls("bbox_grp") cmds.delete(bbox_master) diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py index 4377275635..3868270b79 100644 --- a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py @@ -29,15 +29,21 @@ class ExtractRedshiftProxy(publish.Extractor): if not anim_on: # Remove animation information because it is not required for # non-animated subsets - instance.data.pop("proxyFrameStart", None) - instance.data.pop("proxyFrameEnd", None) + keys = ["frameStart", + "frameEnd", + "handleStart", + "handleEnd", + "frameStartHandle", + "frameEndHandle"] + for key in keys: + instance.data.pop(key, None) else: - start_frame = instance.data["proxyFrameStart"] - end_frame = instance.data["proxyFrameEnd"] + start_frame = instance.data["frameStartHandle"] + end_frame = instance.data["frameEndHandle"] rs_options = "{}startFrame={};endFrame={};frameStep={};".format( rs_options, start_frame, - end_frame, instance.data["proxyFrameStep"] + end_frame, instance.data["step"] ) root, ext = os.path.splitext(file_path) @@ -48,12 +54,12 @@ class ExtractRedshiftProxy(publish.Extractor): for frame in range( int(start_frame), int(end_frame) + 1, - int(instance.data["proxyFrameStep"]), + int(instance.data["step"]), )] # vertex_colors = instance.data.get("vertexColors", False) # Write out rs file - self.log.info("Writing: '%s'" % file_path) + self.log.debug("Writing: '%s'" % file_path) with maintained_selection(): cmds.select(instance.data["setMembers"], noExpand=True) cmds.file(file_path, @@ -74,9 +80,7 @@ class ExtractRedshiftProxy(publish.Extractor): 'files': repr_files, "stagingDir": staging_dir, } - if anim_on: - representation["frameStart"] = instance.data["proxyFrameStart"] instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" - % (instance.name, staging_dir)) + self.log.debug("Extracted instance '%s' to: %s" + % (instance.name, staging_dir)) diff --git a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py index 5970c038a4..7e21f5282e 100644 --- a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py +++ 
b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py @@ -37,5 +37,5 @@ class ExtractRenderSetup(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info( + self.log.debug( "Extracted instance '%s' to: %s" % (instance.name, json_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_rig.py b/openpype/hosts/maya/plugins/publish/extract_rig.py index c71a2f710d..1ffc9a7dae 100644 --- a/openpype/hosts/maya/plugins/publish/extract_rig.py +++ b/openpype/hosts/maya/plugins/publish/extract_rig.py @@ -22,13 +22,13 @@ class ExtractRig(publish.Extractor): instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( - "Using {} as scene type".format(self.scene_type)) + self.log.debug( + "Using '.{}' as scene type".format(self.scene_type)) break except AttributeError: # no preset found @@ -39,7 +39,7 @@ class ExtractRig(publish.Extractor): path = os.path.join(dir_path, filename) # Perform extraction - self.log.info("Performing extraction ...") + self.log.debug("Performing extraction ...") with maintained_selection(): cmds.select(instance, noExpand=True) cmds.file(path, @@ -63,4 +63,4 @@ class ExtractRig(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, path)) + self.log.debug("Extracted instance '%s' to: %s", instance.name, path) diff --git a/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py b/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py new file mode 100644 index 0000000000..50c1fb3bde --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +import os + +from maya import cmds # noqa +import pyblish.api + +from openpype.pipeline import publish +from openpype.pipeline.publish import OptionalPyblishPluginMixin +from openpype.hosts.maya.api import fbx + + +class ExtractSkeletonMesh(publish.Extractor, + OptionalPyblishPluginMixin): + """Extract Rig in FBX format from Maya. + + This extracts the rig in FBX with the constraints + and referenced asset content included. + This also optionally extracts an animated rig in FBX with + geometries included.
+ + """ + order = pyblish.api.ExtractorOrder + label = "Extract Skeleton Mesh" + hosts = ["maya"] + families = ["rig.fbx"] + + def process(self, instance): + if not self.is_active(instance.data): + return + # Define output path + staging_dir = self.staging_dir(instance) + filename = "{0}.fbx".format(instance.name) + path = os.path.join(staging_dir, filename) + + fbx_exporter = fbx.FBXExtractor(log=self.log) + out_set = instance.data.get("skeleton_mesh", []) + + instance.data["constraints"] = True + instance.data["skeletonDefinitions"] = True + + fbx_exporter.set_options_from_instance(instance) + + # Export + fbx_exporter.export(out_set, path) + + representations = instance.data.setdefault("representations", []) + representations.append({ + 'name': 'fbx', + 'ext': 'fbx', + 'files': filename, + "stagingDir": staging_dir + }) + + self.log.debug("Extract FBX to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index 4160ac4cb2..c0be3d77db 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -24,7 +24,7 @@ class ExtractThumbnail(publish.Extractor): families = ["review"] def process(self, instance): - self.log.info("Extracting capture..") + self.log.debug("Extracting capture..") camera = instance.data["review_camera"] @@ -92,11 +92,10 @@ class ExtractThumbnail(publish.Extractor): "Create temp directory {} for thumbnail".format(dst_staging) ) # Store new staging to cleanup paths - instance.context.data["cleanupFullPaths"].append(dst_staging) filename = "{0}".format(instance.name) path = os.path.join(dst_staging, filename) - self.log.info("Outputting images to %s" % path) + self.log.debug("Outputting images to %s" % path) preset["filename"] = path preset["overwrite"] = True @@ -159,7 +158,7 @@ class ExtractThumbnail(publish.Extractor): _, thumbnail = os.path.split(playblast) - self.log.info("file list {}".format(thumbnail)) + self.log.debug("file list {}".format(thumbnail)) if "representations" not in instance.data: instance.data["representations"] = [] diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py index e1f847f31a..9c2f55a1ef 100644 --- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py @@ -32,7 +32,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor): optional = True def process(self, instance): - self.log.info("Extracting pointcache..") + self.log.debug("Extracting pointcache..") geo = cmds.listRelatives( instance.data.get("geometry"), allDescendents=True, fullPath=True) @@ -57,9 +57,9 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor): # to format it into a string in a mel expression path = path.replace('\\', '/') - self.log.info("Extracting ABC to: {0}".format(path)) - self.log.info("Members: {0}".format(nodes)) - self.log.info("Instance: {0}".format(instance[:])) + self.log.debug("Extracting ABC to: {0}".format(path)) + self.log.debug("Members: {0}".format(nodes)) + self.log.debug("Instance: {0}".format(instance[:])) options = { "step": instance.data.get("step", 1.0), @@ -74,7 +74,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor): "worldSpace": instance.data.get("worldSpace", True) } - self.log.info("Options: {}".format(options)) + self.log.debug("Options: {}".format(options)) if 
int(cmds.about(version=True)) >= 2017: # Since Maya 2017 alembic supports multiple uv sets - write them. @@ -105,4 +105,4 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract ABC successful to: {0}".format(path)) + self.log.debug("Extract ABC successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py index b162ce47f7..96175a07d7 100644 --- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py @@ -46,9 +46,9 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor): # to format it into a string in a mel expression path = path.replace('\\', '/') - self.log.info("Extracting FBX to: {0}".format(path)) - self.log.info("Members: {0}".format(to_extract)) - self.log.info("Instance: {0}".format(instance[:])) + self.log.debug("Extracting FBX to: {0}".format(path)) + self.log.debug("Members: {0}".format(to_extract)) + self.log.debug("Instance: {0}".format(instance[:])) fbx_exporter.set_options_from_instance(instance) @@ -70,7 +70,7 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor): renamed_to_extract.append("|".join(node_path)) with renamed(original_parent, parent_node): - self.log.info("Extracting: {}".format(renamed_to_extract, path)) + self.log.debug("Extracting: {}".format(renamed_to_extract, path)) fbx_exporter.export(renamed_to_extract, path) if "representations" not in instance.data: @@ -84,4 +84,4 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract FBX successful to: {0}".format(path)) + self.log.debug("Extract FBX successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py index 44f0615a27..26ab0827e4 100644 --- a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py @@ -37,15 +37,15 @@ class ExtractUnrealStaticMesh(publish.Extractor): # to format it into a string in a mel expression path = path.replace('\\', '/') - self.log.info("Extracting FBX to: {0}".format(path)) - self.log.info("Members: {0}".format(members)) - self.log.info("Instance: {0}".format(instance[:])) + self.log.debug("Extracting FBX to: {0}".format(path)) + self.log.debug("Members: {0}".format(members)) + self.log.debug("Instance: {0}".format(instance[:])) fbx_exporter.set_options_from_instance(instance) with maintained_selection(): with parent_nodes(members): - self.log.info("Un-parenting: {}".format(members)) + self.log.debug("Un-parenting: {}".format(members)) fbx_exporter.export(members, path) if "representations" not in instance.data: @@ -59,4 +59,4 @@ class ExtractUnrealStaticMesh(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extract FBX successful to: {0}".format(path)) + self.log.debug("Extract FBX successful to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py b/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py new file mode 100644 index 0000000000..963285093e --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py @@ -0,0 +1,61 @@ +import os + +from maya import cmds + +from 
openpype.pipeline import publish + + +class ExtractYetiCache(publish.Extractor): + """Producing Yeti cache files using scene time range. + + This will extract Yeti cache file sequence and fur settings. + """ + + label = "Extract Yeti Cache" + hosts = ["maya"] + families = ["yeticacheUE"] + + def process(self, instance): + + yeti_nodes = cmds.ls(instance, type="pgYetiMaya") + if not yeti_nodes: + raise RuntimeError("No pgYetiMaya nodes found in the instance") + + # Define extract output file path + dirname = self.staging_dir(instance) + + # Collect information for writing cache + start_frame = instance.data["frameStartHandle"] + end_frame = instance.data["frameEndHandle"] + preroll = instance.data["preroll"] + if preroll > 0: + start_frame -= preroll + + kwargs = {} + samples = instance.data.get("samples", 0) + if samples == 0: + kwargs.update({"sampleTimes": "0.0 1.0"}) + else: + kwargs.update({"samples": samples}) + + self.log.debug(f"Writing out cache {start_frame} - {end_frame}") + filename = f"{instance.name}.abc" + path = os.path.join(dirname, filename) + cmds.pgYetiCommand(yeti_nodes, + writeAlembic=path, + range=(start_frame, end_frame), + asUnrealAbc=True, + **kwargs) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + 'stagingDir': dirname + } + instance.data["representations"].append(representation) + + self.log.debug(f"Extracted {instance} to {dirname}") diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py index df16c6c357..21dfcfffc5 100644 --- a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py +++ b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py @@ -43,7 +43,7 @@ class ExtractVRayProxy(publish.Extractor): vertex_colors = instance.data.get("vertexColors", False) # Write out vrmesh file - self.log.info("Writing: '%s'" % file_path) + self.log.debug("Writing: '%s'" % file_path) with maintained_selection(): cmds.select(instance.data["setMembers"], noExpand=True) cmds.vrayCreateProxy(exportType=1, @@ -68,5 +68,5 @@ class ExtractVRayProxy(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" - % (instance.name, staging_dir)) + self.log.debug("Extracted instance '%s' to: %s" + % (instance.name, staging_dir)) diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py index 8442df1611..b0615149a9 100644 --- a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py +++ b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py @@ -20,13 +20,13 @@ class ExtractVrayscene(publish.Extractor): def process(self, instance): """Plugin entry point.""" if instance.data.get("exportOnFarm"): - self.log.info("vrayscenes will be exported on farm.") + self.log.debug("vrayscenes will be exported on farm.") raise NotImplementedError( "exporting vrayscenes is not implemented") # handle sequence if instance.data.get("vraySceneMultipleFiles"): - self.log.info("vrayscenes will be exported on farm.") + self.log.debug("vrayscenes will be exported on farm.") raise NotImplementedError( "exporting vrayscene sequences not implemented yet") @@ -40,7 +40,6 @@ class ExtractVrayscene(publish.Extractor): layer_name = instance.data.get("layer") staging_dir = self.staging_dir(instance) - self.log.info("staging: {}".format(staging_dir)) template = 
cmds.getAttr("{}.vrscene_filename".format(node)) start_frame = instance.data.get( "frameStartHandle") if instance.data.get( @@ -56,21 +55,21 @@ class ExtractVrayscene(publish.Extractor): staging_dir, "vrayscene", *formatted_name.split("/")) # Write out vrscene file - self.log.info("Writing: '%s'" % file_path) + self.log.debug("Writing: '%s'" % file_path) with maintained_selection(): if "*" not in instance.data["setMembers"]: - self.log.info( + self.log.debug( "Exporting: {}".format(instance.data["setMembers"])) set_members = instance.data["setMembers"] cmds.select(set_members, noExpand=True) else: - self.log.info("Exporting all ...") + self.log.debug("Exporting all ...") set_members = cmds.ls( long=True, objectsOnly=True, geometry=True, lights=True, cameras=True) cmds.select(set_members, noExpand=True) - self.log.info("Appending layer name {}".format(layer_name)) + self.log.debug("Appending layer name {}".format(layer_name)) set_members.append(layer_name) export_in_rs_layer( @@ -93,8 +92,8 @@ class ExtractVrayscene(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" - % (instance.name, staging_dir)) + self.log.debug("Extracted instance '%s' to: %s" + % (instance.name, staging_dir)) @staticmethod def format_vray_output_filename( diff --git a/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py b/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py index 0d2a97bc4b..4bd01c2df2 100644 --- a/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py +++ b/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py @@ -241,7 +241,7 @@ class ExtractWorkfileXgen(publish.Extractor): data[palette] = {attr: old_value} cmds.setAttr(node_attr, value, type="string") - self.log.info( + self.log.debug( "Setting \"{}\" on \"{}\"".format(value, node_attr) ) diff --git a/openpype/hosts/maya/plugins/publish/extract_xgen.py b/openpype/hosts/maya/plugins/publish/extract_xgen.py index 3c9d0bd344..8409330e49 100644 --- a/openpype/hosts/maya/plugins/publish/extract_xgen.py +++ b/openpype/hosts/maya/plugins/publish/extract_xgen.py @@ -77,7 +77,7 @@ class ExtractXgen(publish.Extractor): xgenm.exportPalette( instance.data["xgmPalette"].replace("|", ""), temp_xgen_path ) - self.log.info("Extracted to {}".format(temp_xgen_path)) + self.log.debug("Extracted to {}".format(temp_xgen_path)) # Import xgen onto the duplicate. 
with maintained_selection(): @@ -118,7 +118,7 @@ class ExtractXgen(publish.Extractor): expressions=True ) - self.log.info("Extracted to {}".format(maya_filepath)) + self.log.debug("Extracted to {}".format(maya_filepath)) if os.path.exists(temp_xgen_path): os.remove(temp_xgen_path) diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py index b61f599cab..b113e02219 100644 --- a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py +++ b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py @@ -39,7 +39,7 @@ class ExtractYetiCache(publish.Extractor): else: kwargs.update({"samples": samples}) - self.log.info( + self.log.debug( "Writing out cache {} - {}".format(start_frame, end_frame)) # Start writing the files for snap shot # will be replace by the Yeti node name @@ -53,7 +53,7 @@ class ExtractYetiCache(publish.Extractor): cache_files = [x for x in os.listdir(dirname) if x.endswith(".fur")] - self.log.info("Writing metadata file") + self.log.debug("Writing metadata file") settings = instance.data["fursettings"] fursettings_path = os.path.join(dirname, "yeti.fursettings") with open(fursettings_path, "w") as fp: @@ -63,7 +63,7 @@ class ExtractYetiCache(publish.Extractor): if "representations" not in instance.data: instance.data["representations"] = [] - self.log.info("cache files: {}".format(cache_files[0])) + self.log.debug("cache files: {}".format(cache_files[0])) # Workaround: We do not explicitly register these files with the # representation solely so that we can write multiple sequences @@ -87,4 +87,4 @@ class ExtractYetiCache(publish.Extractor): } ) - self.log.info("Extracted {} to {}".format(instance, dirname)) + self.log.debug("Extracted {} to {}".format(instance, dirname)) diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py index 1d0c5e88c3..da67cb911f 100644 --- a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py +++ b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py @@ -104,12 +104,12 @@ class ExtractYetiRig(publish.Extractor): instance.context.data["project_settings"]["maya"]["ext_mapping"] ) if ext_mapping: - self.log.info("Looking in settings for scene type ...") + self.log.debug("Looking in settings for scene type ...") # use extension mapping for first family found for family in self.families: try: self.scene_type = ext_mapping[family] - self.log.info( + self.log.debug( "Using {} as scene type".format(self.scene_type)) break except KeyError: @@ -127,7 +127,7 @@ class ExtractYetiRig(publish.Extractor): maya_path = os.path.join(dirname, "yeti_rig.{}".format(self.scene_type)) - self.log.info("Writing metadata file") + self.log.debug("Writing metadata file: {}".format(settings_path)) image_search_path = resources_dir = instance.data["resourcesDir"] @@ -147,7 +147,7 @@ class ExtractYetiRig(publish.Extractor): dst = os.path.join(image_search_path, os.path.basename(file)) instance.data['transfers'].append([src, dst]) - self.log.info("adding transfer {} -> {}". format(src, dst)) + self.log.debug("adding transfer {} -> {}". 
format(src, dst)) # Ensure the imageSearchPath is being remapped to the publish folder attr_value = {"%s.imageSearchPath" % n: str(image_search_path) for @@ -182,7 +182,7 @@ class ExtractYetiRig(publish.Extractor): if "representations" not in instance.data: instance.data["representations"] = [] - self.log.info("rig file: {}".format(maya_path)) + self.log.debug("rig file: {}".format(maya_path)) instance.data["representations"].append( { 'name': self.scene_type, @@ -191,7 +191,7 @@ class ExtractYetiRig(publish.Extractor): 'stagingDir': dirname } ) - self.log.info("settings file: {}".format(settings_path)) + self.log.debug("settings file: {}".format(settings_path)) instance.data["representations"].append( { 'name': 'rigsettings', @@ -201,6 +201,6 @@ class ExtractYetiRig(publish.Extractor): } ) - self.log.info("Extracted {} to {}".format(instance, dirname)) + self.log.debug("Extracted {} to {}".format(instance, dirname)) cmds.select(clear=True) diff --git a/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml new file mode 100644 index 0000000000..40169b28f9 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml @@ -0,0 +1,21 @@ + + + +Maya scene units +## Invalid maya scene units + +Detected invalid maya scene units: + +{issues} + + + +### How to repair? + +You can automatically repair the scene units by clicking the Repair action on +the right. + +After that restart publishing with Reload button. + + + diff --git a/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml new file mode 100644 index 0000000000..2ef4bc95c2 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml @@ -0,0 +1,29 @@ + + + +Missing node ids +## Nodes found with missing `cbId` + +Nodes were detected in your scene which are missing required `cbId` +attributes for identification further in the pipeline. + +### How to repair? + +The node ids are auto-generated on scene save, and thus the easiest fix is to +save your scene again. + +After that restart publishing with Reload button. + + +### Invalid nodes + +{nodes} + + +### How could this happen? + +This often happens if you've generated new nodes but haven't saved your scene +after creating the new nodes. + + + diff --git a/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py b/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py index d8e8554b68..759aa23258 100644 --- a/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py +++ b/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py @@ -23,7 +23,7 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin): for palette, data in xgen_attributes.items(): for attr, value in data.items(): node_attr = "{}.{}".format(palette, attr) - self.log.info( + self.log.debug( "Setting \"{}\" on \"{}\"".format(value, node_attr) ) cmds.setAttr(node_attr, value, type="string") @@ -32,5 +32,5 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin): # Need to save the scene, cause the attribute changes above does not # mark the scene as modified so user can exit without committing the # changes. 
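        # Saving via cmds.file(save=True) below writes the open scene back
        # to disk; without it the values set above could be discarded on
        # exit, as they do not dirty the scene.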
- self.log.info("Saving changes.") + self.log.debug("Saving changes.") cmds.file(save=True) diff --git a/openpype/hosts/maya/plugins/publish/validate_animated_reference.py b/openpype/hosts/maya/plugins/publish/validate_animated_reference.py new file mode 100644 index 0000000000..4537892d6d --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_animated_reference.py @@ -0,0 +1,66 @@ +import pyblish.api +import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + PublishValidationError, + ValidateContentsOrder +) +from maya import cmds + + +class ValidateAnimatedReferenceRig(pyblish.api.InstancePlugin): + """Validate all nodes in skeletonAnim_SET are referenced""" + + order = ValidateContentsOrder + hosts = ["maya"] + families = ["animation.fbx"] + label = "Animated Reference Rig" + accepted_controllers = ["transform", "locator"] + actions = [openpype.hosts.maya.api.action.SelectInvalidAction] + + def process(self, instance): + animated_sets = instance.data.get("animated_skeleton", []) + if not animated_sets: + self.log.debug( + "No nodes found in skeletonAnim_SET. " + "Skipping validation of animated reference rig..." + ) + return + + for animated_reference in animated_sets: + is_referenced = cmds.referenceQuery( + animated_reference, isNodeReferenced=True) + if not bool(is_referenced): + raise PublishValidationError( + "All the content in skeletonAnim_SET" + " should be referenced nodes" + ) + invalid_controls = self.validate_controls(animated_sets) + if invalid_controls: + raise PublishValidationError( + "All the content in skeletonAnim_SET" + " should be transforms" + ) + + @classmethod + def validate_controls(self, set_members): + """Check if the controller set contains only accepted node types. + + Checks if all its set members are within the hierarchy of the root + Checks if the node types of the set members valid + + Args: + set_members: list of nodes of the skeleton_anim_set + hierarchy: list of nodes which reside under the root node + + Returns: + errors (list) + """ + + # Validate control types + invalid = [] + set_members = cmds.ls(set_members, long=True) + for node in set_members: + if cmds.nodeType(node) not in self.accepted_controllers: + invalid.append(node) + + return invalid diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_content.py b/openpype/hosts/maya/plugins/publish/validate_animation_content.py index 9dbb09a046..99acdc7b8f 100644 --- a/openpype/hosts/maya/plugins/publish/validate_animation_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_animation_content.py @@ -1,6 +1,9 @@ import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, + ValidateContentsOrder +) class ValidateAnimationContent(pyblish.api.InstancePlugin): @@ -47,4 +50,5 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Animation content is invalid. See log.") + raise PublishValidationError( + "Animation content is invalid. 
See log.") diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py index 5a527031be..6f5f03ab39 100644 --- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py @@ -6,6 +6,7 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -35,8 +36,10 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin): # if a deformer has been created on the shape invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found with mismatching " - "IDs: {0}".format(invalid)) + # TODO: Message formatting can be improved + raise PublishValidationError("Nodes found with mismatching " + "IDs: {0}".format(invalid), + title="Invalid node ids") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py index 6975d583bb..49913fa42b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py +++ b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py @@ -23,11 +23,13 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin): def process(self, instance): # we cannot ask this until user open render settings as - # `defaultArnoldRenderOptions` doesn't exists + # `defaultArnoldRenderOptions` doesn't exist + errors = [] + try: - relative_texture = cmds.getAttr( + absolute_texture = cmds.getAttr( "defaultArnoldRenderOptions.absolute_texture_paths") - relative_procedural = cmds.getAttr( + absolute_procedural = cmds.getAttr( "defaultArnoldRenderOptions.absolute_procedural_paths") texture_search_path = cmds.getAttr( "defaultArnoldRenderOptions.tspath" @@ -42,10 +44,11 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin): scene_dir, scene_basename = os.path.split(cmds.file(q=True, loc=True)) scene_name, _ = os.path.splitext(scene_basename) - assert self.maya_is_true(relative_texture) is not True, \ - ("Texture path is set to be absolute") - assert self.maya_is_true(relative_procedural) is not True, \ - ("Procedural path is set to be absolute") + + if self.maya_is_true(absolute_texture): + errors.append("Texture path is set to be absolute") + if self.maya_is_true(absolute_procedural): + errors.append("Procedural path is set to be absolute") anatomy = instance.context.data["anatomy"] @@ -57,15 +60,20 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin): for k in keys: paths.append("[{}]".format(k)) - self.log.info("discovered roots: {}".format(":".join(paths))) + self.log.debug("discovered roots: {}".format(":".join(paths))) - assert ":".join(paths) in texture_search_path, ( - "Project roots are not in texture_search_path" - ) + if ":".join(paths) not in texture_search_path: + errors.append(( + "Project roots {} are not in texture_search_path: {}" + ).format(paths, texture_search_path)) - assert ":".join(paths) in procedural_search_path, ( - "Project roots are not in procedural_search_path" - ) + if ":".join(paths) not in procedural_search_path: + errors.append(( + "Project roots {} are not in procedural_search_path: {}" + ).format(paths, procedural_search_path)) + + if errors: + raise PublishValidationError("\n".join(errors)) @classmethod def repair(cls, instance): diff --git 
a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py index 02464b2302..00588cd300 100644 --- a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py @@ -1,6 +1,9 @@ import pyblish.api import maya.cmds as cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + PublishValidationError +) class ValidateAssemblyName(pyblish.api.InstancePlugin): @@ -17,7 +20,7 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - cls.log.info("Checking name of {}".format(instance.name)) + cls.log.debug("Checking name of {}".format(instance.name)) content_instance = instance.data.get("setMembers", None) if not content_instance: @@ -47,5 +50,5 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found {} invalid named assembly " + raise PublishValidationError("Found {} invalid named assembly " "items".format(len(invalid))) diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py index 229da63c42..06577f38f7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py +++ b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py @@ -1,6 +1,8 @@ import pyblish.api import openpype.hosts.maya.api.action - +from openpype.pipeline.publish import ( + PublishValidationError +) class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin): """Ensure namespaces are not nested @@ -21,9 +23,9 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin): def process(self, instance): - self.log.info("Checking namespace for %s" % instance.name) + self.log.debug("Checking namespace for %s" % instance.name) if self.get_invalid(instance): - raise RuntimeError("Nested namespaces found") + raise PublishValidationError("Nested namespaces found") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py index d1bca4091b..a24455ebaa 100644 --- a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py +++ b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py @@ -1,9 +1,8 @@ import pyblish.api - from maya import cmds import openpype.hosts.maya.api.action -from openpype.pipeline.publish import RepairAction +from openpype.pipeline.publish import PublishValidationError, RepairAction class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): @@ -38,8 +37,9 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found {} invalid transforms of assembly " - "items".format(len(invalid))) + raise PublishValidationError( + ("Found {} invalid transforms of assembly " + "items").format(len(invalid))) @classmethod def get_invalid(cls, instance): @@ -90,6 +90,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): """ from qtpy import QtWidgets + from openpype.hosts.maya.api import lib # Store namespace in variable, cosmetics thingy diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py index 7ebd9d7d03..c76d979fbf 100644 --- 
a/openpype/hosts/maya/plugins/publish/validate_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py @@ -1,17 +1,16 @@ from collections import defaultdict -from maya import cmds - import pyblish.api +from maya import cmds from openpype.hosts.maya.api.lib import set_attribute from openpype.pipeline.publish import ( - RepairAction, - ValidateContentsOrder, -) + OptionalPyblishPluginMixin, PublishValidationError, RepairAction, + ValidateContentsOrder) -class ValidateAttributes(pyblish.api.InstancePlugin): +class ValidateAttributes(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Ensure attributes are consistent. Attributes to validate and their values comes from the @@ -32,13 +31,16 @@ class ValidateAttributes(pyblish.api.InstancePlugin): attributes = None def process(self, instance): + if not self.is_active(instance.data): + return + # Check for preset existence. if not self.attributes: return invalid = self.get_invalid(instance, compute=True) if invalid: - raise RuntimeError( + raise PublishValidationError( "Found attributes with invalid values: {}".format(invalid) ) diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py index 13ea53a357..e5745612e9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py @@ -1,8 +1,9 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidateContentsOrder) class ValidateCameraAttributes(pyblish.api.InstancePlugin): @@ -65,4 +66,5 @@ class ValidateCameraAttributes(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid camera attributes: %s" % invalid) + raise PublishValidationError( + "Invalid camera attributes: {}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py index 1ce8026fc2..767ac55718 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py @@ -1,8 +1,9 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidateContentsOrder) class ValidateCameraContents(pyblish.api.InstancePlugin): @@ -34,7 +35,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): cameras = cmds.ls(shapes, type='camera', long=True) if len(cameras) != 1: cls.log.error("Camera instance must have a single camera. 
" - "Found {0}: {1}".format(len(cameras), cameras)) + "Found {0}: {1}".format(len(cameras), cameras)) invalid.extend(cameras) # We need to check this edge case because returning an extended @@ -48,10 +49,12 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): "members: {}".format(members)) return members - raise RuntimeError("No cameras found in empty instance.") + raise PublishValidationError( + "No cameras found in empty instance.") if not cls.validate_shapes: - cls.log.info("not validating shapes in the content") + cls.log.debug("Not validating shapes in the camera content" + " because 'validate shapes' is disabled") return invalid # non-camera shapes @@ -60,13 +63,10 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): if shapes: shapes = list(shapes) cls.log.error("Camera instance should only contain camera " - "shapes. Found: {0}".format(shapes)) + "shapes. Found: {0}".format(shapes)) invalid.extend(shapes) - - invalid = list(set(invalid)) - return invalid def process(self, instance): @@ -74,5 +74,5 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid camera contents: " + raise PublishValidationError("Invalid camera contents: " "{0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py index 7ce3cca61a..173fee4179 100644 --- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py +++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py @@ -3,12 +3,15 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( - RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError, + RepairAction ) -class ValidateColorSets(pyblish.api.Validator): +class ValidateColorSets(pyblish.api.Validator, + OptionalPyblishPluginMixin): """Validate all meshes in the instance have unlocked normals These can be removed manually through: @@ -20,8 +23,9 @@ class ValidateColorSets(pyblish.api.Validator): hosts = ['maya'] families = ['model'] label = 'Mesh ColorSets' - actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - RepairAction] + actions = [ + openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction + ] optional = True @staticmethod @@ -40,12 +44,15 @@ class ValidateColorSets(pyblish.api.Validator): def process(self, instance): """Raise invalid when any of the meshes have ColorSets""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with " - "Color Sets: {0}".format(invalid)) + raise PublishValidationError( + message="Meshes found with Color Sets: {0}".format(invalid) + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py index 210ee4127c..24da091246 100644 --- a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py +++ b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py @@ -1,13 +1,14 @@ -from maya import cmds - import pyblish.api +from maya import cmds import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import maintained_selection -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder) -class 
ValidateCycleError(pyblish.api.InstancePlugin): +class ValidateCycleError(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate nodes produce no cycle errors.""" order = ValidateContentsOrder + 0.05 @@ -18,9 +19,13 @@ class ValidateCycleError(pyblish.api.InstancePlugin): optional = True def process(self, instance): + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes produce a cycle error: %s" % invalid) + raise PublishValidationError( + "Nodes produce a cycle error: {}".format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py index ccb351c880..a7043b8407 100644 --- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py +++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py @@ -4,7 +4,8 @@ from maya import cmds from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, - PublishValidationError + PublishValidationError, + OptionalPyblishPluginMixin ) from openpype.hosts.maya.api.lib_rendersetup import ( get_attr_overrides, @@ -13,7 +14,8 @@ from openpype.hosts.maya.api.lib_rendersetup import ( from maya.app.renderSetup.model.override import AbsOverride -class ValidateFrameRange(pyblish.api.InstancePlugin): +class ValidateFrameRange(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validates the frame ranges. This is an optional validator checking if the frame range on instance @@ -40,12 +42,15 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): exclude_families = [] def process(self, instance): + if not self.is_active(instance.data): + return + context = instance.context if instance.data.get("tileRendering"): - self.log.info(( + self.log.debug( "Skipping frame range validation because " "tile rendering is enabled." 
- )) + ) return frame_start_handle = int(context.data.get("frameStartHandle")) @@ -102,10 +107,12 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): "({}).".format(label.title(), values[1], values[0]) ) - for e in errors: - self.log.error(e) + if errors: + report = "Frame range settings are incorrect.\n\n" + for error in errors: + report += "- {}\n\n".format(error) - assert len(errors) == 0, ("Frame range settings are incorrect") + raise PublishValidationError(report, title="Frame Range incorrect") @classmethod def repair(cls, instance): @@ -150,7 +157,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): def repair_renderlayer(cls, instance): """Apply frame range in render settings""" - layer = instance.data["setMembers"] + layer = instance.data["renderlayer"] context = instance.context start_attr = "defaultRenderGlobals.startFrame" diff --git a/openpype/hosts/maya/plugins/publish/validate_glsl_material.py b/openpype/hosts/maya/plugins/publish/validate_glsl_material.py index 10c48da404..3b386c3def 100644 --- a/openpype/hosts/maya/plugins/publish/validate_glsl_material.py +++ b/openpype/hosts/maya/plugins/publish/validate_glsl_material.py @@ -75,7 +75,7 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin): """ meshes = cmds.ls(instance, type="mesh", long=True) - cls.log.info("meshes: {}".format(meshes)) + cls.log.debug("meshes: {}".format(meshes)) # load the glsl shader plugin cmds.loadPlugin("glslShader", quiet=True) @@ -96,8 +96,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin): cls.log.warning("ogsfx shader file " "not found in {}".format(ogsfx_path)) - cls.log.info("Find the ogsfx shader file in " - "default maya directory...") + cls.log.debug("Searching the ogsfx shader file in " + "default maya directory...") # re-direct to search the ogsfx path in maya_dir ogsfx_path = os.getenv("MAYA_APP_DIR") + ogsfx_path if not os.path.exists(ogsfx_path): @@ -130,8 +130,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin): @classmethod def pbs_shader_conversion(cls, main_shader, glsl): - cls.log.info("StringrayPBS detected " - "-> Can do texture conversion") + cls.log.debug("StringrayPBS detected " + "-> Can do texture conversion") for shader in main_shader: # get the file textures related to the PBS Shader @@ -168,8 +168,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin): @classmethod def arnold_shader_conversion(cls, main_shader, glsl): - cls.log.info("aiStandardSurface detected " - "-> Can do texture conversion") + cls.log.debug("aiStandardSurface detected " + "-> Can do texture conversion") for shader in main_shader: # get the file textures related to the PBS Shader diff --git a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py index 53c2cf548a..da065fcf94 100644 --- a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py +++ b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py @@ -4,7 +4,8 @@ from maya import cmds import pyblish.api from openpype.pipeline.publish import ( RepairAction, - ValidateContentsOrder + ValidateContentsOrder, + PublishValidationError ) @@ -21,7 +22,7 @@ class ValidateGLSLPlugin(pyblish.api.InstancePlugin): def process(self, instance): if not cmds.pluginInfo("maya2glTF", query=True, loaded=True): - raise RuntimeError("maya2glTF is not loaded") + raise PublishValidationError("maya2glTF is not loaded") @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py 
b/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py deleted file mode 100644 index f870c9f8c4..0000000000 --- a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py +++ /dev/null @@ -1,60 +0,0 @@ -from maya import cmds - -import pyblish.api -from openpype.pipeline.publish import ( - ValidateContentsOrder, PublishValidationError, RepairAction -) -from openpype.pipeline import discover_legacy_creator_plugins -from openpype.hosts.maya.api.lib import imprint - - -class ValidateInstanceAttributes(pyblish.api.InstancePlugin): - """Validate Instance Attributes. - - New attributes can be introduced as new features come in. Old instances - will need to be updated with these attributes for the documentation to make - sense, and users do not have to recreate the instances. - """ - - order = ValidateContentsOrder - hosts = ["maya"] - families = ["*"] - label = "Instance Attributes" - plugins_by_family = { - p.family: p for p in discover_legacy_creator_plugins() - } - actions = [RepairAction] - - @classmethod - def get_missing_attributes(self, instance): - plugin = self.plugins_by_family[instance.data["family"]] - subset = instance.data["subset"] - asset = instance.data["asset"] - objset = instance.data["objset"] - - missing_attributes = {} - for key, value in plugin(subset, asset).data.items(): - if not cmds.objExists("{}.{}".format(objset, key)): - missing_attributes[key] = value - - return missing_attributes - - def process(self, instance): - objset = instance.data.get("objset") - if objset is None: - self.log.debug( - "Skipping {} because no objectset found.".format(instance) - ) - return - - missing_attributes = self.get_missing_attributes(instance) - if missing_attributes: - raise PublishValidationError( - "Missing attributes on {}:\n{}".format( - objset, missing_attributes - ) - ) - - @classmethod - def repair(cls, instance): - imprint(instance.data["objset"], cls.get_missing_attributes(instance)) diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py index 63849cfd12..7234f5a025 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py @@ -1,6 +1,9 @@ import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): @@ -14,18 +17,23 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): invalid = list() - if not instance.data["setMembers"]: + if not instance.data.get("setMembers"): objectset_name = instance.data['name'] invalid.append(objectset_name) return invalid def process(self, instance): - # Allow renderlayer and workfile to be empty - skip_families = ["workfile", "renderlayer", "rendersetup"] + # Allow renderlayer, rendersetup and workfile to be empty + skip_families = {"workfile", "renderlayer", "rendersetup"} if instance.data.get("family") in skip_families: return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Empty instances found: {0}".format(invalid)) + # Invalid will always be a single entry, we log the single name + name = invalid[0] + raise PublishValidationError( + title="Empty instance", + message="Instance '{0}' is empty".format(name) + ) diff --git 
a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py index 41bb414829..4ded57137c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py @@ -3,92 +3,19 @@ from __future__ import absolute_import import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishValidationError, + OptionalPyblishPluginMixin +) from maya import cmds -class SelectInvalidInstances(pyblish.api.Action): - """Select invalid instances in Outliner.""" - - label = "Select Instances" - icon = "briefcase" - on = "failed" - - def process(self, context, plugin): - """Process invalid validators and select invalid instances.""" - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - if instances: - self.log.info( - "Selecting invalid nodes: %s" % ", ".join( - [str(x) for x in instances] - ) - ) - self.select(instances) - else: - self.log.info("No invalid nodes found.") - self.deselect() - - def select(self, instances): - cmds.select(instances, replace=True, noExpand=True) - - def deselect(self): - cmds.select(deselect=True) - - -class RepairSelectInvalidInstances(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - # Get the errored instances - failed = [] - for result in context.data["results"]: - if result["error"] is None: - continue - if result["instance"] is None: - continue - if result["instance"] in failed: - continue - if result["plugin"] != plugin: - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - context_asset = context.data["assetEntity"]["name"] - for instance in instances: - self.set_attribute(instance, context_asset) - - def set_attribute(self, instance, context_asset): - cmds.setAttr( - instance.data.get("name") + ".asset", - context_asset, - type="string" - ) - - -class ValidateInstanceInContext(pyblish.api.InstancePlugin): +class ValidateInstanceInContext(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validator to check if instance asset match context asset. 
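This file's rewrite condenses to the pattern the changeset rolls out across all of these validators: shared `SelectInvalidAction`/`RepairAction` actions instead of per-file action classes, an `is_active` gate from `OptionalPyblishPluginMixin`, and `PublishValidationError` in place of bare `assert`/`RuntimeError`. A minimal self-contained sketch of that pattern (the plug-in name and its check are placeholders; the imports are the ones these files use):

```python
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
    PublishValidationError,
    OptionalPyblishPluginMixin,
)


class ValidateExample(pyblish.api.InstancePlugin,
                      OptionalPyblishPluginMixin):
    """Illustrative validator using the shared actions and error type."""

    order = ValidateContentsOrder
    hosts = ["maya"]
    label = "Example"
    optional = True
    actions = [
        openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction
    ]

    def process(self, instance):
        if not self.is_active(instance.data):
            return  # artist disabled this optional check
        invalid = self.get_invalid(instance)
        if invalid:
            # Surfaces as a titled, artist-readable report in the publisher
            raise PublishValidationError(
                message="Invalid nodes: {}".format(invalid),
                title="Example validation",
                description="## Example validation\nHow to fix goes here."
            )

    @classmethod
    def get_invalid(cls, instance):
        return []  # gather offending nodes here

    @classmethod
    def repair(cls, instance):
        pass  # called by RepairAction from the UI
```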
When working in per-shot style you always publish data in context of @@ -102,10 +29,49 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin): label = "Instance in same Context" optional = True hosts = ["maya"] - actions = [SelectInvalidInstances, RepairSelectInvalidInstances] + actions = [ + openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction + ] def process(self, instance): + if not self.is_active(instance.data): + return + asset = instance.data.get("asset") - context_asset = instance.context.data["assetEntity"]["name"] - msg = "{} has asset {}".format(instance.name, asset) - assert asset == context_asset, msg + context_asset = self.get_context_asset(instance) + if asset != context_asset: + raise PublishValidationError( + message=( + "Instance '{}' publishes to different asset than current " + "context: {}. Current context: {}".format( + instance.name, asset, context_asset + ) + ), + description=( + "## Publishing to a different asset\n" + "There are publish instances present which are publishing " + "into a different asset than your current context.\n\n" + "Usually this is not what you want but there can be cases " + "where you might want to publish into another asset or " + "shot. If that's the case you can disable the validation " + "on the instance to ignore it." + ) + ) + + @classmethod + def get_invalid(cls, instance): + return [instance.data["instance_node"]] + + @classmethod + def repair(cls, instance): + context_asset = cls.get_context_asset(instance) + instance_node = instance.data["instance_node"] + cmds.setAttr( + "{}.asset".format(instance_node), + context_asset, + type="string" + ) + + @staticmethod + def get_context_asset(instance): + return instance.context.data["assetEntity"]["name"] diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py index bb3dde761c..69e16efe57 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py @@ -2,7 +2,10 @@ import pyblish.api import string import six -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) # Allow only characters, numbers and underscore allowed = set(string.ascii_lowercase + @@ -28,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin): # Ensure subset data if subset is None: - raise RuntimeError("Instance is missing subset " + raise PublishValidationError("Instance is missing subset " "name: {0}".format(subset)) if not isinstance(subset, six.string_types): diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py index 32abe91f48..236adfb03d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py @@ -1,7 +1,8 @@ import maya.cmds as cmds - import pyblish.api + from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import PublishValidationError class ValidateInstancerContent(pyblish.api.InstancePlugin): @@ -20,7 +21,7 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin): members = instance.data['setMembers'] export_members = instance.data['exactExportMembers'] - self.log.info("Contents {0}".format(members)) + self.log.debug("Contents {0}".format(members)) if not len(members) == len(cmds.ls(members, type="instancer")): 
self.log.error("Instancer can only contain instancers") @@ -52,7 +53,8 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin): error = True if error: - raise RuntimeError("Instancer Content is invalid. See log.") + raise PublishValidationError( + "Instancer Content is invalid. See log.") def check_geometry_hidden(self, export_members): diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py index 3514cf0a98..714c6229d6 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py +++ b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py @@ -1,8 +1,9 @@ import os import re + import pyblish.api -VERBOSE = False +from openpype.pipeline.publish import PublishValidationError def is_cache_resource(resource): @@ -70,9 +71,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin): xml = all_files.pop(0) assert xml.endswith(".xml") - if VERBOSE: - cls.log.info("Checking: {0}".format(all_files)) - # Ensure all files exist (including ticks) # The remainder file paths should be the .mcx or .mcc files valdidate_files(all_files) @@ -126,8 +124,8 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin): # for the frames required by the time range. if ticks: ticks = list(sorted(ticks)) - cls.log.info("Found ticks: {0} " - "(substeps: {1})".format(ticks, len(ticks))) + cls.log.debug("Found ticks: {0} " + "(substeps: {1})".format(ticks, len(ticks))) # Check all frames except the last since we don't # require subframes after our time range. @@ -164,5 +162,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin): if invalid: self.log.error("Invalid nodes: {0}".format(invalid)) - raise RuntimeError("Invalid particle caches in instance. " - "See logs for details.") + raise PublishValidationError( + ("Invalid particle caches in instance. 
" + "See logs for details.")) diff --git a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py index 624074aaf9..eac13053db 100644 --- a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py +++ b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py @@ -2,7 +2,10 @@ import os import pyblish.api import maya.cmds as cmds -from openpype.pipeline.publish import RepairContextAction +from openpype.pipeline.publish import ( + RepairContextAction, + PublishValidationError +) class ValidateLoadedPlugin(pyblish.api.ContextPlugin): @@ -35,7 +38,7 @@ class ValidateLoadedPlugin(pyblish.api.ContextPlugin): invalid = self.get_invalid(context) if invalid: - raise RuntimeError( + raise PublishValidationError( "Found forbidden plugin name: {}".format(", ".join(invalid)) ) diff --git a/openpype/hosts/maya/plugins/publish/validate_look_contents.py b/openpype/hosts/maya/plugins/publish/validate_look_contents.py index 2d38099f0f..433d997840 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_contents.py @@ -1,6 +1,11 @@ import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, + ValidateContentsOrder +) + + from maya import cmds # noqa @@ -28,19 +33,16 @@ class ValidateLookContents(pyblish.api.InstancePlugin): """Process all the nodes in the instance""" if not instance[:]: - raise RuntimeError("Instance is empty") + raise PublishValidationError("Instance is empty") invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("'{}' has invalid look " + raise PublishValidationError("'{}' has invalid look " "content".format(instance.name)) @classmethod def get_invalid(cls, instance): """Get all invalid nodes""" - cls.log.info("Validating look content for " - "'{}'".format(instance.name)) - # check if data has the right attributes and content attributes = cls.validate_lookdata_attributes(instance) # check the looks for ID diff --git a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py index 20f561a892..0109f6ebd5 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py @@ -1,7 +1,10 @@ from maya import cmds import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin): @@ -56,4 +59,4 @@ class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin): invalid.append(plug) if invalid: - raise RuntimeError("Invalid connections.") + raise PublishValidationError("Invalid connections.") diff --git a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py index a266a0fd74..5075d4050d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py @@ -6,6 +6,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -30,7 +31,7 
@@ class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid nodes %s" % (invalid,)) + raise PublishValidationError("Invalid nodes %s" % (invalid,)) @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py index f81e511ff3..4e01b55249 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py @@ -1,8 +1,10 @@ from collections import defaultdict import pyblish.api + import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidatePipelineOrder) class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin): @@ -33,8 +35,9 @@ class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Members found without non-unique IDs: " - "{0}".format(invalid)) + raise PublishValidationError( + ("Members found without non-unique IDs: " + "{0}").format(invalid)) @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py index db6aadae8d..231331411b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py @@ -2,7 +2,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin): @@ -37,7 +40,7 @@ class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid node relationships found: " + raise PublishValidationError("Invalid node relationships found: " "{0}".format(invalid)) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py index 8434ddde04..657bab0479 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_sets.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py @@ -1,7 +1,10 @@ import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateLookSets(pyblish.api.InstancePlugin): @@ -48,16 +51,13 @@ class ValidateLookSets(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("'{}' has invalid look " + raise PublishValidationError("'{}' has invalid look " "content".format(instance.name)) @classmethod def get_invalid(cls, instance): """Get all invalid nodes""" - cls.log.info("Validating look content for " - "'{}'".format(instance.name)) - relationships = instance.data["lookData"]["relationships"] invalid = [] diff --git a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py index 9b57b06ee7..dbe7a70e6a 100644 --- 
a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py @@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -27,7 +28,7 @@ class ValidateShadingEngine(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( + raise PublishValidationError( "Found shading engines with incorrect naming:" "\n{}".format(invalid) ) diff --git a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py index 788e440d12..acd761a944 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py @@ -1,8 +1,9 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidateContentsOrder) class ValidateSingleShader(pyblish.api.InstancePlugin): @@ -23,9 +24,9 @@ class ValidateSingleShader(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found shapes which don't have a single shader " - "assigned: " - "\n{}".format(invalid)) + raise PublishValidationError( + ("Found shapes which don't have a single shader " + "assigned:\n{}").format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_maya_units.py b/openpype/hosts/maya/plugins/publish/validate_maya_units.py index 011df0846c..ae6dc093a9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_maya_units.py +++ b/openpype/hosts/maya/plugins/publish/validate_maya_units.py @@ -7,6 +7,7 @@ from openpype.pipeline.context_tools import get_current_project_asset from openpype.pipeline.publish import ( RepairContextAction, ValidateSceneOrder, + PublishXmlValidationError ) @@ -26,6 +27,30 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin): validate_fps = True + nice_message_format = ( + "- {setting} must be {required_value}. " + "Your scene is set to {current_value}" + ) + log_message_format = ( + "Maya scene {setting} must be '{required_value}'. " + "Current value is '{current_value}'." 
+ ) + + @classmethod + def apply_settings(cls, project_settings): + """Apply project settings to creator""" + settings = ( + project_settings["maya"]["publish"]["ValidateMayaUnits"] + ) + + cls.validate_linear_units = settings.get("validate_linear_units", + cls.validate_linear_units) + cls.linear_units = settings.get("linear_units", cls.linear_units) + cls.validate_angular_units = settings.get("validate_angular_units", + cls.validate_angular_units) + cls.angular_units = settings.get("angular_units", cls.angular_units) + cls.validate_fps = settings.get("validate_fps", cls.validate_fps) + def process(self, context): # Collected units @@ -34,15 +59,14 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin): fps = context.data.get('fps') - # TODO replace query with using 'context.data["assetEntity"]' - asset_doc = get_current_project_asset() + asset_doc = context.data["assetEntity"] asset_fps = mayalib.convert_to_maya_fps(asset_doc["data"]["fps"]) self.log.info('Units (linear): {0}'.format(linearunits)) self.log.info('Units (angular): {0}'.format(angularunits)) self.log.info('Units (time): {0} FPS'.format(fps)) - valid = True + invalid = [] # Check if units are correct if ( @@ -50,26 +74,43 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin): and linearunits and linearunits != self.linear_units ): - self.log.error("Scene linear units must be {}".format( - self.linear_units)) - valid = False + invalid.append({ + "setting": "Linear units", + "required_value": self.linear_units, + "current_value": linearunits + }) if ( self.validate_angular_units and angularunits and angularunits != self.angular_units ): - self.log.error("Scene angular units must be {}".format( - self.angular_units)) - valid = False + invalid.append({ + "setting": "Angular units", + "required_value": self.angular_units, + "current_value": angularunits + }) if self.validate_fps and fps and fps != asset_fps: - self.log.error( - "Scene must be {} FPS (now is {})".format(asset_fps, fps)) - valid = False + invalid.append({ + "setting": "FPS", + "required_value": asset_fps, + "current_value": fps + }) - if not valid: - raise RuntimeError("Invalid units set.") + if invalid: + + issues = [] + for data in invalid: + self.log.error(self.log_message_format.format(**data)) + issues.append(self.nice_message_format.format(**data)) + issues = "\n".join(issues) + + raise PublishXmlValidationError( + plugin=self, + message="Invalid maya scene units", + formatting_data={"issues": issues} + ) @classmethod def repair(cls, context): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py index a580a1c787..bde78a98b8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py @@ -10,12 +10,15 @@ from openpype.hosts.maya.api.lib import ( set_attribute ) from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, RepairAction, ValidateMeshOrder, + PublishValidationError ) -class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin): +class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate the mesh has default Arnold attributes. It compares all Arnold attributes from a default mesh. 
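The class-level `arnold_mesh_defaults` cache introduced in this hunk avoids rebuilding a template mesh for every processed instance. Stripped of the Maya specifics, the memoization reads as follows (a sketch; names and values are illustrative):

```python
class DefaultsCache(object):
    _defaults = None  # becomes a dict after the first computation

    @classmethod
    def get_default_attributes(cls):
        if cls._defaults is not None:
            return cls._defaults  # cache hit: skip the expensive rebuild
        # Stand-in for creating a temporary mesh and reading its attributes
        defaults = {"aiSubdivType": 0, "aiOpaque": True}
        cls._defaults = defaults  # assign cache on the class
        return defaults
```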
This is to ensure @@ -30,29 +33,37 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin): openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction ] + optional = True - if cmds.getAttr( - "defaultRenderGlobals.currentRenderer").lower() == "arnold": - active = True - else: - active = False + + # cache (will be `dict` when cached) + arnold_mesh_defaults = None @classmethod def get_default_attributes(cls): + + if cls.arnold_mesh_defaults is not None: + # Use from cache + return cls.arnold_mesh_defaults + # Get default arnold attribute values for mesh type. defaults = {} with delete_after() as tmp: - transform = cmds.createNode("transform") + transform = cmds.createNode("transform", skipSelect=True) tmp.append(transform) - mesh = cmds.createNode("mesh", parent=transform) - for attr in cmds.listAttr(mesh, string="ai*"): + mesh = cmds.createNode("mesh", parent=transform, skipSelect=True) + arnold_attributes = cmds.listAttr(mesh, + string="ai*", + fromPlugin=True) or [] + for attr in arnold_attributes: plug = "{}.{}".format(mesh, attr) try: defaults[attr] = get_attribute(plug) - except RuntimeError: + except PublishValidationError: cls.log.debug("Ignoring arnold attribute: {}".format(attr)) + cls.arnold_mesh_defaults = defaults # assign cache return defaults @classmethod @@ -101,10 +112,16 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin): ) def process(self, instance): + if not self.is_active(instance.data): + return + + if not cmds.pluginInfo("mtoa", query=True, loaded=True): + # Arnold attributes only exist if plug-in is loaded + return invalid = self.get_invalid_attributes(instance, compute=True) if invalid: - raise RuntimeError( + raise PublishValidationError( "Non-default Arnold attributes found in instance:" " {0}".format(invalid) ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py index 848d66c4ae..c3264f3d98 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py @@ -4,7 +4,8 @@ import pyblish.api import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, - ValidateMeshOrder + ValidateMeshOrder, + PublishValidationError ) @@ -49,6 +50,6 @@ class ValidateMeshEmpty(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( + raise PublishValidationError( "Meshes found in instance without any vertices: %s" % invalid ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py index b7836b3e92..c382d1b983 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py @@ -2,11 +2,16 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) from openpype.hosts.maya.api.lib import len_flattened -class ValidateMeshHasUVs(pyblish.api.InstancePlugin): +class ValidateMeshHasUVs(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate the current mesh has UVs. 
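`ValidateMayaUnits` above also gains an `apply_settings` classmethod, the hook through which project settings are pushed onto plug-in class attributes before publishing, paired with `PublishXmlValidationError` for the report. Roughly, under the settings shape shown above (the plug-in name and keys here are placeholders):

```python
import pyblish.api
from openpype.pipeline.publish import PublishXmlValidationError


class ValidateExampleUnits(pyblish.api.ContextPlugin):
    label = "Example Units"
    validate_fps = True  # class default, may be overridden per project

    @classmethod
    def apply_settings(cls, project_settings):
        settings = project_settings["maya"]["publish"]["ValidateExampleUnits"]
        # .get() keeps the class default when the key is absent
        cls.validate_fps = settings.get("validate_fps", cls.validate_fps)

    def process(self, context):
        issues = []  # e.g. "- FPS must be 25. Your scene is set to 24"
        if issues:
            # The Xml variant looks up a help description for this plug-in
            # and substitutes the formatting data into its template.
            raise PublishXmlValidationError(
                plugin=self,
                message="Invalid maya scene units",
                formatting_data={"issues": "\n".join(issues)}
            )
```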
It validates whether the current UV set has non-zero UVs and @@ -66,8 +71,19 @@ class ValidateMeshHasUVs(pyblish.api.InstancePlugin): return invalid def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Meshes found in instance without " - "valid UVs: {0}".format(invalid)) + + names = "<br>".join( + " - {}".format(node) for node in invalid + ) + + raise PublishValidationError( + title="Mesh has missing UVs", + message="Model meshes are required to have UVs.<br><br>" + "Meshes detected with invalid or missing UVs:<br>" + "{0}".format(names) + )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py index 664e2b5772..48b4d0f557 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py @@ -2,7 +2,17 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateMeshNoNegativeScale(pyblish.api.Validator): @@ -46,5 +56,9 @@ class ValidateMeshNoNegativeScale(pyblish.api.Validator): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with negative " - "scale: {0}".format(invalid)) + raise PublishValidationError( + "Meshes found with negative scale:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Negative scale" + )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py index d7711da722..6fd63fb29f 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py @@ -2,7 +2,17 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateMeshNonManifold(pyblish.api.Validator): @@ -16,7 +26,7 @@ class ValidateMeshNonManifold(pyblish.api.Validator): order = ValidateMeshOrder hosts = ['maya'] families = ['model'] - label = 'Mesh Non-Manifold Vertices/Edges' + label = 'Mesh Non-Manifold Edges/Vertices' actions = [openpype.hosts.maya.api.action.SelectInvalidAction] @staticmethod @@ -38,5 +48,9 @@ class ValidateMeshNonManifold(pyblish.api.Validator): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with non-manifold " - "edges/vertices: {0}".format(invalid)) + raise PublishValidationError( + "Meshes found with non-manifold edges/vertices:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Non-Manifold Edges/Vertices" + )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py index b49ba85648..5ec6e5779b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py @@ -3,10 +3,15 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) -class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin): +class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin):
"""Validate meshes don't have edges with a zero length. Based on Maya's polyCleanup 'Edges with zero length'. @@ -65,7 +70,14 @@ class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin): def process(self, instance): """Process all meshes""" + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Meshes found with zero " - "edge length: {0}".format(invalid)) + label = "Meshes found with zero edge length" + raise PublishValidationError( + message="{}: {}".format(label, invalid), + title=label, + description="{}:\n- ".format(label) + "\n- ".join(invalid) + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py index 1b754a9829..7855e79119 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py @@ -6,10 +6,20 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError ) -class ValidateMeshNormalsUnlocked(pyblish.api.Validator): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateMeshNormalsUnlocked(pyblish.api.Validator, + OptionalPyblishPluginMixin): """Validate all meshes in the instance have unlocked normals These can be unlocked manually through: @@ -47,12 +57,18 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator): def process(self, instance): """Raise invalid when any of the meshes have locked normals""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with " - "locked normals: {0}".format(invalid)) + raise PublishValidationError( + "Meshes found with locked normals:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Locked normals" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py index 7dd66eed6c..88e1507dd3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py @@ -6,7 +6,18 @@ import maya.api.OpenMaya as om import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class GetOverlappingUVs(object): @@ -225,7 +236,8 @@ class GetOverlappingUVs(object): return faces -class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): +class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """ Validate the current mesh overlapping UVs. It validates whether the current UVs are overlapping or not. 
@@ -281,9 +293,14 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): return instance.data.get("overlapping_faces", []) def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance, compute=True) if invalid: - raise RuntimeError( - "Meshes found with overlapping UVs: {0}".format(invalid) + raise PublishValidationError( + "Meshes found with overlapping UVs:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Overlapping UVs" ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py index 2a0abe975c..1db7613999 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py @@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + PublishValidationError ) @@ -102,7 +103,7 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Shapes found with invalid shader " + raise PublishValidationError("Shapes found with invalid shader " "connections: {0}".format(invalid)) @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py index faa360380e..46364735b9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py @@ -6,10 +6,12 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin ) -class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): +class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Warn on multiple UV sets existing for each polygon mesh. On versions prior to Maya 2017 this will force no multiple uv sets because @@ -47,6 +49,8 @@ class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance 'objectSet'""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py index 40ddb916ca..116fecbcba 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py @@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin ) -class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): +class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate model's default set exists and is named 'map1'. 
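`get_invalid` for the map1 validator sits outside this hunk; one plausible shape of the check it performs, using stock `maya.cmds` calls (an assumption for illustration, not the file's exact code):

```python
from maya import cmds


def get_meshes_without_map1(meshes):
    """Return meshes whose first UV set is not named 'map1' (illustrative)."""
    invalid = []
    for mesh in meshes:
        uv_sets = cmds.polyUVSet(mesh, query=True, allUVSets=True) or []
        if not uv_sets or uv_sets[0] != "map1":
            invalid.append(mesh)
    return invalid
```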
In Maya meshes by default have a uv set named "map1" that cannot be @@ -48,6 +50,8 @@ class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance 'objectSet'""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py index d885158004..7167859444 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py @@ -1,12 +1,10 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ( - RepairAction, - ValidateMeshOrder, -) from openpype.hosts.maya.api.lib import len_flattened +from openpype.pipeline.publish import ( + PublishValidationError, RepairAction, ValidateMeshOrder) class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): @@ -40,8 +38,9 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): # This fix only works in Maya 2016 EXT2 and newer if float(cmds.about(version=True)) <= 2016.0: - raise RuntimeError("Repair not supported in Maya version below " - "2016 EXT 2") + raise PublishValidationError( + ("Repair not supported in Maya version below " + "2016 EXT 2")) invalid = cls.get_invalid(instance) for node in invalid: @@ -76,5 +75,6 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Meshes found in instance with vertices that " - "have no edges: %s" % invalid) + raise PublishValidationError( + ("Meshes found in instance with vertices that " + "have no edges: {}").format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py index 723346a285..19373efad9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateModelContent(pyblish.api.InstancePlugin): @@ -28,7 +31,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin): content_instance = instance.data.get("setMembers", None) if not content_instance: cls.log.error("Instance has no nodes!") - return True + return [instance.data["name"]] # All children will be included in the extracted export so we also # validate *all* descendents of the set members and we skip any @@ -60,15 +63,10 @@ class ValidateModelContent(pyblish.api.InstancePlugin): return True # Top group - assemblies = cmds.ls(content_instance, assemblies=True, long=True) - if len(assemblies) != 1 and cls.validate_top_group: + top_parents = set([x.split("|")[1] for x in content_instance]) + if cls.validate_top_group and len(top_parents) != 1: cls.log.error("Must have exactly one top group") - return assemblies - if len(assemblies) == 0: - cls.log.warning("No top group found. " - "(Are there objects in the instance?" 
- " Or is it parented in another group?)") - return assemblies or True + return top_parents def _is_visible(node): """Return whether node is visible""" @@ -79,11 +77,11 @@ class ValidateModelContent(pyblish.api.InstancePlugin): visibility=True) # The roots must be visible (the assemblies) - for assembly in assemblies: - if not _is_visible(assembly): - cls.log.error("Invisible assembly (root node) is not " - "allowed: {0}".format(assembly)) - invalid.add(assembly) + for parent in top_parents: + if not _is_visible(parent): + cls.log.error("Invisible parent (root node) is not " + "allowed: {0}".format(parent)) + invalid.add(parent) # Ensure at least one shape is visible if not any(_is_visible(shape) for shape in shapes): @@ -97,4 +95,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Model content is invalid. See log.") + raise PublishValidationError( + title="Model content is invalid", + message="See log for more details" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py index 0e7adc640f..f4c1aa39c7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py @@ -1,22 +1,24 @@ # -*- coding: utf-8 -*- """Validate model nodes names.""" import os -import re import platform +import re +import gridfs +import pyblish.api from maya import cmds -import pyblish.api -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import ValidateContentsOrder import openpype.hosts.maya.api.action +from openpype.client.mongo import OpenPypeMongoConnection from openpype.hosts.maya.api.shader_definition_editor import ( DEFINITION_FILENAME) -from openpype.client.mongo import OpenPypeMongoConnection -import gridfs +from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder) -class ValidateModelName(pyblish.api.InstancePlugin): +class ValidateModelName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate name of model starts with (somename)_###_(materialID)_GEO @@ -123,7 +125,7 @@ class ValidateModelName(pyblish.api.InstancePlugin): r = re.compile(regex) for obj in filtered: - cls.log.info("testing: {}".format(obj)) + cls.log.debug("testing: {}".format(obj)) m = r.match(obj) if m is None: cls.log.error("invalid name on: {}".format(obj)) @@ -148,7 +150,11 @@ class ValidateModelName(pyblish.api.InstancePlugin): def process(self, instance): """Plugin entry point.""" + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Model naming is invalid. See the log.") + raise PublishValidationError( + "Model naming is invalid. 
See the log.") diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py index 04db5a061b..ad0fcafc56 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py @@ -1,14 +1,19 @@ import os import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) COLOUR_SPACES = ['sRGB', 'linear', 'auto'] MIPMAP_EXTENSIONS = ['tdl'] -class ValidateMvLookContents(pyblish.api.InstancePlugin): +class ValidateMvLookContents(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): order = ValidateContentsOrder families = ['mvLook'] hosts = ['maya'] @@ -23,19 +28,22 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin): enforced_intents = ['-', 'Final'] def process(self, instance): + if not self.is_active(instance.data): + return + intent = instance.context.data['intent']['value'] publishMipMap = instance.data["publishMipMap"] enforced = True if intent in self.enforced_intents: - self.log.info("This validation will be enforced: '{}'" - .format(intent)) + self.log.debug("This validation will be enforced: '{}'" + .format(intent)) else: enforced = False - self.log.info("This validation will NOT be enforced: '{}'" - .format(intent)) + self.log.debug("This validation will NOT be enforced: '{}'" + .format(intent)) if not instance[:]: - raise RuntimeError("Instance is empty") + raise PublishValidationError("Instance is empty") invalid = set() @@ -62,13 +70,14 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin): if enforced: invalid.add(node) self.log.error(msg) - raise RuntimeError(msg) + raise PublishValidationError(msg) else: self.log.warning(msg) if invalid: - raise RuntimeError("'{}' has invalid look " - "content".format(instance.name)) + raise PublishValidationError( + "'{}' has invalid look content".format(instance.name) + ) def valid_file(self, fname): self.log.debug("Checking validity of '{}'".format(fname)) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_animation.py b/openpype/hosts/maya/plugins/publish/validate_no_animation.py index 2e7cafe4ab..9ff189cf83 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_animation.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_animation.py @@ -2,10 +2,22 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) -class ValidateNoAnimation(pyblish.api.Validator): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateNoAnimation(pyblish.api.Validator, + OptionalPyblishPluginMixin): """Ensure no keyframes on nodes in the Instance. 
Even though a Model would extract without animCurves correctly this avoids @@ -22,10 +34,17 @@ class ValidateNoAnimation(pyblish.api.Validator): actions = [openpype.hosts.maya.api.action.SelectInvalidAction] def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Keyframes found: {0}".format(invalid)) + raise PublishValidationError( + "Keyframes found on:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Keyframes on model" + ) @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py index a4fb938d43..f0aa9261f7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py @@ -2,7 +2,17 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): @@ -28,4 +38,10 @@ class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): def process(self, instance): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) - assert not invalid, "Default cameras found: {0}".format(invalid) + if invalid: + raise PublishValidationError( + "Default cameras found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Default cameras" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py index 0ff03f9165..13eeae5859 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py @@ -4,11 +4,19 @@ import pyblish.api from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) import openpype.hosts.maya.api.action +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + def get_namespace(node_name): # ensure only node's name (not parent path) node_name = node_name.rsplit("|", 1)[-1] @@ -36,7 +44,12 @@ class ValidateNoNamespace(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Namespaces found: {0}".format(invalid)) + raise PublishValidationError( + "Namespaces found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Namespaces in model" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py index f77fc81dc1..187135fdf3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py @@ -5,9 +5,17 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as 
bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + def has_shape_children(node): # Check if any descendants allDescendents = cmds.listRelatives(node, @@ -64,7 +72,12 @@ class ValidateNoNullTransforms(pyblish.api.InstancePlugin): """Process all the transform nodes in the instance """ invalid = self.get_invalid(instance) if invalid: - raise ValueError("Empty transforms found: {0}".format(invalid)) + raise PublishValidationError( + "Empty transforms found without shapes:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Empty transforms" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py index 2cfdc28128..6ae634be24 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py @@ -2,10 +2,22 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) -class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateNoUnknownNodes(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Checks to see if there are any unknown nodes in the instance. This often happens if nodes from plug-ins are used but are not available @@ -29,7 +41,14 @@ class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise ValueError("Unknown nodes found: {0}".format(invalid)) + raise PublishValidationError( + "Unknown nodes found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Unknown nodes" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py index 27e5e6a006..22fd1edc29 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py @@ -1,5 +1,13 @@ import pyblish.api from maya import cmds +from openpype.pipeline.publish import PublishValidationError + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateNoVRayMesh(pyblish.api.InstancePlugin): @@ -11,6 +19,9 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin): def process(self, instance): + if not cmds.pluginInfo("vrayformaya", query=True, loaded=True): + return + shapes = cmds.ls(instance, shapes=True, type="mesh") @@ -20,5 +31,11 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin): source=True) or [] vray_meshes = cmds.ls(inputs, type='VRayMesh') if vray_meshes: - raise RuntimeError("Meshes that are VRayMeshes shouldn't " - "be pointcached: {0}".format(vray_meshes)) + raise PublishValidationError( + "Meshes that are V-Ray Proxies should not be in an Alembic " + "pointcache.\n" + "Found V-Ray proxies:\n\n{}".format( + _as_report_list(sorted(vray_meshes)) + 
), + title="V-Ray Proxies in pointcache" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_node_ids.py index 796f4c8d76..0c7d647014 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids.py @@ -1,6 +1,9 @@ import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + ValidatePipelineOrder, + PublishXmlValidationError +) import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -34,8 +37,14 @@ class ValidateNodeIDs(pyblish.api.InstancePlugin): # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found without " - "IDs: {0}".format(invalid)) + names = "\n".join( + "- {}".format(node) for node in invalid + ) + raise PublishXmlValidationError( + plugin=self, + message="Nodes found without IDs: {}".format(invalid), + formatting_data={"nodes": names} + ) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py index 68c47f3a96..643c970463 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py @@ -1,12 +1,10 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( - RepairAction, - ValidateContentsOrder, -) + PublishValidationError, RepairAction, ValidateContentsOrder) class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): @@ -35,8 +33,9 @@ class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): # if a deformer has been created on the shape invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Shapes found that are considered 'Deformed'" - "without object ids: {0}".format(invalid)) + raise PublishValidationError( + ("Shapes found that are considered 'Deformed'" + "without object ids: {0}").format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py index b2f28fd4e5..f15aa2efa8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py @@ -1,10 +1,11 @@ import pyblish.api -from openpype.client import get_assets -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action +from openpype.client import get_assets from openpype.hosts.maya.api import lib +from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + PublishValidationError, ValidatePipelineOrder) class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): @@ -29,9 +30,9 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found asset IDs which are not related to " - "current project in instance: " - "`%s`" % instance.name) + raise PublishValidationError( + ("Found asset IDs which are not related to " + "current project in instance: `{}`").format(instance.name)) @classmethod def 
get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py index f901dc58c4..52e706fec9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py @@ -1,11 +1,13 @@ import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidatePipelineOrder) -class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): +class ValidateNodeIDsRelated(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate nodes have a related Colorbleed Id to the instance.data[asset] """ @@ -23,12 +25,15 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): def process(self, instance): """Process all nodes in instance (including hierarchy)""" + if not self.is_active(instance.data): + return + # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes IDs found that are not related to asset " - "'{}' : {}".format(instance.data['asset'], - invalid)) + raise PublishValidationError( + ("Nodes IDs found that are not related to asset " + "'{}' : {}").format(instance.data['asset'], invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py index f7a5e6e292..61386fc939 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py @@ -1,7 +1,10 @@ from collections import defaultdict import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + ValidatePipelineOrder, + PublishValidationError +) import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -29,8 +32,13 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin): # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found with non-unique " - "asset IDs: {0}".format(invalid)) + label = "Nodes found with non-unique asset IDs" + raise PublishValidationError( + message="{}: {}".format(label, invalid), + title="Non-unique asset ids on nodes", + description="{}\n- {}".format(label, + "\n- ".join(sorted(invalid))) + ) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py index 6135c9c695..cb5c68e4ab 100644 --- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py @@ -4,7 +4,12 @@ from maya import cmds import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.hosts.maya.api.lib import pairwise +from openpype.hosts.maya.api.action import SelectInvalidAction +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): @@ -16,31 +21,36 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): hosts = ['maya'] families = ["workfile"] label = "Plug-in Path Attributes" + actions = 
[SelectInvalidAction]  -    def get_invalid(self, instance): +    # Attributes are defined in project settings +    attribute = {} + +    @classmethod +    def get_invalid(cls, instance):         invalid = list()  -        # get the project setting -        validate_path = ( -            instance.context.data["project_settings"]["maya"]["publish"] -        ) -        file_attr = validate_path["ValidatePluginPathAttributes"]["attribute"] -        if not file_attr: +        file_attrs = cls.attribute +        if not file_attrs:             return invalid  -        # get the nodes and file attributes -        for node, attr in file_attr.items(): -            # check the related nodes -            targets = cmds.ls(type=node) +        # Consider only valid node types to avoid "Unknown object type" warning +        all_node_types = set(cmds.allNodeTypes()) +        node_types = [ +            key for key in file_attrs.keys() +            if key in all_node_types +        ]  -            for target in targets: -                # get the filepath -                file_attr = "{}.{}".format(target, attr) -                filepath = cmds.getAttr(file_attr) +        for node, node_type in pairwise(cmds.ls(type=node_types, +                                                showType=True)): +            # get the filepath +            file_attr = "{}.{}".format(node, file_attrs[node_type]) +            filepath = cmds.getAttr(file_attr)  -                if filepath and not os.path.exists(filepath): -                    self.log.error("File {0} not exists".format(filepath))  # noqa -                    invalid.append(target) +            if filepath and not os.path.exists(filepath): +                cls.log.error("{} '{}' uses non-existing filepath: {}" +                              .format(node_type, node, filepath)) +                invalid.append(node)          return invalid @@ -48,5 +58,16 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):         """Process all directories Set as Filenames in Non-Maya Nodes"""         invalid = self.get_invalid(instance)         if invalid: -            raise RuntimeError("Non-existent Path " -                               "found: {0}".format(invalid)) +            raise PublishValidationError( +                title="Plug-in Path Attributes", +                message="Non-existent filepath found on nodes: {}".format( +                    ", ".join(invalid) +                ), +                description=( +                    "## Plug-in nodes use invalid filepaths\n" +                    "The workfile contains nodes from plug-ins that use " +                    "filepaths which do not exist.\n\n" +                    "Please make sure their filepaths are correct and the " +                    "files exist on disk." +                ) +            ) diff --git a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py index 78bb022785..030e41ca1f 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py @@ -1,10 +1,11 @@ -from maya import cmds +import os  import pyblish.api + +from maya import cmds + from openpype.pipeline.publish import ( -    RepairAction, -    ValidateContentsOrder, -) +    PublishValidationError, RepairAction, ValidateContentsOrder)   class ValidateRenderImageRule(pyblish.api.InstancePlugin): @@ -24,15 +25,19 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):      def process(self, instance):  -        required_images_rule = self.get_default_render_image_folder(instance) -        current_images_rule = cmds.workspace(fileRuleEntry="images") - -        assert current_images_rule == required_images_rule, ( -            "Invalid workspace `images` file rule value: '{}'. " -            "Must be set to: '{}'".format( -                current_images_rule, required_images_rule -            ) +        required_images_rule = os.path.normpath( +            self.get_default_render_image_folder(instance)         ) +        current_images_rule = os.path.normpath( +            cmds.workspace(fileRuleEntry="images") +        ) + +        if current_images_rule != required_images_rule: +            raise PublishValidationError( +                ( +                    "Invalid workspace `images` file rule value: '{}'. 
" + "Must be set to: '{}'" + ).format(current_images_rule, required_images_rule)) @classmethod def repair(cls, instance): @@ -44,8 +49,17 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin): cmds.workspace(fileRule=("images", required_images_rule)) cmds.workspace(saveWorkspace=True) - @staticmethod - def get_default_render_image_folder(instance): + @classmethod + def get_default_render_image_folder(cls, instance): + staging_dir = instance.data.get("stagingDir") + if staging_dir: + cls.log.debug( + "Staging dir found: \"{}\". Ignoring setting from " + "`project_settings/maya/RenderSettings/" + "default_render_image_folder`.".format(staging_dir) + ) + return staging_dir + return instance.context.data.get('project_settings')\ .get('maya') \ .get('RenderSettings') \ diff --git a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py index 67ece75af8..9d4410186b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError, +) class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin): @@ -31,5 +34,7 @@ class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Renderable default cameras " - "found: {0}".format(invalid)) + raise PublishValidationError( + title="Rendering default cameras", + message="Renderable default cameras " + "found: {0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py index 77322fefd5..2c0d604175 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py @@ -5,7 +5,10 @@ from maya import cmds import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib_rendersettings import RenderSettings -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): @@ -28,7 +31,7 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid cameras for render.") + raise PublishValidationError("Invalid cameras for render.") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py index 7919a6eaa1..f8de983e06 100644 --- a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py +++ b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py @@ -1,8 +1,9 @@ import pyblish.api -from openpype.client import get_subset_by_name import openpype.hosts.maya.api.action +from openpype.client import get_subset_by_name from openpype.pipeline import legacy_io +from openpype.pipeline.publish import PublishValidationError class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin): @@ 
-30,7 +31,8 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):      def process(self, instance):         invalid = self.get_invalid(instance)         if invalid: -            raise RuntimeError("Found unregistered subsets: {}".format(invalid)) +            raise PublishValidationError( +                "Found unregistered subsets: {}".format(invalid))      def get_invalid(self, instance):         invalid = [] diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py index 71b91b8e54..dccb4ade78 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py +++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py @@ -9,6 +9,7 @@ import pyblish.api  from openpype.pipeline.publish import (     RepairAction,     ValidateContentsOrder, +    PublishValidationError, ) from openpype.hosts.maya.api import lib @@ -112,17 +113,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):      def process(self, instance):          invalid = self.get_invalid(instance) -        assert invalid is False, ("Invalid render settings " -                                  "found for '{}'!".format(instance.name)) +        if invalid: +            raise PublishValidationError( +                title="Invalid Render Settings", +                message=("Invalid render settings found " +                         "for '{}'!".format(instance.name)) +            )      @classmethod     def get_invalid(cls, instance):          invalid = False -        multipart = False          renderer = instance.data['renderer'] -        layer = instance.data['setMembers'] +        layer = instance.data['renderlayer']         cameras = instance.data.get("cameras", [])          # Get the node attributes for current renderer @@ -280,7 +284,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):                 render_value = cmds.getAttr(                     "{}.{}".format(node, data["attribute"])                 )             except RuntimeError:                 invalid = True                 cls.log.error(                     "Cannot get value of {}.{}".format( diff --git a/openpype/hosts/maya/plugins/publish/validate_resolution.py b/openpype/hosts/maya/plugins/publish/validate_resolution.py new file mode 100644 index 0000000000..91b473b250 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_resolution.py @@ -0,0 +1,117 @@ +import pyblish.api +from openpype.pipeline import ( +    PublishValidationError, +    OptionalPyblishPluginMixin +) +from maya import cmds +from openpype.pipeline.publish import RepairAction +from openpype.hosts.maya.api import lib +from openpype.hosts.maya.api.lib import reset_scene_resolution + + +class ValidateResolution(pyblish.api.InstancePlugin, +                         OptionalPyblishPluginMixin): +    """Validate the render resolution setting is aligned with the DB""" + +    order = pyblish.api.ValidatorOrder +    families = ["renderlayer"] +    hosts = ["maya"] +    label = "Validate Resolution" +    actions = [RepairAction] +    optional = True + +    def process(self, instance): +        if not self.is_active(instance.data): +            return +        invalid = self.get_invalid_resolution(instance) +        if invalid: +            raise PublishValidationError( +                "Render resolution is invalid. See log for details.", +                description=( +                    "Wrong render resolution setting. " +                    "Please use repair button to fix it.\n\n" +                    "If current renderer is V-Ray, " +                    "make sure vraySettings node has been created."
+ ) + ) + + @classmethod + def get_invalid_resolution(cls, instance): + width, height, pixelAspect = cls.get_db_resolution(instance) + current_renderer = instance.data["renderer"] + layer = instance.data["renderlayer"] + invalid = False + if current_renderer == "vray": + vray_node = "vraySettings" + if cmds.objExists(vray_node): + current_width = lib.get_attr_in_layer( + "{}.width".format(vray_node), layer=layer) + current_height = lib.get_attr_in_layer( + "{}.height".format(vray_node), layer=layer) + current_pixelAspect = lib.get_attr_in_layer( + "{}.pixelAspect".format(vray_node), layer=layer + ) + else: + cls.log.error( + "Can't detect VRay resolution because there is no node " + "named: `{}`".format(vray_node) + ) + return True + else: + current_width = lib.get_attr_in_layer( + "defaultResolution.width", layer=layer) + current_height = lib.get_attr_in_layer( + "defaultResolution.height", layer=layer) + current_pixelAspect = lib.get_attr_in_layer( + "defaultResolution.pixelAspect", layer=layer + ) + if current_width != width or current_height != height: + cls.log.error( + "Render resolution {}x{} does not match " + "asset resolution {}x{}".format( + current_width, current_height, + width, height + )) + invalid = True + if current_pixelAspect != pixelAspect: + cls.log.error( + "Render pixel aspect {} does not match " + "asset pixel aspect {}".format( + current_pixelAspect, pixelAspect + )) + invalid = True + return invalid + + @classmethod + def get_db_resolution(cls, instance): + asset_doc = instance.data["assetEntity"] + project_doc = instance.context.data["projectEntity"] + for data in [asset_doc["data"], project_doc["data"]]: + if ( + "resolutionWidth" in data and + "resolutionHeight" in data and + "pixelAspect" in data + ): + width = data["resolutionWidth"] + height = data["resolutionHeight"] + pixelAspect = data["pixelAspect"] + return int(width), int(height), float(pixelAspect) + + # Defaults if not found in asset document or project document + return 1920, 1080, 1.0 + + @classmethod + def repair(cls, instance): + # Usually without renderlayer overrides the renderlayers + # all share the same resolution value - so fixing the first + # will have fixed all the others too. 
It's much faster to +        # check whether it's invalid first instead of switching +        # into all layers individually +        if not cls.get_invalid_resolution(instance): +            cls.log.debug( +                "Nothing to repair on instance: {}".format(instance) +            ) +            return +        layer_node = instance.data['setMembers'] +        with lib.renderlayer(layer_node): +            reset_scene_resolution() diff --git a/openpype/hosts/maya/plugins/publish/validate_resources.py b/openpype/hosts/maya/plugins/publish/validate_resources.py index b7bd47ad0a..7d894a2bef 100644 --- a/openpype/hosts/maya/plugins/publish/validate_resources.py +++ b/openpype/hosts/maya/plugins/publish/validate_resources.py @@ -2,7 +2,10 @@ import os  from collections import defaultdict  import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( +    ValidateContentsOrder, +    PublishValidationError +)   class ValidateResources(pyblish.api.InstancePlugin): @@ -54,4 +57,4 @@ class ValidateResources(pyblish.api.InstancePlugin):         )          if invalid_resources: -            raise RuntimeError("Invalid resources in instance.") +            raise PublishValidationError("Invalid resources in instance.") diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py index 1096c95486..106b4024e2 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py @@ -1,7 +1,10 @@ -from maya import cmds - import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from maya import cmds +import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( +    PublishValidationError, +    ValidateContentsOrder +)   class ValidateRigContents(pyblish.api.InstancePlugin): @@ -17,74 +20,127 @@ class ValidateRigContents(pyblish.api.InstancePlugin):     label = "Rig Contents"     hosts = ["maya"]     families = ["rig"] +    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]      accepted_output = ["mesh", "transform"]     accepted_controllers = ["transform"]      def process(self, instance): +        invalid = self.get_invalid(instance) +        if invalid: +            raise PublishValidationError( +                "Invalid rig content. See log for details.")  -        objectsets = ("controls_SET", "out_SET") -        missing = [obj for obj in objectsets if obj not in instance] -        assert not missing, ("%s is missing %s" % (instance, missing)) +    @classmethod +    def get_invalid(cls, instance):  -        # Ensure there are at least some transforms or dag nodes -        # in the rig instance -        set_members = instance.data['setMembers'] -        if not cmds.ls(set_members, type="dagNode", long=True): -            raise RuntimeError("No dag nodes in the pointcache instance. 
" - "(Empty instance?)") + # Find required sets by suffix + required, rig_sets = cls.get_nodes(instance) + + cls.validate_missing_objectsets(instance, required, rig_sets) + + controls_set = rig_sets["controls_SET"] + out_set = rig_sets["out_SET"] # Ensure contents in sets and retrieve long path for all objects - output_content = cmds.sets("out_SET", query=True) or [] - assert output_content, "Must have members in rig out_SET" + output_content = cmds.sets(out_set, query=True) or [] + if not output_content: + raise PublishValidationError("Must have members in rig out_SET") output_content = cmds.ls(output_content, long=True) - controls_content = cmds.sets("controls_SET", query=True) or [] - assert controls_content, "Must have members in rig controls_SET" + controls_content = cmds.sets(controls_set, query=True) or [] + if not controls_content: + raise PublishValidationError( + "Must have members in rig controls_SET" + ) controls_content = cmds.ls(controls_content, long=True) - # Validate members are inside the hierarchy from root node - root_node = cmds.ls(set_members, assemblies=True) - hierarchy = cmds.listRelatives(root_node, allDescendents=True, - fullPath=True) - hierarchy = set(hierarchy) - - invalid_hierarchy = [] - for node in output_content: - if node not in hierarchy: - invalid_hierarchy.append(node) - for node in controls_content: - if node not in hierarchy: - invalid_hierarchy.append(node) + rig_content = output_content + controls_content + invalid_hierarchy = cls.invalid_hierarchy(instance, rig_content) # Additional validations - invalid_geometry = self.validate_geometry(output_content) - invalid_controls = self.validate_controls(controls_content) + invalid_geometry = cls.validate_geometry(output_content) + invalid_controls = cls.validate_controls(controls_content) error = False if invalid_hierarchy: - self.log.error("Found nodes which reside outside of root group " + cls.log.error("Found nodes which reside outside of root group " "while they are set up for publishing." "\n%s" % invalid_hierarchy) error = True if invalid_controls: - self.log.error("Only transforms can be part of the controls_SET." + cls.log.error("Only transforms can be part of the controls_SET." "\n%s" % invalid_controls) error = True if invalid_geometry: - self.log.error("Only meshes can be part of the out_SET\n%s" + cls.log.error("Only meshes can be part of the out_SET\n%s" % invalid_geometry) error = True - if error: - raise RuntimeError("Invalid rig content. 
See log for details.") + return invalid_hierarchy + invalid_controls + invalid_geometry - def validate_geometry(self, set_members): - """Check if the out set passes the validations + @classmethod + def validate_missing_objectsets(cls, instance, + required_objsets, rig_sets): + """Validate missing objectsets in rig sets - Checks if all its set members are within the hierarchy of the root + Args: + instance (str): instance + required_objsets (list): list of objectset names + rig_sets (list): list of rig sets + + Raises: + PublishValidationError: When the error is raised, it will show + which instance has the missing object sets + """ + missing = [ + key for key in required_objsets if key not in rig_sets + ] + if missing: + raise PublishValidationError( + "%s is missing sets: %s" % (instance, ", ".join(missing)) + ) + + @classmethod + def invalid_hierarchy(cls, instance, content): + """ + Check if all rig set members are within the hierarchy of the rig root + + Args: + instance (str): instance + content (list): list of content from rig sets + + Raises: + PublishValidationError: It means no dag nodes in + the rig instance + + Returns: + list: invalid hierarchy + """ + # Ensure there are at least some transforms or dag nodes + # in the rig instance + set_members = instance.data['setMembers'] + if not cmds.ls(set_members, type="dagNode", long=True): + raise PublishValidationError( + "No dag nodes in the rig instance. " + "(Empty instance?)" + ) + # Validate members are inside the hierarchy from root node + root_nodes = cmds.ls(set_members, assemblies=True, long=True) + hierarchy = cmds.listRelatives(root_nodes, allDescendents=True, + fullPath=True) + root_nodes + hierarchy = set(hierarchy) + invalid_hierarchy = [] + for node in content: + if node not in hierarchy: + invalid_hierarchy.append(node) + return invalid_hierarchy + + @classmethod + def validate_geometry(cls, set_members): + """ Checks if the node types of the set members valid Args: @@ -103,15 +159,13 @@ class ValidateRigContents(pyblish.api.InstancePlugin): fullPath=True) or [] all_shapes = cmds.ls(set_members + shapes, long=True, shapes=True) for shape in all_shapes: - if cmds.nodeType(shape) not in self.accepted_output: + if cmds.nodeType(shape) not in cls.accepted_output: invalid.append(shape) - return invalid - - def validate_controls(self, set_members): - """Check if the controller set passes the validations - - Checks if all its set members are within the hierarchy of the root + @classmethod + def validate_controls(cls, set_members): + """ + Checks if the control set members are allowed node types. 
Checks if the node types of the set members valid          Args: @@ -125,7 +179,80 @@ class ValidateRigContents(pyblish.api.InstancePlugin):         # Validate control types         invalid = []         for node in set_members: -            if cmds.nodeType(node) not in self.accepted_controllers: +            if cmds.nodeType(node) not in cls.accepted_controllers:                 invalid.append(node)          return invalid + +    @classmethod +    def get_nodes(cls, instance): +        """Get the target objectsets and rig sets nodes + +        Args: +            instance (str): instance + +        Returns: +            tuple: 2-tuple of list of objectsets, +                list of rig sets nodes +        """ +        objectsets = ["controls_SET", "out_SET"] +        rig_sets_nodes = instance.data.get("rig_sets", []) +        return objectsets, rig_sets_nodes + + +class ValidateSkeletonRigContents(ValidateRigContents): +    """Ensure skeleton rigs contains pipeline-critical content + +    The rigs optionally contain at least two object sets: +        "skeletonMesh_SET" - Set of the skinned meshes +        with bone hierarchies + +    """ + +    order = ValidateContentsOrder +    label = "Skeleton Rig Contents" +    hosts = ["maya"] +    families = ["rig.fbx"] + +    @classmethod +    def get_invalid(cls, instance): +        objectsets, skeleton_mesh_nodes = cls.get_nodes(instance) +        cls.validate_missing_objectsets( +            instance, objectsets, instance.data["rig_sets"]) + +        # Ensure contents in sets and retrieve long path for all objects +        output_content = cmds.ls(skeleton_mesh_nodes, long=True) + +        invalid_hierarchy = cls.invalid_hierarchy( +            instance, output_content) +        invalid_geometry = cls.validate_geometry(output_content) + +        error = False +        if invalid_hierarchy: +            cls.log.error("Found nodes which reside outside of root group " +                          "while they are set up for publishing." +                          "\n%s" % invalid_hierarchy) +            error = True +        if invalid_geometry: +            cls.log.error("Only meshes can be part of the " +                          "skeletonMesh_SET." +                          "\n%s" % invalid_geometry) +            error = True +        if error: +            return invalid_hierarchy + invalid_geometry + +    @classmethod +    def get_nodes(cls, instance): +        """Get the target objectsets and rig sets nodes + +        Args: +            instance (str): instance + +        Returns: +            tuple: 2-tuple of list of objectsets, +                list of rig sets nodes +        """ +        objectsets = ["skeletonMesh_SET"] +        skeleton_mesh_nodes = instance.data.get("skeleton_mesh", []) +        return objectsets, skeleton_mesh_nodes diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py index 1e42abdcd9..82248c57b3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py @@ -5,6 +5,7 @@ import pyblish.api  from openpype.pipeline.publish import (     ValidateContentsOrder,     RepairAction, +    PublishValidationError ) import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import undo_chunk @@ -51,22 +52,30 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):     def process(self, instance):         invalid = self.get_invalid(instance)         if invalid: -            raise RuntimeError('{} failed, see log ' -                               'information'.format(self.label)) +            raise PublishValidationError( +                '{} failed, see log information'.format(self.label) +            )      @classmethod     def get_invalid(cls, instance): -        controllers_sets = [i for i in instance if i == "controls_SET"] -        controls = cmds.sets(controllers_sets, query=True) -        assert controls, "Must have 'controls_SET' in rig instance" +        controls_set = cls.get_node(instance) +        if not controls_set: +            cls.log.error( +                "Must have 'controls_SET' in rig instance" +            ) +            return [instance.data["instance_node"]] + +        controls = cmds.sets(controls_set, query=True)          # Ensure all controls are within the top group         lookup = set(instance[:]) -        assert all(control in lookup for control in cmds.ls(controls, -                                                            long=True)), ( -            "All controls must be inside the rig's group." -        ) +        if not all(control in lookup for control in cmds.ls(controls, +                                                            long=True)): +            cls.log.error( +                "All controls must be inside the rig's group." +            ) +            return [controls_set]          # Validate all controls         has_connections = list() @@ -180,9 +189,17 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):      @classmethod     def repair(cls, instance): +        controls_set = cls.get_node(instance) +        if not controls_set: +            cls.log.error( +                "Unable to repair because no 'controls_SET' found in rig " +                "instance: {}".format(instance) +            ) +            return +         # Use a single undo chunk         with undo_chunk(): -            controls = cmds.sets("controls_SET", query=True) +            controls = cmds.sets(controls_set, query=True)             for control in controls:                  # Lock visibility @@ -211,3 +228,64 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):                 default = cls.CONTROLLER_DEFAULTS[attr]                 cls.log.info("Setting %s to %s" % (plug, default))                 cmds.setAttr(plug, default) + +    @classmethod +    def get_node(cls, instance): +        """Get target object nodes from controls_SET + +        Args: +            instance (str): instance + +        Returns: +            list: list of object nodes from controls_SET +        """ +        return instance.data["rig_sets"].get("controls_SET") + + +class ValidateSkeletonRigControllers(ValidateRigControllers): +    """Validate rig controller for skeletonMesh_SET + +    Controls must have the transformation attributes on their default +    values of translate zero, rotate zero and scale one when they are +    unlocked attributes. + +    Unlocked keyable attributes may not have any incoming connections. 
If + these connections are required for the rig then lock the attributes. + + The visibility attribute must be locked. + + Note that `repair` will: + - Lock all visibility attributes + - Reset all default values for translate, rotate, scale + - Break all incoming connections to keyable attributes + + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Controllers" + hosts = ["maya"] + families = ["rig.fbx"] + + # Default controller values + CONTROLLER_DEFAULTS = { + "translateX": 0, + "translateY": 0, + "translateZ": 0, + "rotateX": 0, + "rotateY": 0, + "rotateZ": 0, + "scaleX": 1, + "scaleY": 1, + "scaleZ": 1 + } + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get("skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py index 55b2ebd6d8..03f6a5f1ab 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py @@ -5,7 +5,9 @@ import pyblish.api from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + PublishValidationError ) + from openpype.hosts.maya.api import lib import openpype.hosts.maya.api.action @@ -48,17 +50,17 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError('{} failed, see log ' + raise PublishValidationError('{} failed, see log ' 'information'.format(self.label)) @classmethod def get_invalid(cls, instance): - controllers_sets = [i for i in instance if i == "controls_SET"] - if not controllers_sets: + controls_set = instance.data["rig_sets"].get("controls_SET") + if not controls_set: return [] - controls = cmds.sets(controllers_sets, query=True) or [] + controls = cmds.sets(controls_set, query=True) or [] if not controls: return [] diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py index 03ba381f8d..80ac0f27e6 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py @@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -37,16 +38,19 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): # if a deformer has been created on the shape invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found with mismatching " - "IDs: {0}".format(invalid)) + raise PublishValidationError( + "Nodes found with mismatching IDs: {0}".format(invalid) + ) @classmethod def get_invalid(cls, instance): """Get all nodes which do not match the criteria""" - invalid = [] + out_set = cls.get_node(instance) + if not out_set: + return [] - out_set = next(x for x in instance if x.endswith("out_SET")) + invalid = [] members = cmds.sets(out_set, query=True) shapes = cmds.ls(members, dag=True, @@ -81,3 +85,45 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): continue lib.set_id(node, sibling_id, overwrite=True) + + @classmethod + def get_node(cls, instance): + 
"""Get target object nodes from out_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from out_SET + """ + return instance.data["rig_sets"].get("out_SET") + + +class ValidateSkeletonRigOutSetNodeIds(ValidateRigOutSetNodeIds): + """Validate if deformed shapes have related IDs to the original shapes + from skeleton set. + + When a deformer is applied in the scene on a referenced mesh that already + had deformers then Maya will create a new shape node for the mesh that + does not have the original id. This validator checks whether the ids are + valid on all the shape nodes in the instance. + + """ + + order = ValidateContentsOrder + families = ["rig.fbx"] + hosts = ['maya'] + label = 'Skeleton Rig Out Set Node Ids' + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get( + "skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index cba70a21b7..343d8e6924 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -9,6 +9,7 @@ from openpype.hosts.maya.api.lib import get_id, set_id from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -34,7 +35,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance, compute=True) if invalid: - raise RuntimeError("Found nodes with mismatched IDs.") + raise PublishValidationError("Found nodes with mismatched IDs.") @classmethod def get_invalid(cls, instance, compute=False): @@ -46,7 +47,10 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): invalid = {} if compute: - out_set = next(x for x in instance if x.endswith("out_SET")) + out_set = cls.get_node(instance) + if not out_set: + instance.data["mismatched_output_ids"] = invalid + return invalid instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True) instance_nodes = cmds.ls(instance_nodes, long=True) @@ -107,7 +111,44 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): set_id(instance_node, id_to_set, overwrite=True) if multiple_ids_match: - raise RuntimeError( + raise PublishValidationError( "Multiple matched ids found. Please repair manually: " "{}".format(multiple_ids_match) ) + + @classmethod + def get_node(cls, instance): + """Get target object nodes from out_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from out_SET + """ + return instance.data["rig_sets"].get("out_SET") + + +class ValidateSkeletonRigOutputIds(ValidateRigOutputIds): + """Validate rig output ids from the skeleton sets. + + Ids must share the same id as similarly named nodes in the scene. This is + to ensure the id from the model is preserved through animation. 
+ + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Output Ids" + hosts = ["maya"] + families = ["rig.fbx"] + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get("skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py index f1fa4d3c4c..b48d67e416 100644 --- a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py +++ b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py @@ -1,10 +1,10 @@ import os import maya.cmds as cmds - import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidatePipelineOrder) def is_subdir(path, root_dir): @@ -37,10 +37,11 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin): scene_name = cmds.file(query=True, sceneName=True) if not scene_name: - raise RuntimeError("Scene hasn't been saved. Workspace can't be " - "validated.") + raise PublishValidationError( + "Scene hasn't been saved. Workspace can't be validated.") root_dir = cmds.workspace(query=True, rootDirectory=True) if not is_subdir(scene_name, root_dir): - raise RuntimeError("Maya workspace is not set correctly.") + raise PublishValidationError( + "Maya workspace is not set correctly.") diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py index 034db471da..36bb2c1fee 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py @@ -1,13 +1,15 @@ import re -from maya import cmds import pyblish.api +from maya import cmds import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder) -class ValidateShaderName(pyblish.api.InstancePlugin): +class ValidateShaderName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate shader name assigned. 
It should be _<*>_SHD @@ -23,12 +25,14 @@ class ValidateShaderName(pyblish.api.InstancePlugin): # The default connections to check def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found shapes with invalid shader names " - "assigned: " - "\n{}".format(invalid)) + raise PublishValidationError( + ("Found shapes with invalid shader names " + "assigned:\n{}").format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py index 4ab669f46b..d8ad366ed8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py @@ -8,6 +8,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + OptionalPyblishPluginMixin ) @@ -15,7 +16,8 @@ def short_name(node): return node.rsplit("|", 1)[-1].rsplit(":", 1)[-1] -class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): +class ValidateShapeDefaultNames(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validates that Shape names are using Maya's default format. When you create a new polygon cube Maya will name the transform @@ -77,6 +79,8 @@ class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): def process(self, instance): """Process all the shape nodes in the instance""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py index 7a7e9a0aee..c7af6a60db 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py @@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + PublishValidationError ) @@ -67,5 +68,30 @@ class ValidateShapeZero(pyblish.api.Validator): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Shapes found with non-zero component tweaks: " - "{0}".format(invalid)) + raise PublishValidationError( + title="Shape Component Tweaks", + message="Shapes found with non-zero component tweaks: '{}'" + "".format(", ".join(invalid)), + description=( + "## Shapes found with component tweaks\n" + "Shapes were detected that have component tweaks on their " + "components. Please remove the component tweaks to " + "continue.\n\n" + "### Repair\n" + "The repair action will try to *freeze* the component " + "tweaks into the shapes, which is usually the correct fix " + "if the mesh has no construction history (= has its " + "history deleted)."), + detail=( + "Maya allows to store component tweaks within shape nodes " + "which are applied between its `inMesh` and `outMesh` " + "connections resulting in the output of a shape node " + "differing from the input. 
We usually want to avoid this " + "for published meshes (in particular for Maya scenes) as " + "it can have unintended results when using these meshes " + "as intermediate meshes since it applies positional " + "differences without being visible edits in the node " + "graph.\n\n" + "These tweaks are traditionally stored in the `.pnts` " + "attribute of shapes.") + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py index 398b6fb7bf..9084374c76 100644 --- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py +++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py @@ -28,7 +28,7 @@ class ValidateSkeletalMeshHierarchy(pyblish.api.InstancePlugin): parent.split("|")[1] for parent in (joints_parents + geo_parents) } - self.log.info(parents_set) + self.log.debug(parents_set) if len(set(parents_set)) > 2: raise PublishXmlValidationError( diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py index c0a9ddcf69..701c80a8af 100644 --- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py +++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py @@ -7,8 +7,10 @@ from openpype.hosts.maya.api.action import ( from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) + from maya import cmds @@ -28,7 +30,7 @@ class ValidateSkeletalMeshTriangulated(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( + raise PublishValidationError( "The following objects needs to be triangulated: " "{}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py new file mode 100644 index 0000000000..1dbe1c454c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py @@ -0,0 +1,40 @@ +# -*- coding: utf-8 -*- +"""Plugin for validating naming conventions.""" +from maya import cmds + +import pyblish.api + +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) + + +class ValidateSkeletonTopGroupHierarchy(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validates top group hierarchy in the SETs + Make sure the object inside the SETs are always top + group of the hierarchy + + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Top Group Hierarchy" + families = ["rig.fbx"] + + def process(self, instance): + invalid = [] + skeleton_mesh_data = instance.data("skeleton_mesh", []) + if skeleton_mesh_data: + invalid = self.get_top_hierarchy(skeleton_mesh_data) + if invalid: + raise PublishValidationError( + "The skeletonMesh_SET includes the object which " + "is not at the top hierarchy: {}".format(invalid)) + + def get_top_hierarchy(self, targets): + targets = cmds.ls(targets, long=True) # ensure long names + non_top_hierarchy_list = [ + target for target in targets if target.count("|") > 2 + ] + return non_top_hierarchy_list diff --git a/openpype/hosts/maya/plugins/publish/validate_step_size.py b/openpype/hosts/maya/plugins/publish/validate_step_size.py index 294458f63c..493a6ee65c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_step_size.py +++ 
b/openpype/hosts/maya/plugins/publish/validate_step_size.py @@ -1,7 +1,10 @@ import pyblish.api  import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( +    PublishValidationError, +    ValidateContentsOrder +)   class ValidateStepSize(pyblish.api.InstancePlugin): @@ -40,4 +43,5 @@ class ValidateStepSize(pyblish.api.InstancePlugin):          invalid = self.get_invalid(instance)         if invalid: -            raise RuntimeError("Invalid instances found: {0}".format(invalid)) +            raise PublishValidationError( +                "Invalid instances found: {0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py index b2a83a80fb..cbc7ee9d5c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py +++ b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py @@ -5,10 +5,15 @@ from maya import cmds  import pyblish.api  import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( +    ValidateContentsOrder, +    OptionalPyblishPluginMixin, +    PublishValidationError +)  -class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin): +class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin, +                                    OptionalPyblishPluginMixin):     """Validates transform suffix based on the type of its children shapes.      Suffices must be: @@ -47,8 +52,8 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):     def get_table_for_invalid(cls):         ss = []         for k, v in cls.SUFFIX_NAMING_TABLE.items(): -            ss.append(" - {}: {}".format(k, ", ".join(v))) -        return "\n".join(ss) +            ss.append(" - {}: {}".format(k, ", ".join(v))) +        return "<br>".join(ss)      @staticmethod     def is_valid_name(node_name, shape_type, @@ -110,9 +115,20 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):             instance (:class:`pyblish.api.Instance`): published instance.          """ +        if not self.is_active(instance.data): +            return +         invalid = self.get_invalid(instance)         if invalid:             valid = self.get_table_for_invalid() -            raise ValueError("Incorrectly named geometry " -                             "transforms: {0}, accepted suffixes are: " -                             "\n{1}".format(invalid, valid)) + +            names = "<br>".join( +                " - {}".format(node) for node in invalid +            ) +            valid = valid.replace("\n", "<br>") + +            raise PublishValidationError( +                title="Invalid naming suffix", +                message="Valid suffixes are:<br>{0}<br><br>" +                    "Incorrectly named geometry transforms:<br>{1}" +                    "".format(valid, names)) diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py index abd9e00af1..906ff17ec9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py +++ b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py @@ -3,7 +3,10 @@ from maya import cmds  import pyblish.api  import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( +    ValidateContentsOrder, +    PublishValidationError +)   class ValidateTransformZero(pyblish.api.Validator): @@ -62,5 +65,14 @@ class ValidateTransformZero(pyblish.api.Validator):          invalid = self.get_invalid(instance)         if invalid: -            raise ValueError("Nodes found with transform " -                             "values: {0}".format(invalid)) + +        names = "<br>".join( +            " - {}".format(node) for node in invalid +        ) + +        raise PublishValidationError( +            title="Transform Zero", +            message="The model publish allows no transformations. You must" +                    " freeze transformations to continue.<br><br>" +                    "Nodes found with transform values: " +                    "{0}".format(names)) diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py index 1425190b82..58fa9d02bd 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py @@ -7,10 +7,15 @@ import pyblish.api  import openpype.hosts.maya.api.action from openpype.pipeline import legacy_io from openpype.settings import get_project_settings -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( +    ValidateContentsOrder, +    OptionalPyblishPluginMixin, +    PublishValidationError +)  -class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin): +class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin, +                                   OptionalPyblishPluginMixin):     """Validate name of Unreal Static Mesh      Unreals naming convention states that staticMesh should start with `SM` @@ -64,11 +69,8 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):          invalid = []  -        project_settings = get_project_settings( -            legacy_io.Session["AVALON_PROJECT"] -        )         collision_prefixes = ( -            project_settings +            instance.context.data["project_settings"]             ["maya"]             ["create"]             ["CreateUnrealStaticMesh"] @@ -131,16 +133,19 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):         return invalid      def process(self, instance): +        if not self.is_active(instance.data): +            return +         if not self.validate_mesh and not self.validate_collision: -            self.log.info("Validation of both mesh and collision names" -                          "is disabled.") +            self.log.debug("Validation of both mesh and collision names " +                           "is disabled.")             return          if not instance.data.get("collisionMembers", None): -            self.log.info("There are no collision objects to validate") +            self.log.debug("There are no collision objects to validate")             return          invalid = self.get_invalid(instance)         if invalid: -            raise RuntimeError("Model naming is invalid. See log.") +            raise PublishValidationError("Model naming is invalid. 
See log.") diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py index dd699735d9..a420dcb900 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py @@ -6,10 +6,12 @@ import pyblish.api from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + OptionalPyblishPluginMixin ) -class ValidateUnrealUpAxis(pyblish.api.ContextPlugin): +class ValidateUnrealUpAxis(pyblish.api.ContextPlugin, + OptionalPyblishPluginMixin): """Validate if Z is set as up axis in Maya""" optional = True @@ -21,6 +23,9 @@ class ValidateUnrealUpAxis(pyblish.api.ContextPlugin): actions = [RepairAction] def process(self, context): + if not self.is_active(context.data): + return + assert cmds.upAxis(q=True, axis=True) == "z", ( "Invalid axis set as up axis" ) diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py b/openpype/hosts/maya/plugins/publish/validate_visible_only.py index faf634f258..e72782e552 100644 --- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py +++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py @@ -2,7 +2,10 @@ import pyblish.api from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin): @@ -27,7 +30,7 @@ class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: start, end = self.get_frame_range(instance) - raise RuntimeError("No visible nodes found in " + raise PublishValidationError("No visible nodes found in " "frame range {}-{}.".format(start, end)) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_vray.py b/openpype/hosts/maya/plugins/publish/validate_vray.py index 045ac258a1..bef5967cc9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray.py @@ -1,7 +1,7 @@ from maya import cmds import pyblish.api -from openpype.pipeline import PublishValidationError +from openpype.pipeline.publish import PublishValidationError class ValidateVray(pyblish.api.InstancePlugin): diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py index 366f3bd10e..14571203ea 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py @@ -1,11 +1,9 @@ import pyblish.api +from maya import cmds + from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( - ValidateContentsOrder, - RepairAction, -) - -from maya import cmds + PublishValidationError, RepairAction, ValidateContentsOrder) class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): @@ -36,7 +34,7 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): vray_settings = cmds.ls("vraySettings", type="VRaySettingsNode") assert vray_settings, "Please ensure a VRay Settings Node is present" - renderlayer = instance.data['setMembers'] + renderlayer = instance.data['renderlayer'] if not lib.get_attr_in_layer(self.enabled_attr, layer=renderlayer): # If not distributed rendering enabled, 
ignore.. @@ -45,14 +43,15 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): # If distributed rendering is enabled but it is *not* set to ignore # during batch mode we invalidate the instance if not lib.get_attr_in_layer(self.ignored_attr, layer=renderlayer): - raise RuntimeError("Renderlayer has distributed rendering enabled " - "but is not set to ignore in batch mode.") + raise PublishValidationError( + ("Renderlayer has distributed rendering enabled " + "but is not set to ignore in batch mode.")) @classmethod def repair(cls, instance): - renderlayer = instance.data.get("setMembers") + renderlayer = instance.data.get("renderlayer") with lib.renderlayer(renderlayer): - cls.log.info("Enabling Distributed Rendering " - "ignore in batch mode..") + cls.log.debug("Enabling Distributed Rendering " + "ignore in batch mode..") cmds.setAttr(cls.ignored_attr, True) diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py index f49811c2c0..4474f08ba4 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py @@ -5,6 +5,7 @@ from openpype.pipeline.publish import ( context_plugin_should_run, RepairContextAction, ValidateContentsOrder, + PublishValidationError ) from maya import cmds @@ -26,7 +27,10 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin): invalid = self.get_invalid(context) if invalid: - raise RuntimeError("Found invalid VRay Translator settings!") + raise PublishValidationError( + message="Found invalid VRay Translator settings", + title=self.label + ) @classmethod def get_invalid(cls, context): @@ -35,7 +39,11 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin): # Get vraySettings node vray_settings = cmds.ls(type="VRaySettingsNode") - assert vray_settings, "Please ensure a VRay Settings Node is present" + if not vray_settings: + raise PublishValidationError( + "Please ensure a VRay Settings Node is present", + title=cls.label + ) node = vray_settings[0] diff --git a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py index 855a96e6b9..7b726de3a8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py +++ b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py @@ -3,6 +3,10 @@ import pyblish.api from maya import cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + PublishValidationError +) + class ValidateVrayProxyMembers(pyblish.api.InstancePlugin): @@ -19,7 +23,7 @@ class ValidateVrayProxyMembers(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("'%s' is invalid VRay Proxy for " + raise PublishValidationError("'%s' is invalid VRay Proxy for " "export!" 
% instance.name) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py index 06250f5779..a8085418e7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py @@ -54,7 +54,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin): # has any yeti callback set or not since if the callback # is there it wouldn't error and if it weren't then # nothing happens because there are no yeti nodes. - cls.log.info( + cls.log.debug( "Yeti is loaded but no yeti nodes were found. " "Callback validation skipped.." ) @@ -62,7 +62,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin): renderer = instance.data["renderer"] if renderer == "redshift": - cls.log.info("Redshift ignores any pre and post render callbacks") + cls.log.debug("Redshift ignores any pre and post render callbacks") return False callback_lookup = cls.callbacks.get(renderer, {}) diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py index 4842134b12..2b7249ad94 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py @@ -1,7 +1,11 @@ import pyblish.api import maya.cmds as cmds import openpype.hosts.maya.api.action -from openpype.pipeline.publish import RepairAction +from openpype.pipeline.publish import ( + RepairAction, + PublishValidationError +) + class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): @@ -23,7 +27,7 @@ class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes have incorrect I/O settings") + raise PublishValidationError("Nodes have incorrect I/O settings") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py index ebef44774d..50a27589ad 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): @@ -19,7 +22,7 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Yeti Rig has invalid input meshes") + raise PublishValidationError("Yeti Rig has invalid input meshes") @classmethod def get_invalid(cls, instance): @@ -34,8 +37,8 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): # Allow publish without input meshes. if not shapes: - cls.log.info("Found no input meshes for %s, skipping ..." - % instance) + cls.log.debug("Found no input meshes for %s, skipping ..." 
+ % instance) return [] # check if input node is part of groomRig instance diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py index 9914277721..455bf5291a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py @@ -1,5 +1,7 @@ import pyblish.api +from openpype.pipeline.publish import PublishValidationError + class ValidateYetiRigSettings(pyblish.api.InstancePlugin): """Validate Yeti Rig Settings have collected input connections. @@ -18,8 +20,9 @@ class ValidateYetiRigSettings(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Detected invalid Yeti Rig data. (See log) " - "Tip: Save the scene") + raise PublishValidationError( + ("Detected invalid Yeti Rig data. (See log) " + "Tip: Save the scene")) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py index ae6a999d98..f2899cdb37 100644 --- a/openpype/hosts/maya/startup/userSetup.py +++ b/openpype/hosts/maya/startup/userSetup.py @@ -1,7 +1,7 @@ import os from openpype.settings import get_project_settings -from openpype.pipeline import install_host +from openpype.pipeline import install_host, get_current_project_name from openpype.hosts.maya.api import MayaHost from maya import cmds @@ -12,10 +12,11 @@ install_host(host) print("Starting OpenPype usersetup...") -project_settings = get_project_settings(os.environ['AVALON_PROJECT']) +project_name = get_current_project_name() +settings = get_project_settings(project_name) # Loading plugins explicitly. -explicit_plugins_loading = project_settings["maya"]["explicit_plugins_loading"] +explicit_plugins_loading = settings["maya"]["explicit_plugins_loading"] if explicit_plugins_loading["enabled"]: def _explicit_load_plugins(): for plugin in explicit_plugins_loading["plugins_to_load"]: @@ -46,17 +47,16 @@ if bool(int(os.environ.get(key, "0"))): ) # Build a shelf. 
-shelf_preset = project_settings['maya'].get('project_shelf') - +shelf_preset = settings['maya'].get('project_shelf') if shelf_preset: - project = os.environ["AVALON_PROJECT"] - - icon_path = os.path.join(os.environ['OPENPYPE_PROJECT_SCRIPTS'], - project, "icons") + icon_path = os.path.join( + os.environ['OPENPYPE_PROJECT_SCRIPTS'], + project_name, + "icons") icon_path = os.path.abspath(icon_path) for i in shelf_preset['imports']: - import_string = "from {} import {}".format(project, i) + import_string = "from {} import {}".format(project_name, i) print(import_string) exec(import_string) diff --git a/openpype/hosts/maya/tools/mayalookassigner/app.py b/openpype/hosts/maya/tools/mayalookassigner/app.py index 64fc04dfc4..b5ce7ada34 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/app.py +++ b/openpype/hosts/maya/tools/mayalookassigner/app.py @@ -4,9 +4,9 @@ import logging from qtpy import QtWidgets, QtCore -from openpype.client import get_last_version_by_subset_id from openpype import style -from openpype.pipeline import legacy_io +from openpype.client import get_last_version_by_subset_id +from openpype.pipeline import get_current_project_name from openpype.tools.utils.lib import qt_app_context from openpype.hosts.maya.api.lib import ( assign_look_by_version, @@ -216,7 +216,7 @@ class MayaLookAssignerWindow(QtWidgets.QWidget): selection = self.assign_selected.isChecked() asset_nodes = self.asset_outliner.get_nodes(selection=selection) - project_name = legacy_io.active_project() + project_name = get_current_project_name() start = time.time() for i, (asset, item) in enumerate(asset_nodes.items()): diff --git a/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py b/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py index 0ce2b21dcd..076b0047bb 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py +++ b/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py @@ -10,6 +10,7 @@ from openpype.client import get_last_version_by_subset_name from openpype.hosts.maya import api from . 
import lib
from .alembic import get_alembic_ids_cache
+from .usd import is_usd_lib_supported, get_usd_ids_cache

log = logging.getLogger(__name__)

@@ -74,6 +75,13 @@ def get_nodes_by_id(standin):
         # Support alembic files directly
         return get_alembic_ids_cache(path)

+    elif (
+        is_usd_lib_supported and
+        any(path.endswith(ext) for ext in [".usd", ".usda", ".usdc"])
+    ):
+        # Support usd files directly
+        return get_usd_ids_cache(path)
+
     json_path = None
     for f in os.listdir(os.path.dirname(path)):
         if f.endswith(".json"):
diff --git a/openpype/hosts/maya/tools/mayalookassigner/commands.py b/openpype/hosts/maya/tools/mayalookassigner/commands.py
index c5e6c973cf..5cc4f84931 100644
--- a/openpype/hosts/maya/tools/mayalookassigner/commands.py
+++ b/openpype/hosts/maya/tools/mayalookassigner/commands.py
@@ -1,14 +1,14 @@
-from collections import defaultdict
-import logging
 import os
+import logging
+from collections import defaultdict

 import maya.cmds as cmds

-from openpype.client import get_asset_by_id
+from openpype.client import get_assets
 from openpype.pipeline import (
-    legacy_io,
     remove_container,
     registered_host,
+    get_current_project_name,
 )
 from openpype.hosts.maya.api import lib

@@ -126,18 +126,29 @@ def create_items_from_nodes(nodes):
         log.warning("No id hashes")
         return asset_view_items

-    project_name = legacy_io.active_project()
-    for _id, id_nodes in id_hashes.items():
-        asset = get_asset_by_id(project_name, _id, fields=["name"])
+    project_name = get_current_project_name()
+    asset_ids = set(id_hashes.keys())
+    asset_docs = get_assets(project_name, asset_ids, fields=["name"])
+    asset_docs_by_id = {
+        str(asset_doc["_id"]): asset_doc
+        for asset_doc in asset_docs
+    }
+    for asset_id, id_nodes in id_hashes.items():
+        asset_doc = asset_docs_by_id.get(asset_id)

         # Skip if asset id is not found
-        if not asset:
-            log.warning("Id not found in the database, skipping '%s'." % _id)
-            log.warning("Nodes: %s" % id_nodes)
+        if not asset_doc:
+            log.warning(
+                "Id found on {num} nodes for which no asset was found"
+                " in the database, skipping '{asset_id}'".format(
+                    num=len(id_nodes),
+                    asset_id=asset_id
+                )
+            )
             continue

         # Collect available look subsets for this asset
-        looks = lib.list_looks(asset["_id"])
+        looks = lib.list_looks(project_name, asset_doc["_id"])

         # Collect namespaces the asset is found in
         namespaces = set()
@@ -146,8 +157,8 @@ def create_items_from_nodes(nodes):
             namespaces.add(namespace)

         asset_view_items.append({
-            "label": asset["name"],
-            "asset": asset,
+            "label": asset_doc["name"],
+            "asset": asset_doc,
             "looks": looks,
             "namespaces": namespaces
         })
diff --git a/openpype/hosts/maya/tools/mayalookassigner/usd.py b/openpype/hosts/maya/tools/mayalookassigner/usd.py
new file mode 100644
index 0000000000..6b5cb2f0f5
--- /dev/null
+++ b/openpype/hosts/maya/tools/mayalookassigner/usd.py
@@ -0,0 +1,38 @@
+from collections import defaultdict
+
+try:
+    from pxr import Usd
+    is_usd_lib_supported = True
+except ImportError:
+    is_usd_lib_supported = False
+
+
+def get_usd_ids_cache(path):
+    # type: (str) -> dict
+    """Build an id to node mapping in a USD file.
+
+    Nodes without IDs are ignored.
+
+    Returns:
+        dict: Mapping of id to nodes in the USD file.
+ + """ + if not is_usd_lib_supported: + raise RuntimeError("No pxr.Usd python library available.") + + stage = Usd.Stage.Open(path) + ids = {} + for prim in stage.Traverse(): + attr = prim.GetAttribute("userProperties:cbId") + if not attr.IsValid(): + continue + value = attr.Get() + if not value: + continue + path = str(prim.GetPath()) + ids[path] = value + + cache = defaultdict(list) + for path, value in ids.items(): + cache[value].append(path) + return dict(cache) diff --git a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py index c875fec7f0..97fb832f71 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py +++ b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py @@ -6,7 +6,7 @@ import logging from maya import cmds from openpype.client import get_last_version_by_subset_name -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name import openpype.hosts.maya.lib as maya_lib from . import lib from .alembic import get_alembic_ids_cache @@ -76,7 +76,7 @@ def vrayproxy_assign_look(vrayproxy, subset="lookDefault"): asset_id = node_id.split(":", 1)[0] node_ids_by_asset_id[asset_id].add(node_id) - project_name = legacy_io.active_project() + project_name = get_current_project_name() for asset_id, node_ids in node_ids_by_asset_id.items(): # Get latest look version diff --git a/openpype/hosts/maya/tools/mayalookassigner/widgets.py b/openpype/hosts/maya/tools/mayalookassigner/widgets.py index f2df17e68c..82c37e2104 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/widgets.py +++ b/openpype/hosts/maya/tools/mayalookassigner/widgets.py @@ -90,15 +90,13 @@ class AssetOutliner(QtWidgets.QWidget): def get_all_assets(self): """Add all items from the current scene""" - items = [] with preserve_expanded_rows(self.view): with preserve_selection(self.view): self.clear() nodes = commands.get_all_asset_nodes() items = commands.create_items_from_nodes(nodes) self.add_items(items) - - return len(items) > 0 + return len(items) > 0 def get_selected_assets(self): """Add all selected items from the current scene""" diff --git a/openpype/hosts/nuke/api/__init__.py b/openpype/hosts/nuke/api/__init__.py index 1af5ff365d..c6ccd0baf1 100644 --- a/openpype/hosts/nuke/api/__init__.py +++ b/openpype/hosts/nuke/api/__init__.py @@ -50,6 +50,11 @@ from .utils import ( get_colorspace_list ) +from .actions import ( + SelectInvalidAction, + SelectInstanceNodeAction +) + __all__ = ( "file_extensions", "has_unsaved_changes", @@ -92,5 +97,8 @@ __all__ = ( "create_write_node", "colorspace_exists_on_node", - "get_colorspace_list" + "get_colorspace_list", + + "SelectInvalidAction", + "SelectInstanceNodeAction" ) diff --git a/openpype/hosts/nuke/api/actions.py b/openpype/hosts/nuke/api/actions.py index c955a85acc..995e6427af 100644 --- a/openpype/hosts/nuke/api/actions.py +++ b/openpype/hosts/nuke/api/actions.py @@ -20,33 +20,58 @@ class SelectInvalidAction(pyblish.api.Action): def process(self, context, plugin): - try: - import nuke - except ImportError: - raise ImportError("Current host is not Nuke") - errored_instances = get_errored_instances_from_context(context, plugin=plugin) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") - invalid = list() + invalid = set() for instance in errored_instances: invalid_nodes = plugin.get_invalid(instance) if invalid_nodes: if isinstance(invalid_nodes, (list, tuple)): - invalid.append(invalid_nodes[0]) + 
invalid.update(invalid_nodes)
                 else:
                     self.log.warning("Plug-in returned to be invalid, "
                                      "but has no selectable nodes.")

-        # Ensure unique (process each node only once)
-        invalid = list(set(invalid))
-
         if invalid:
             self.log.info("Selecting invalid nodes: {}".format(invalid))
             reset_selection()
             select_nodes(invalid)
         else:
             self.log.info("No invalid nodes found.")
+
+
+class SelectInstanceNodeAction(pyblish.api.Action):
+    """Select instance node for failed plugin."""
+    label = "Select instance node"
+    on = "failed"  # This action is only available on a failed plug-in
+    icon = "mdi.cursor-default-click"
+
+    def process(self, context, plugin):
+
+        # Get the errored instances for the plug-in
+        errored_instances = get_errored_instances_from_context(
+            context, plugin)
+
+        # Get the invalid nodes for the plug-ins
+        self.log.info("Finding instance nodes..")
+        nodes = set()
+        for instance in errored_instances:
+            instance_node = instance.data.get("transientData", {}).get("node")
+            if not instance_node:
+                raise RuntimeError(
+                    "No transientData['node'] found on instance: {}".format(
+                        instance
+                    )
+                )
+            nodes.add(instance_node)
+
+        if nodes:
+            self.log.info("Selecting instance nodes: {}".format(nodes))
+            reset_selection()
+            select_nodes(nodes)
+        else:
+            self.log.info("No instance nodes found.")
diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py
index 685ab3019d..62f3a3c3ff 100644
--- a/openpype/hosts/nuke/api/lib.py
+++ b/openpype/hosts/nuke/api/lib.py
@@ -42,8 +42,10 @@ from openpype.pipeline.template_data import get_template_data_with_names
 from openpype.pipeline import (
     get_current_project_name,
     discover_legacy_creator_plugins,
-    legacy_io,
     Anatomy,
+    get_current_host_name,
+    get_current_project_name,
+    get_current_asset_name,
 )
 from openpype.pipeline.context_tools import (
     get_current_project_asset,
@@ -422,10 +424,13 @@ def add_publish_knob(node):
     return node


-@deprecated
+@deprecated("openpype.hosts.nuke.api.lib.set_node_data")
 def set_avalon_knob_data(node, data=None, prefix="avalon:"):
     """[DEPRECATED] Sets data into nodes's avalon knob

+    This function is still used but will be removed in future versions.
+    Use `set_node_data` instead.
+
     Arguments:
         node (nuke.Node): Nuke node to imprint with data,
         data (dict, optional): Data to be imprinted into AvalonTab
@@ -485,10 +490,13 @@ def set_avalon_knob_data(node, data=None, prefix="avalon:"):
     return node


-@deprecated
+@deprecated("openpype.hosts.nuke.api.lib.get_node_data")
 def get_avalon_knob_data(node, prefix="avalon:", create=True):
     """[DEPRECATED] Gets a data from nodes's avalon knob

+    This function is still used but will be removed in future versions.
+    Use `get_node_data` instead.
+ Arguments: node (obj): Nuke node to search for data, prefix (str, optional): filtering prefix @@ -553,7 +561,9 @@ def add_write_node_legacy(name, **kwarg): w = nuke.createNode( "Write", - "name {}".format(name)) + "name {}".format(name), + inpanel=False + ) w["file"].setValue(kwarg["file"]) @@ -589,7 +599,9 @@ def add_write_node(name, file_path, knobs, **kwarg): w = nuke.createNode( "Write", - "name {}".format(name)) + "name {}".format(name), + inpanel=False + ) w["file"].setValue(file_path) @@ -966,7 +978,7 @@ def check_inventory_versions(): if not repre_ids: return - project_name = legacy_io.active_project() + project_name = get_current_project_name() # Find representations based on found containers repre_docs = get_representations( project_name, @@ -1124,11 +1136,15 @@ def format_anatomy(data): anatomy = Anatomy() log.debug("__ anatomy.templates: {}".format(anatomy.templates)) - padding = int( - anatomy.templates["render"].get( - "frame_padding" + padding = None + if "frame_padding" in anatomy.templates.keys(): + padding = int(anatomy.templates["frame_padding"]) + elif "render" in anatomy.templates.keys(): + padding = int( + anatomy.templates["render"].get( + "frame_padding" + ) ) - ) version = data.get("version", None) if not version: @@ -1138,7 +1154,7 @@ def format_anatomy(data): project_name = anatomy.project_name asset_name = data["asset"] task_name = data["task"] - host_name = os.environ["AVALON_APP"] + host_name = get_current_host_name() context_data = get_template_data_with_names( project_name, asset_name, task_name, host_name ) @@ -1192,8 +1208,10 @@ def create_prenodes( # create node now_node = nuke.createNode( - nodeclass, "name {}".format(name)) - now_node.hideControlPanel() + nodeclass, + "name {}".format(name), + inpanel=False + ) # add for dependency linking for_dependency[name] = { @@ -1317,12 +1335,17 @@ def create_write_node( input_name = str(input.name()).replace(" ", "") # if connected input node was defined prev_node = nuke.createNode( - "Input", "name {}".format(input_name)) + "Input", + "name {}".format(input_name), + inpanel=False + ) else: # generic input node connected to nothing prev_node = nuke.createNode( - "Input", "name {}".format("rgba")) - prev_node.hideControlPanel() + "Input", + "name {}".format("rgba"), + inpanel=False + ) # creating pre-write nodes `prenodes` last_prenode = create_prenodes( @@ -1342,15 +1365,13 @@ def create_write_node( imageio_writes["knobs"], **data ) - write_node.hideControlPanel() # connect to previous node now_node.setInput(0, prev_node) # switch actual node to previous prev_node = now_node - now_node = nuke.createNode("Output", "name Output1") - now_node.hideControlPanel() + now_node = nuke.createNode("Output", "name Output1", inpanel=False) # connect to previous node now_node.setInput(0, prev_node) @@ -1461,7 +1482,7 @@ def create_write_node_legacy( if knob["name"] == "file_type": representation = knob["value"] - host_name = os.environ.get("AVALON_APP") + host_name = get_current_host_name() try: data.update({ "app": host_name, @@ -1517,8 +1538,10 @@ def create_write_node_legacy( else: # generic input node connected to nothing prev_node = nuke.createNode( - "Input", "name {}".format("rgba")) - prev_node.hideControlPanel() + "Input", + "name {}".format("rgba"), + inpanel=False + ) # creating pre-write nodes `prenodes` if prenodes: for node in prenodes: @@ -1530,8 +1553,10 @@ def create_write_node_legacy( # create node now_node = nuke.createNode( - klass, "name {}".format(pre_node_name)) - now_node.hideControlPanel() + klass, + 
"name {}".format(pre_node_name), + inpanel=False + ) # add data to knob for _knob in knobs: @@ -1561,14 +1586,18 @@ def create_write_node_legacy( if isinstance(dependent, (tuple or list)): for i, node_name in enumerate(dependent): input_node = nuke.createNode( - "Input", "name {}".format(node_name)) - input_node.hideControlPanel() + "Input", + "name {}".format(node_name), + inpanel=False + ) now_node.setInput(1, input_node) elif isinstance(dependent, str): input_node = nuke.createNode( - "Input", "name {}".format(node_name)) - input_node.hideControlPanel() + "Input", + "name {}".format(node_name), + inpanel=False + ) now_node.setInput(0, input_node) else: @@ -1583,15 +1612,13 @@ def create_write_node_legacy( "inside_{}".format(name), **_data ) - write_node.hideControlPanel() # connect to previous node now_node.setInput(0, prev_node) # switch actual node to previous prev_node = now_node - now_node = nuke.createNode("Output", "name Output1") - now_node.hideControlPanel() + now_node = nuke.createNode("Output", "name Output1", inpanel=False) # connect to previous node now_node.setInput(0, prev_node) @@ -1678,7 +1705,7 @@ def create_write_node_legacy( knob_value = float(knob_value) if knob_type == "bool": knob_value = bool(knob_value) - if knob_type in ["2d_vector", "3d_vector"]: + if knob_type in ["2d_vector", "3d_vector", "color", "box"]: knob_value = list(knob_value) GN[knob_name].setValue(knob_value) @@ -1694,7 +1721,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): Args: node (nuke.Node): nuke node knob_settings (list): list of dict. Keys are `type`, `name`, `value` - kwargs (dict)[optional]: keys for formatable knob settings + kwargs (dict)[optional]: keys for formattable knob settings """ for knob in knob_settings: log.debug("__ knob: {}".format(pformat(knob))) @@ -1711,7 +1738,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): ) continue - # first deal with formatable knob settings + # first deal with formattable knob settings if knob_type == "formatable": template = knob["template"] to_type = knob["to_type"] @@ -1720,8 +1747,8 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): **kwargs ) except KeyError as msg: - log.warning("__ msg: {}".format(msg)) - raise KeyError(msg) + raise KeyError( + "Not able to format expression: {}".format(msg)) # convert value to correct type if to_type == "2d_vector": @@ -1760,8 +1787,8 @@ def convert_knob_value_to_correct_type(knob_type, knob_value): knob_value = knob_value elif knob_type == "color_gui": knob_value = color_gui_to_int(knob_value) - elif knob_type in ["2d_vector", "3d_vector", "color"]: - knob_value = [float(v) for v in knob_value] + elif knob_type in ["2d_vector", "3d_vector", "color", "box"]: + knob_value = [float(val_) for val_ in knob_value] return knob_value @@ -1914,15 +1941,18 @@ class WorkfileSettings(object): def __init__(self, root_node=None, nodes=None, **kwargs): project_doc = kwargs.get("project") if project_doc is None: - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) + else: + project_name = project_doc["name"] Context._project_doc = project_doc + self._project_name = project_name self._asset = ( kwargs.get("asset_name") - or legacy_io.Session["AVALON_ASSET"] + or get_current_asset_name() ) - self._asset_entity = get_current_project_asset(self._asset) + self._asset_entity = get_asset_by_name(project_name, self._asset) self._root_node = root_node or nuke.root() self._nodes = 
self.get_nodes(nodes=nodes) @@ -2011,6 +2041,7 @@ class WorkfileSettings(object): ) workfile_settings = imageio_host["workfile"] + viewer_process_settings = imageio_host["viewer"]["viewerProcess"] if not config_data: # TODO: backward compatibility for old projects - remove later @@ -2046,14 +2077,30 @@ class WorkfileSettings(object): str(workfile_settings["OCIO_config"])) else: - # set values to root + # OCIO config path is defined from prelaunch hook self._root_node["colorManagement"].setValue("OCIO") + # print previous settings in case some were found in workfile + residual_path = self._root_node["customOCIOConfigPath"].value() + if residual_path: + log.info("Residual OCIO config path found: `{}`".format( + residual_path + )) + # we dont need the key anymore workfile_settings.pop("customOCIOConfigPath", None) workfile_settings.pop("colorManagement", None) workfile_settings.pop("OCIO_config", None) + # get monitor lut from settings respecting Nuke version differences + monitor_lut = workfile_settings.pop("monitorLut", None) + monitor_lut_data = self._get_monitor_settings( + viewer_process_settings, monitor_lut) + + # set monitor related knobs luts (MonitorOut, Thumbnails) + for knob, value_ in monitor_lut_data.items(): + workfile_settings[knob] = value_ + # then set the rest for knob, value_ in workfile_settings.items(): # skip unfilled ocio config path @@ -2070,9 +2117,70 @@ class WorkfileSettings(object): # set ocio config path if config_data: - current_ocio_path = os.getenv("OCIO") - if current_ocio_path != config_data["path"]: - message = """ + config_path = config_data["path"].replace("\\", "/") + log.info("OCIO config path found: `{}`".format( + config_path)) + + # check if there's a mismatch between environment and settings + correct_settings = self._is_settings_matching_environment( + config_data) + + # if there's no mismatch between environment and settings + if correct_settings: + self._set_ocio_config_path_to_workfile(config_data) + + def _get_monitor_settings(self, viewer_lut, monitor_lut): + """ Get monitor settings from viewer and monitor lut + + Args: + viewer_lut (str): viewer lut string + monitor_lut (str): monitor lut string + + Returns: + dict: monitor settings + """ + output_data = {} + m_display, m_viewer = get_viewer_config_from_string(monitor_lut) + v_display, v_viewer = get_viewer_config_from_string(viewer_lut) + + # set monitor lut differently for nuke version 14 + if nuke.NUKE_VERSION_MAJOR >= 14: + output_data["monitorOutLUT"] = create_viewer_profile_string( + m_viewer, m_display, path_like=False) + # monitorLut=thumbnails - viewerProcess makes more sense + output_data["monitorLut"] = create_viewer_profile_string( + v_viewer, v_display, path_like=False) + + if nuke.NUKE_VERSION_MAJOR == 13: + output_data["monitorOutLUT"] = create_viewer_profile_string( + m_viewer, m_display, path_like=False) + # monitorLut=thumbnails - viewerProcess makes more sense + output_data["monitorLut"] = create_viewer_profile_string( + v_viewer, v_display, path_like=True) + if nuke.NUKE_VERSION_MAJOR <= 12: + output_data["monitorLut"] = create_viewer_profile_string( + m_viewer, m_display, path_like=True) + + return output_data + + def _is_settings_matching_environment(self, config_data): + """ Check if OCIO config path is different from environment + + Args: + config_data (dict): OCIO config data from settings + + Returns: + bool: True if settings are matching environment, False otherwise + """ + current_ocio_path = os.environ["OCIO"] + settings_ocio_path = config_data["path"] + + # 
normalize all paths to forward slashes + current_ocio_path = current_ocio_path.replace("\\", "/") + settings_ocio_path = settings_ocio_path.replace("\\", "/") + + if current_ocio_path != settings_ocio_path: + message = """ It seems like there's a mismatch between the OCIO config path set in your Nuke settings and the actual path set in your OCIO environment. @@ -2090,38 +2198,171 @@ Please note the paths for your reference: Reopening Nuke should synchronize these paths and resolve any discrepancies. """ - nuke.message( - message.format( - env_path=current_ocio_path, - settings_path=config_data["path"] - ) + nuke.message( + message.format( + env_path=current_ocio_path, + settings_path=settings_ocio_path ) + ) + return False + + return True + + def _set_ocio_config_path_to_workfile(self, config_data): + """ Set OCIO config path to workfile + + Path set into nuke workfile. It is trying to replace path with + environment variable if possible. If not, it will set it as it is. + It also saves the script to apply the change, but only if it's not + empty Untitled script. + + Args: + config_data (dict): OCIO config data from settings + + """ + # replace path with env var if possible + ocio_path = self._replace_ocio_path_with_env_var(config_data) + ocio_path = ocio_path.replace("\\", "/") + + log.info("Setting OCIO config path to: `{}`".format( + ocio_path)) + + self._root_node["customOCIOConfigPath"].setValue( + ocio_path + ) + self._root_node["OCIO_config"].setValue("custom") + + # only save script if it's not empty + if self._root_node["name"].value() != "": + log.info("Saving script to apply OCIO config path change.") + nuke.scriptSave() + + def _get_included_vars(self, config_template): + """ Get all environment variables included in template + + Args: + config_template (str): OCIO config template from settings + + Returns: + list: list of environment variables included in template + """ + # resolve all environments for whitelist variables + included_vars = [ + "BUILTIN_OCIO_ROOT", + ] + + # include all project root related env vars + for env_var in os.environ: + if env_var.startswith("OPENPYPE_PROJECT_ROOT_"): + included_vars.append(env_var) + + # use regex to find env var in template with format {ENV_VAR} + # this way we make sure only template used env vars are included + env_var_regex = r"\{([A-Z0-9_]+)\}" + env_var = re.findall(env_var_regex, config_template) + if env_var: + included_vars.append(env_var[0]) + + return included_vars + + def _replace_ocio_path_with_env_var(self, config_data): + """ Replace OCIO config path with environment variable + + Environment variable is added as TCL expression to path. TCL expression + is also replacing backward slashes found in path for windows + formatted values. 
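For illustration, with hypothetical values (invented for this example), the substitution described above turns an absolute config path into a TCL expression that Nuke resolves when the script loads:

```python
# Hypothetical input/output of _replace_ocio_path_with_env_var():
#   config path:  "P:/projects/ocio/config.ocio"
#   environment:  OPENPYPE_PROJECT_ROOT_WORK = "P:/projects"
# Resulting workfile value:
#   '[regsub -all {\\} [getenv OPENPYPE_PROJECT_ROOT_WORK] "/"]/ocio/config.ocio'
```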
+ + Args: + config_data (str): OCIO config dict from settings + + Returns: + str: OCIO config path with environment variable TCL expression + """ + config_path = config_data["path"].replace("\\", "/") + config_template = config_data["template"] + + included_vars = self._get_included_vars(config_template) + + # make sure we return original path if no env var is included + new_path = config_path + + for env_var in included_vars: + env_path = os.getenv(env_var) + if not env_path: + continue + + # it has to be directory current process can see + if not os.path.isdir(env_path): + continue + + # make sure paths are in same format + env_path = env_path.replace("\\", "/") + path = config_path.replace("\\", "/") + + # check if env_path is in path and replace to first found positive + if env_path in path: + # with regsub we make sure path format of slashes is correct + resub_expr = ( + "[regsub -all {{\\\\}} [getenv {}] \"/\"]").format(env_var) + + new_path = path.replace( + env_path, resub_expr + ) + break + + return new_path def set_writes_colorspace(self): ''' Adds correct colorspace to write node dict ''' - for node in nuke.allNodes(filter="Group"): + for node in nuke.allNodes(filter="Group", group=self._root_node): + log.info("Setting colorspace to `{}`".format(node.name())) # get data from avalon knob avalon_knob_data = read_avalon_data(node) + node_data = get_node_data(node, INSTANCE_DATA_KNOB) - if avalon_knob_data.get("id") != "pyblish.avalon.instance": + if ( + # backward compatibility + # TODO: remove this once old avalon data api will be removed + avalon_knob_data + and avalon_knob_data.get("id") != "pyblish.avalon.instance" + ): + continue + elif ( + node_data + and node_data.get("id") != "pyblish.avalon.instance" + ): continue - if "creator" not in avalon_knob_data: + if ( + # backward compatibility + # TODO: remove this once old avalon data api will be removed + avalon_knob_data + and "creator" not in avalon_knob_data + ): + continue + elif ( + node_data + and "creator_identifier" not in node_data + ): continue - # establish families - families = [avalon_knob_data["family"]] - if avalon_knob_data.get("families"): - families.append(avalon_knob_data.get("families")) + nuke_imageio_writes = None + if avalon_knob_data: + # establish families + families = [avalon_knob_data["family"]] + if avalon_knob_data.get("families"): + families.append(avalon_knob_data.get("families")) - nuke_imageio_writes = get_imageio_node_setting( - node_class=avalon_knob_data["families"], - plugin_name=avalon_knob_data["creator"], - subset=avalon_knob_data["subset"] - ) + nuke_imageio_writes = get_imageio_node_setting( + node_class=avalon_knob_data["families"], + plugin_name=avalon_knob_data["creator"], + subset=avalon_knob_data["subset"] + ) + elif node_data: + nuke_imageio_writes = get_write_node_template_attr(node) log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes)) @@ -2180,7 +2421,6 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. continue preset_clrsp = input["colorspace"] - log.debug(preset_clrsp) if preset_clrsp is not None: current = n["colorspace"].value() future = str(preset_clrsp) @@ -2210,7 +2450,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. 
knobs["to"])) def set_colorspace(self): - ''' Setting colorpace following presets + ''' Setting colorspace following presets ''' # get imageio nuke_colorspace = get_nuke_imageio_settings() @@ -2218,17 +2458,16 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. log.info("Setting colorspace to workfile...") try: self.set_root_colorspace(nuke_colorspace) - except AttributeError: - msg = "set_colorspace(): missing `workfile` settings in template" + except AttributeError as _error: + msg = "Set Colorspace to workfile error: {}".format(_error) nuke.message(msg) log.info("Setting colorspace to viewers...") try: self.set_viewers_colorspace(nuke_colorspace["viewer"]) - except AttributeError: - msg = "set_colorspace(): missing `viewer` settings in template" + except AttributeError as _error: + msg = "Set Colorspace to viewer error: {}".format(_error) nuke.message(msg) - log.error(msg) log.info("Setting colorspace to write nodes...") try: @@ -2315,7 +2554,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. def reset_resolution(self): """Set resolution to project resolution.""" log.info("Resetting resolution") - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_data = self._asset_entity["data"] format_data = { @@ -2394,7 +2633,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. from .utils import set_context_favorites work_dir = os.getenv("AVALON_WORKDIR") - asset = os.getenv("AVALON_ASSET") + asset = get_current_asset_name() favorite_items = OrderedDict() # project @@ -2594,9 +2833,10 @@ def select_nodes(nodes): """Selects all inputted nodes Arguments: - nodes (list): nuke nodes to be selected + nodes (Union[list, tuple, set]): nuke nodes to be selected """ - assert isinstance(nodes, (list, tuple)), "nodes has to be list or tuple" + assert isinstance(nodes, (list, tuple, set)), \ + "nodes has to be list, tuple or set" for node in nodes: node["selected"].setValue(True) @@ -2662,7 +2902,15 @@ def _launch_workfile_app(): host_tools.show_workfiles(parent=None, on_top=True) +@deprecated("openpype.hosts.nuke.api.lib.start_workfile_template_builder") def process_workfile_builder(): + """ [DEPRECATED] Process workfile builder on nuke start + + This function is deprecated and will be removed in future versions. + Use settings for `project_settings/nuke/templated_workfile_build` which are + supported by api `start_workfile_template_builder()`. + """ + # to avoid looping of the callback, remove it! 
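# (Sketch, not part of the patch: the supported replacement registers the
# template builder on Root creation, as the pipeline.py hunk below does via
# nuke.addOnCreate(start_workfile_template_builder, nodeClass="Root"), and is
# configured through project_settings/nuke/templated_workfile_build.)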
nuke.removeOnCreate(process_workfile_builder, nodeClass="Root") @@ -2671,11 +2919,6 @@ def process_workfile_builder(): workfile_builder = project_settings["nuke"].get( "workfile_builder", {}) - # get all imortant settings - openlv_on = env_value_to_bool( - env_key="AVALON_OPEN_LAST_WORKFILE", - default=None) - # get settings createfv_on = workfile_builder.get("create_first_version") or None builder_on = workfile_builder.get("builder_on_start") or None @@ -2716,20 +2959,15 @@ def process_workfile_builder(): save_file(last_workfile_path) return - # skip opening of last version if it is not enabled - if not openlv_on or not os.path.exists(last_workfile_path): - return - - log.info("Opening last workfile...") - # open workfile - open_file(last_workfile_path) - def start_workfile_template_builder(): from .workfile_template_builder import ( build_workfile_template ) + # remove callback since it would be duplicating the workfile + nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") + # to avoid looping of the callback, remove it! log.info("Starting workfile template builder...") try: @@ -2737,8 +2975,6 @@ def start_workfile_template_builder(): except TemplateProfileNotFound: log.warning("Template profile not found. Skipping...") - # remove callback since it would be duplicating the workfile - nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") @deprecated def recreate_instance(origin_node, avalon_data=None): @@ -2817,7 +3053,8 @@ def add_scripts_menu(): return # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) config = project_settings["nuke"]["scriptsmenu"]["definition"] _menu = project_settings["nuke"]["scriptsmenu"]["name"] @@ -2835,7 +3072,8 @@ def add_scripts_menu(): def add_scripts_gizmo(): # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) platform_name = platform.system().lower() for gizmo_settings in project_settings["nuke"]["gizmo"]: @@ -2928,6 +3166,7 @@ class DirmapCache: """Caching class to get settings and sync_module easily and only once.""" _project_name = None _project_settings = None + _sync_module_discovered = False _sync_module = None _mapping = None @@ -2945,8 +3184,10 @@ class DirmapCache: @classmethod def sync_module(cls): - if cls._sync_module is None: - cls._sync_module = ModulesManager().modules_by_name["sync_server"] + if not cls._sync_module_discovered: + cls._sync_module_discovered = True + cls._sync_module = ModulesManager().modules_by_name.get( + "sync_server") return cls._sync_module @classmethod @@ -3152,11 +3393,11 @@ def get_viewer_config_from_string(input_string): display = split[0] elif "(" in viewer: pattern = r"([\w\d\s\.\-]+).*[(](.*)[)]" - result = re.findall(pattern, viewer) + result_ = re.findall(pattern, viewer) try: - result = result.pop() - display = str(result[1]).rstrip() - viewer = str(result[0]).rstrip() + result_ = result_.pop() + display = str(result_[1]).rstrip() + viewer = str(result_[0]).rstrip() except IndexError: raise IndexError(( "Viewer Input string is not correct. 
" @@ -3164,3 +3405,46 @@ def get_viewer_config_from_string(input_string): ).format(input_string)) return (display, viewer) + + +def create_viewer_profile_string(viewer, display=None, path_like=False): + """Convert viewer and display to string + + Args: + viewer (str): viewer name + display (Optional[str]): display name + path_like (Optional[bool]): if True, return path like string + + Returns: + str: viewer config string + """ + if not display: + return viewer + + if path_like: + return "{}/{}".format(display, viewer) + return "{} ({})".format(viewer, display) + + +def get_filenames_without_hash(filename, frame_start, frame_end): + """Get filenames without frame hash + i.e. "renderCompositingMain.baking.0001.exr" + + Args: + filename (str): filename with frame hash + frame_start (str): start of the frame + frame_end (str): end of the frame + + Returns: + list: filename per frame of the sequence + """ + filenames = [] + for frame in range(int(frame_start), (int(frame_end) + 1)): + if "#" in filename: + # use regex to convert #### to {:0>4} + def replace(match): + return "{{:0>{}}}".format(len(match.group())) + filename_without_hashes = re.sub("#+", replace, filename) + new_filename = filename_without_hashes.format(frame) + filenames.append(new_filename) + return filenames diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index 8406a251e9..a1d290646c 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -2,7 +2,7 @@ import nuke import os import importlib -from collections import OrderedDict +from collections import OrderedDict, defaultdict import pyblish.api @@ -20,6 +20,8 @@ from openpype.pipeline import ( register_creator_plugin_path, register_inventory_action_path, AVALON_CONTAINER_ID, + get_current_asset_name, + get_current_task_name, ) from openpype.pipeline.workfile import BuildWorkfile from openpype.tools.utils import host_tools @@ -32,6 +34,7 @@ from .lib import ( get_main_window, add_publish_knob, WorkfileSettings, + # TODO: remove this once workfile builder will be removed process_workfile_builder, start_workfile_template_builder, launch_workfiles_app, @@ -153,11 +156,18 @@ def add_nuke_callbacks(): """ nuke_settings = get_current_project_settings()["nuke"] workfile_settings = WorkfileSettings() + # Set context settings. nuke.addOnCreate( workfile_settings.set_context_settings, nodeClass="Root") + + # adding favorites to file browser nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root") + + # template builder callbacks nuke.addOnCreate(start_workfile_template_builder, nodeClass="Root") + + # TODO: remove this callback once workfile builder will be removed nuke.addOnCreate(process_workfile_builder, nodeClass="Root") # fix ffmpeg settings on script @@ -167,11 +177,12 @@ def add_nuke_callbacks(): nuke.addOnScriptLoad(check_inventory_versions) nuke.addOnScriptSave(check_inventory_versions) - # # set apply all workfile settings on script load and save + # set apply all workfile settings on script load and save nuke.addOnScriptLoad(WorkfileSettings().set_context_settings) + if nuke_settings["nuke-dirmap"]["enabled"]: - log.info("Added Nuke's dirmaping callback ...") + log.info("Added Nuke's dir-mapping callback ...") # Add dirmap for file paths. 
nuke.addFilenameFilter(dirmap_file_name_filter) @@ -211,6 +222,13 @@ def _show_workfiles(): host_tools.show_workfiles(parent=None, on_top=False) +def get_context_label(): + return "{0}, {1}".format( + get_current_asset_name(), + get_current_task_name() + ) + + def _install_menu(): """Install Avalon menu into Nuke's main menu bar.""" @@ -220,9 +238,7 @@ def _install_menu(): menu = menubar.addMenu(MENU_LABEL) if not ASSIST: - label = "{0}, {1}".format( - os.environ["AVALON_ASSET"], os.environ["AVALON_TASK"] - ) + label = get_context_label() Context.context_label = label context_action = menu.addCommand(label) context_action.setEnabled(False) @@ -338,9 +354,7 @@ def change_context_label(): menubar = nuke.menu("Nuke") menu = menubar.findItem(MENU_LABEL) - label = "{0}, {1}".format( - os.environ["AVALON_ASSET"], os.environ["AVALON_TASK"] - ) + label = get_context_label() rm_item = [ (i, item) for i, item in enumerate(menu.items()) @@ -529,10 +543,16 @@ def list_instances(creator_id=None): For SubsetManager + Args: + creator_id (Optional[str]): creator identifier + Returns: (list) of dictionaries matching instances format """ - listed_instances = [] + instances_by_order = defaultdict(list) + subset_instances = [] + instance_ids = set() + for node in nuke.allNodes(recurseGroups=True): if node.Class() in ["Viewer", "Dot"]: @@ -558,9 +578,60 @@ def list_instances(creator_id=None): if creator_id and instance_data["creator_identifier"] != creator_id: continue - listed_instances.append((node, instance_data)) + instance_id = instance_data.get("instance_id") + if not instance_id: + pass + elif instance_id in instance_ids: + instance_data.pop("instance_id") + else: + instance_ids.add(instance_id) - return listed_instances + # node name could change, so update subset name data + _update_subset_name_data(instance_data, node) + + if "render_order" not in node.knobs(): + subset_instances.append((node, instance_data)) + continue + + order = int(node["render_order"].value()) + instances_by_order[order].append((node, instance_data)) + + # Sort instances based on order attribute or subset name. + # TODO: remove in future Publisher enhanced with sorting + ordered_instances = [] + for key in sorted(instances_by_order.keys()): + instances_by_subset = defaultdict(list) + for node, data_ in instances_by_order[key]: + instances_by_subset[data_["subset"]].append((node, data_)) + for subkey in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[subkey]) + + instances_by_subset = defaultdict(list) + for node, data_ in subset_instances: + instances_by_subset[data_["subset"]].append((node, data_)) + for key in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[key]) + + return ordered_instances + + +def _update_subset_name_data(instance_data, node): + """Update subset name data in instance data. 
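A hypothetical walk-through of this rename handling (subset and variant names invented for the example):

```python
# Instance data before: subset "renderCompositingMain", variant "Main".
# The user renames the node to "renderCompositingBeauty".
subset_name_root = "renderCompositingMain".replace("Main", "")  # "renderCompositing"
new_subset_name = "renderCompositingBeauty"                     # node.name()
new_variant = new_subset_name.replace(subset_name_root, "")     # "Beauty"
```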
+ + Args: + instance_data (dict): instance creator data + node (nuke.Node): nuke node + """ + # make sure node name is subset name + old_subset_name = instance_data["subset"] + old_variant = instance_data["variant"] + subset_name_root = old_subset_name.replace(old_variant, "") + + new_subset_name = node.name() + new_variant = new_subset_name.replace(subset_name_root, "") + + instance_data["subset"] = new_subset_name + instance_data["variant"] = new_variant def remove_instance(instance): diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index 7035da2bb5..c39e3c339d 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -19,7 +19,10 @@ from openpype.pipeline import ( CreatorError, Creator as NewCreator, CreatedInstance, - legacy_io + get_current_task_name +) +from openpype.lib.transcoding import ( + VIDEO_EXTENSIONS ) from .lib import ( INSTANCE_DATA_KNOB, @@ -35,7 +38,8 @@ from .lib import ( get_node_data, get_view_process_node, get_viewer_config_from_string, - deprecated + deprecated, + get_filenames_without_hash ) from .pipeline import ( list_instances, @@ -212,9 +216,15 @@ class NukeCreator(NewCreator): created_instance["creator_attributes"].pop(key) def update_instances(self, update_list): - for created_inst, _changes in update_list: + for created_inst, changes in update_list: instance_node = created_inst.transient_data["node"] + # update instance node name if subset name changed + if "subset" in changes.changed_keys: + instance_node["name"].setValue( + changes["subset"].new_value + ) + # in case node is not existing anymore (user erased it manually) try: instance_node.fullName() @@ -256,6 +266,17 @@ class NukeWriteCreator(NukeCreator): family = "write" icon = "sign-out" + def get_linked_knobs(self): + linked_knobs = [] + if "channels" in self.instance_attributes: + linked_knobs.append("channels") + if "ordered" in self.instance_attributes: + linked_knobs.append("render_order") + if "use_range_limit" in self.instance_attributes: + linked_knobs.extend(["___", "first", "last", "use_limit"]) + + return linked_knobs + def integrate_links(self, node, outputs=True): # skip if no selection if not self.selected_node: @@ -310,6 +331,7 @@ class NukeWriteCreator(NukeCreator): "frames": "Use existing frames" } if ("farm_rendering" in self.instance_attributes): + rendering_targets["frames_farm"] = "Use existing frames - farm" rendering_targets["farm"] = "Farm rendering" return EnumDef( @@ -361,11 +383,7 @@ class NukeWriteCreator(NukeCreator): sys.exc_info()[2] ) - def apply_settings( - self, - project_settings, - system_settings - ): + def apply_settings(self, project_settings): """Method called on initialization of plugin to apply settings.""" # plugin settings @@ -566,18 +584,25 @@ class ExporterReview(object): def get_file_info(self): if self.collection: # get path - self.fname = os.path.basename(self.collection.format( - "{head}{padding}{tail}")) + self.fname = os.path.basename( + self.collection.format("{head}{padding}{tail}") + ) self.fhead = self.collection.format("{head}") # get first and last frame self.first_frame = min(self.collection.indexes) self.last_frame = max(self.collection.indexes) + + # make sure slate frame is not included + frame_start_handle = self.instance.data["frameStartHandle"] + if frame_start_handle > self.first_frame: + self.first_frame = frame_start_handle + else: self.fname = os.path.basename(self.path_in) self.fhead = os.path.splitext(self.fname)[0] + "." 
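# (Sketch notes, frame values invented, not part of the patch:
#  - In the collection branch above, a slate makes the rendered sequence start
#    one frame early, e.g. frames 1000-1010 with frameStartHandle == 1001, so
#    clamping first_frame to frameStartHandle keeps the slate frame out of the
#    baked review range.
#  - For non-video outputs the representation "files" list is expanded per
#    frame below, e.g. get_filenames_without_hash(
#        "renderCompositingMain.baking.####.exr", 1001, 1003) returns
#    ["renderCompositingMain.baking.1001.exr",
#     "renderCompositingMain.baking.1002.exr",
#     "renderCompositingMain.baking.1003.exr"].)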
- self.first_frame = self.instance.data.get("frameStartHandle", None) - self.last_frame = self.instance.data.get("frameEndHandle", None) + self.first_frame = self.instance.data["frameStartHandle"] + self.last_frame = self.instance.data["frameEndHandle"] if "#" in self.fhead: self.fhead = self.fhead.replace("#", "")[:-1] @@ -613,6 +638,10 @@ class ExporterReview(object): "frameStart": self.first_frame, "frameEnd": self.last_frame, }) + if ".{}".format(self.ext) not in VIDEO_EXTENSIONS: + filenames = get_filenames_without_hash( + self.file, self.first_frame, self.last_frame) + repre["files"] = filenames if self.multiple_presets: repre["outputName"] = self.name @@ -786,7 +815,20 @@ class ExporterReviewMov(ExporterReview): self.log.info("File info was set...") - self.file = self.fhead + self.name + ".{}".format(self.ext) + if ".{}".format(self.ext) in VIDEO_EXTENSIONS: + self.file = "{}{}.{}".format( + self.fhead, self.name, self.ext) + else: + # Output is image (or image sequence) + # When the file is an image it's possible it + # has extra information after the `fhead` that + # we want to preserve, e.g. like frame numbers + # or frames hashes like `####` + filename_no_ext = os.path.splitext( + os.path.basename(self.path_in))[0] + after_head = filename_no_ext[len(self.fhead):] + self.file = "{}{}.{}.{}".format( + self.fhead, self.name, after_head, self.ext) self.path = os.path.join( self.staging_dir, self.file).replace("\\", "/") @@ -824,41 +866,6 @@ class ExporterReviewMov(ExporterReview): add_tags = [] self.publish_on_farm = farm read_raw = kwargs["read_raw"] - - # TODO: remove this when `reformat_nodes_config` - # is changed in settings - reformat_node_add = kwargs["reformat_node_add"] - reformat_node_config = kwargs["reformat_node_config"] - - # TODO: make this required in future - reformat_nodes_config = kwargs.get("reformat_nodes_config", {}) - - # TODO: remove this once deprecated is removed - # make sure only reformat_nodes_config is used in future - if reformat_node_add and reformat_nodes_config.get("enabled"): - self.log.warning( - "`reformat_node_add` is deprecated. " - "Please use only `reformat_nodes_config` instead.") - reformat_nodes_config = None - - # TODO: reformat code when backward compatibility is not needed - # warning if reformat_nodes_config is not set - if not reformat_nodes_config: - self.log.warning( - "Please set `reformat_nodes_config` in settings. " - "Using `reformat_node_config` instead." - ) - reformat_nodes_config = { - "enabled": reformat_node_add, - "reposition_nodes": [ - { - "node_class": "Reformat", - "knobs": reformat_node_config - } - ] - } - - bake_viewer_process = kwargs["bake_viewer_process"] bake_viewer_input_process_node = kwargs[ "bake_viewer_input_process"] @@ -890,6 +897,11 @@ class ExporterReviewMov(ExporterReview): r_node["origlast"].setValue(self.last_frame) r_node["colorspace"].setValue(self.write_colorspace) + # do not rely on defaults, set explicitly + # to be sure it is set correctly + r_node["frame_mode"].setValue("expression") + r_node["frame"].setValue("") + if read_raw: r_node["raw"].setValue(1) @@ -897,6 +909,7 @@ class ExporterReviewMov(ExporterReview): self._shift_to_previous_node_and_temp(subset, r_node, "Read... 
`{}`") # add reformat node + reformat_nodes_config = kwargs["reformat_nodes_config"] if reformat_nodes_config["enabled"]: reposition_nodes = reformat_nodes_config["reposition_nodes"] for reposition_node in reposition_nodes: @@ -941,7 +954,6 @@ class ExporterReviewMov(ExporterReview): self.log.debug("Path: {}".format(self.path)) write_node["file"].setValue(str(self.path)) write_node["file_type"].setValue(str(self.ext)) - # Knobs `meta_codec` and `mov64_codec` are not available on centos. # TODO shouldn't this come from settings on outputs? try: @@ -955,7 +967,11 @@ class ExporterReviewMov(ExporterReview): except Exception: self.log.info("`mov64_codec` knob was not found") - write_node["mov64_write_timecode"].setValue(1) + try: + write_node["mov64_write_timecode"].setValue(1) + except Exception: + self.log.info("`mov64_write_timecode` knob was not found") + write_node["raw"].setValue(1) # connect write_node.setInput(0, self.previous_node) @@ -1173,7 +1189,7 @@ def convert_to_valid_instaces(): from openpype.hosts.nuke.api import workio - task_name = legacy_io.Session["AVALON_TASK"] + task_name = get_current_task_name() # save into new workfile current_file = workio.current_file() diff --git a/openpype/hosts/nuke/api/workfile_template_builder.py b/openpype/hosts/nuke/api/workfile_template_builder.py index 2384e8eca1..9d7604c58d 100644 --- a/openpype/hosts/nuke/api/workfile_template_builder.py +++ b/openpype/hosts/nuke/api/workfile_template_builder.py @@ -25,7 +25,8 @@ from .lib import ( select_nodes, duplicate_node, node_tempfile, - get_main_window + get_main_window, + WorkfileSettings, ) PLACEHOLDER_SET = "PLACEHOLDERS_SET" @@ -113,6 +114,11 @@ class NukePlaceholderPlugin(PlaceholderPlugin): placeholder_data[key] = value return placeholder_data + def delete_placeholder(self, placeholder): + """Remove placeholder if building was successful""" + placeholder_node = nuke.toNode(placeholder.scene_identifier) + nuke.delete(placeholder_node) + class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): identifier = "nuke.load" @@ -275,14 +281,6 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to root group nuke.root().begin() @@ -689,14 +687,6 @@ class NukePlaceholderCreatePlugin( placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to root group nuke.root().begin() @@ -955,6 +945,9 @@ def build_workfile_template(*args, **kwargs): builder = NukeTemplateBuilder(registered_host()) builder.build_template(*args, **kwargs) + # set all settings to shot context default + WorkfileSettings().set_context_settings() + def update_workfile_template(*args): builder = NukeTemplateBuilder(registered_host()) diff --git a/openpype/hosts/nuke/api/workio.py b/openpype/hosts/nuke/api/workio.py index 8d29e0441f..98e59eff71 100644 --- a/openpype/hosts/nuke/api/workio.py +++ b/openpype/hosts/nuke/api/workio.py @@ -1,6 +1,7 @@ """Host API required Work Files tool""" import os import nuke +import shutil from .utils import 
is_headless @@ -21,21 +22,37 @@ def save_file(filepath): def open_file(filepath): + + def read_script(nuke_script): + nuke.scriptClear() + nuke.scriptReadFile(nuke_script) + nuke.Root()["name"].setValue(nuke_script) + nuke.Root()["project_directory"].setValue(os.path.dirname(nuke_script)) + nuke.Root().setModified(False) + filepath = filepath.replace("\\", "/") # To remain in the same window, we have to clear the script and read # in the contents of the workfile. - nuke.scriptClear() + # Nuke Preferences can be read after the script is read. + read_script(filepath) + if not is_headless(): autosave = nuke.toNode("preferences")["AutoSaveName"].evaluate() - autosave_prmpt = "Autosave detected.\nWould you like to load the autosave file?" # noqa + autosave_prmpt = "Autosave detected.\n" \ + "Would you like to load the autosave file?" # noqa if os.path.isfile(autosave) and nuke.ask(autosave_prmpt): - filepath = autosave + try: + # Overwrite the filepath with autosave + shutil.copy(autosave, filepath) + # Now read the (auto-saved) script again + read_script(filepath) + except shutil.Error as err: + nuke.message( + "Detected autosave file could not be used.\n{}" + + .format(err)) - nuke.scriptReadFile(filepath) - nuke.Root()["name"].setValue(filepath) - nuke.Root()["project_directory"].setValue(os.path.dirname(filepath)) - nuke.Root().setModified(False) return True diff --git a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py index 3948a665c6..657291ec51 100644 --- a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py +++ b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py @@ -1,11 +1,12 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook class PrelaunchNukeAssistHook(PreLaunchHook): """ Adding flag when nukeassist """ - app_groups = ["nukeassist"] + app_groups = {"nukeassist"} + launch_types = set() def execute(self): self.launch_context.env["NUKEASSIST"] = "1" diff --git a/openpype/hosts/nuke/plugins/create/create_write_image.py b/openpype/hosts/nuke/plugins/create/create_write_image.py index 0c8adfb75c..8c18739587 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_image.py +++ b/openpype/hosts/nuke/plugins/create/create_write_image.py @@ -64,9 +64,6 @@ class CreateWriteImage(napi.NukeWriteCreator): ) def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] # add fpath_template write_data = { @@ -81,7 +78,7 @@ class CreateWriteImage(napi.NukeWriteCreator): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "frame": nuke.frame() } diff --git a/openpype/hosts/nuke/plugins/create/create_write_prerender.py b/openpype/hosts/nuke/plugins/create/create_write_prerender.py index f46dd2d6d5..395c3b002f 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_prerender.py +++ b/openpype/hosts/nuke/plugins/create/create_write_prerender.py @@ -30,6 +30,9 @@ class CreateWritePrerender(napi.NukeWriteCreator): temp_rendering_path_template = ( "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}") + # Before write node render. 
+ order = 90 + def get_pre_create_attr_defs(self): attr_defs = [ BoolDef( @@ -42,10 +45,6 @@ class CreateWritePrerender(napi.NukeWriteCreator): return attr_defs def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] - # add fpath_template write_data = { "creator": self.__class__.__name__, @@ -68,7 +67,7 @@ class CreateWritePrerender(napi.NukeWriteCreator): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/create/create_write_render.py b/openpype/hosts/nuke/plugins/create/create_write_render.py index c24405873a..91acf4eabc 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_render.py +++ b/openpype/hosts/nuke/plugins/create/create_write_render.py @@ -56,11 +56,15 @@ class CreateWriteRender(napi.NukeWriteCreator): actual_format = nuke.root().knob('format').value() width, height = (actual_format.width(), actual_format.height()) + self.log.debug(">>>>>>> : {}".format(self.instance_attributes)) + self.log.debug(">>>>>>> : {}".format(self.get_linked_knobs())) + created_node = napi.create_write_node( subset_name, write_data, input=self.selected_node, prenodes=self.prenodes, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/create/workfile_creator.py b/openpype/hosts/nuke/plugins/create/workfile_creator.py index 72ef61e63f..c4e0753abc 100644 --- a/openpype/hosts/nuke/plugins/create/workfile_creator.py +++ b/openpype/hosts/nuke/plugins/create/workfile_creator.py @@ -3,7 +3,6 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, - legacy_io, ) from openpype.hosts.nuke.api import ( INSTANCE_DATA_KNOB, @@ -27,10 +26,10 @@ class WorkfileCreator(AutoCreator): root_node, api.INSTANCE_DATA_KNOB ) - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] + project_name = self.create_context.get_current_project_name() + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( diff --git a/openpype/hosts/nuke/plugins/load/actions.py b/openpype/hosts/nuke/plugins/load/actions.py index 3227a7ed98..635318f53d 100644 --- a/openpype/hosts/nuke/plugins/load/actions.py +++ b/openpype/hosts/nuke/plugins/load/actions.py @@ -17,7 +17,7 @@ class SetFrameRangeLoader(load.LoaderPlugin): "yeticache", "pointcache"] representations = ["*"] - extension = {"*"} + extensions = {"*"} label = "Set frame range" order = 11 diff --git a/openpype/hosts/nuke/plugins/load/load_backdrop.py b/openpype/hosts/nuke/plugins/load/load_backdrop.py index 67c7877e60..0cbd380697 100644 --- a/openpype/hosts/nuke/plugins/load/load_backdrop.py +++ b/openpype/hosts/nuke/plugins/load/load_backdrop.py @@ -6,8 +6,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -27,7 +27,7 @@ class LoadBackdropNodes(load.LoaderPlugin): families 
= ["workfile", "nukenodes"] representations = ["*"] - extension = {"nk"} + extensions = {"nk"} label = "Import Nuke Nodes" order = 0 @@ -72,7 +72,7 @@ class LoadBackdropNodes(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -190,7 +190,7 @@ class LoadBackdropNodes(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_camera_abc.py b/openpype/hosts/nuke/plugins/load/load_camera_abc.py index 11cc63d25c..e245b0cb5e 100644 --- a/openpype/hosts/nuke/plugins/load/load_camera_abc.py +++ b/openpype/hosts/nuke/plugins/load/load_camera_abc.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import ( @@ -26,7 +26,7 @@ class AlembicCameraLoader(load.LoaderPlugin): families = ["camera"] representations = ["*"] - extension = {"abc"} + extensions = {"abc"} label = "Load Alembic Camera" icon = "camera" @@ -57,7 +57,7 @@ class AlembicCameraLoader(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") with maintained_selection(): camera_node = nuke.createNode( @@ -66,8 +66,6 @@ class AlembicCameraLoader(load.LoaderPlugin): object_name, file), inpanel=False ) - # hide property panel - camera_node.hideControlPanel() camera_node.forceValidate() camera_node["frame_rate"].setValue(float(fps)) @@ -110,12 +108,10 @@ class AlembicCameraLoader(load.LoaderPlugin): None """ # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) object_name = container['objectName'] - # get corresponding node - camera_node = nuke.toNode(object_name) # get main variables version_data = version_doc.get("data", {}) @@ -182,7 +178,7 @@ class AlembicCameraLoader(load.LoaderPlugin): """ Coloring a node by correct color by actual version """ # get all versions in list - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] ) diff --git a/openpype/hosts/nuke/plugins/load/load_clip.py b/openpype/hosts/nuke/plugins/load/load_clip.py index cb3da79ef5..19038b168d 100644 --- a/openpype/hosts/nuke/plugins/load/load_clip.py +++ b/openpype/hosts/nuke/plugins/load/load_clip.py @@ -8,7 +8,7 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -91,15 +91,16 @@ class LoadClip(plugin.NukeLoader): # reset container id so it is always unique for each instance self.reset_container_id() - self.log.warning(self.extensions) - is_sequence = len(representation["files"]) > 1 if is_sequence: - representation = self._representation_with_hash_in_frame( - representation + context["representation"] = \ + 
self._representation_with_hash_in_frame( + representation ) - filepath = get_representation_path(representation).replace("\\", "/") + + filepath = self.filepath_from_context(context) + filepath = filepath.replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) start_at_workfile = options.get( @@ -144,10 +145,9 @@ class LoadClip(plugin.NukeLoader): # Create the Loader with the filename path set read_node = nuke.createNode( "Read", - "name {}".format(read_name)) - - # hide property panel - read_node.hideControlPanel() + "name {}".format(read_name), + inpanel=False + ) # to avoid multiple undo steps for rest of process # we will switch off undo-ing @@ -155,7 +155,7 @@ class LoadClip(plugin.NukeLoader): read_node["file"].setValue(filepath) used_colorspace = self._set_colorspace( - read_node, version_data, representation["data"]) + read_node, version_data, representation["data"], filepath) self._set_range_to_node(read_node, first, last, start_at_workfile) @@ -165,8 +165,8 @@ class LoadClip(plugin.NukeLoader): "handleStart", "handleEnd"] data_imprint = {} - for k in add_keys: - if k == 'version': + for key in add_keys: + if key == 'version': version_doc = context["version"] if version_doc["type"] == "hero_version": version = "hero" @@ -174,17 +174,20 @@ class LoadClip(plugin.NukeLoader): version = version_doc.get("name") if version: - data_imprint[k] = version + data_imprint[key] = version - elif k == 'colorspace': - colorspace = representation["data"].get(k) - colorspace = colorspace or version_data.get(k) + elif key == 'colorspace': + colorspace = representation["data"].get(key) + colorspace = colorspace or version_data.get(key) data_imprint["db_colorspace"] = colorspace if used_colorspace: data_imprint["used_colorspace"] = used_colorspace else: - data_imprint[k] = context["version"]['data'].get( - k, str(None)) + value_ = context["version"]['data'].get( + key, str(None)) + if isinstance(value_, (str)): + value_ = value_.replace("\\", "/") + data_imprint[key] = value_ data_imprint["objectName"] = read_name @@ -257,6 +260,7 @@ class LoadClip(plugin.NukeLoader): representation = self._representation_with_hash_in_frame( representation ) + filepath = get_representation_path(representation).replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) @@ -267,7 +271,7 @@ class LoadClip(plugin.NukeLoader): if "addRetime" in key ] - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc.get("data", {}) @@ -304,8 +308,7 @@ class LoadClip(plugin.NukeLoader): # we will switch off undo-ing with viewer_update_and_undo_stop(): used_colorspace = self._set_colorspace( - read_node, version_data, representation["data"], - path=filepath) + read_node, version_data, representation["data"], filepath) self._set_range_to_node(read_node, first, last, start_at_workfile) @@ -452,9 +455,9 @@ class LoadClip(plugin.NukeLoader): return self.node_name_template.format(**name_data) - def _set_colorspace(self, node, version_data, repre_data, path=None): + def _set_colorspace(self, node, version_data, repre_data, path): output_color = None - path = path or self.fname.replace("\\", "/") + path = path.replace("\\", "/") # get colorspace colorspace = repre_data.get("colorspace") colorspace = colorspace or version_data.get("colorspace") diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py index d49f87a094..cacc00854e 100644 
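The load_clip changes above complete the loader pattern this PR applies across every Nuke loader below: the misspelled singular `extension` class attribute becomes the plural `extensions` that the loader filtering reads, and file paths are resolved through `filepath_from_context()` instead of the deprecated `self.fname`. A minimal sketch of the resulting pattern; the class name, family and extension values here are hypothetical:

```python
import nuke

from openpype.pipeline import load


class ExampleAbcLoader(load.LoaderPlugin):
    """Hypothetical loader following the conventions applied in this PR."""

    families = ["example"]
    representations = ["*"]
    # plural ``extensions`` is the attribute name the loader filtering
    # reads; the singular spelling is corrected throughout this PR
    extensions = {"abc"}

    def load(self, context, name, namespace, options):
        # resolve the path from the load context instead of ``self.fname``,
        # normalizing separators so it also works on Windows
        filepath = self.filepath_from_context(context).replace("\\", "/")
        self.log.debug("Loading: {}".format(filepath))
        return nuke.createNode(
            "ReadGeo2", "file {}".format(filepath), inpanel=False)
```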
--- a/openpype/hosts/nuke/plugins/load/load_effects.py +++ b/openpype/hosts/nuke/plugins/load/load_effects.py @@ -8,8 +8,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import ( @@ -24,7 +24,7 @@ class LoadEffects(load.LoaderPlugin): families = ["effect"] representations = ["*"] - extension = {"json"} + extensions = {"json"} label = "Load Effects - nodes" order = 0 @@ -72,7 +72,7 @@ class LoadEffects(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # getting data from json file with unicode conversion with open(file, "r") as f: @@ -88,10 +88,9 @@ class LoadEffects(load.LoaderPlugin): GN = nuke.createNode( "Group", - "name {}_1".format(object_name)) - - # hide property panel - GN.hideControlPanel() + "name {}_1".format(object_name), + inpanel=False + ) # adding content to the group node with GN: @@ -156,7 +155,7 @@ class LoadEffects(load.LoaderPlugin): """ # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py index bfe32c1ed9..bdf3cd6965 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py @@ -8,8 +8,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import lib @@ -25,7 +25,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): families = ["effect"] representations = ["*"] - extension = {"json"} + extensions = {"json"} label = "Load Effects - Input Process" order = 0 @@ -73,7 +73,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # getting data from json file with unicode conversion with open(file, "r") as f: @@ -89,10 +89,9 @@ class LoadEffectsInputProcess(load.LoaderPlugin): GN = nuke.createNode( "Group", - "name {}_1".format(object_name)) - - # hide property panel - GN.hideControlPanel() + "name {}_1".format(object_name), + inpanel=False + ) # adding content to the group node with GN: @@ -161,7 +160,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo.py b/openpype/hosts/nuke/plugins/load/load_gizmo.py index 2aa7c49723..ede05c422b 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -26,7 +26,7 @@ class LoadGizmo(load.LoaderPlugin): families = ["gizmo"] 
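The other recurring migration, visible in load_effects.py above and repeated in each loader, swaps `legacy_io.active_project()` for the explicit `get_current_project_name()` helper when fetching version documents. A sketch of the shared boilerplate, using only calls these plugins already import:

```python
from openpype.client import (
    get_version_by_id,
    get_last_version_by_subset_id,
)
from openpype.pipeline import get_current_project_name


def get_version_docs(representation):
    """Fetch a representation's version and its subset's latest version.

    ``representation`` is the document handed to a loader's ``update()``.
    """
    project_name = get_current_project_name()
    version_doc = get_version_by_id(project_name, representation["parent"])
    last_version_doc = get_last_version_by_subset_id(
        project_name, version_doc["parent"], fields=["_id"]
    )
    return version_doc, last_version_doc
```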
representations = ["*"] - extension = {"gizmo"} + extensions = {"gizmo"} label = "Load Gizmo" order = 0 @@ -73,7 +73,7 @@ class LoadGizmo(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -106,7 +106,7 @@ class LoadGizmo(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py index 2514a28299..d567aaf7b0 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py @@ -6,8 +6,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -28,7 +28,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): families = ["gizmo"] representations = ["*"] - extension = {"gizmo"} + extensions = {"gizmo"} label = "Load Gizmo - Input Process" order = 0 @@ -75,7 +75,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -113,7 +113,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_image.py b/openpype/hosts/nuke/plugins/load/load_image.py index f82ee4db88..0dd3a940db 100644 --- a/openpype/hosts/nuke/plugins/load/load_image.py +++ b/openpype/hosts/nuke/plugins/load/load_image.py @@ -7,8 +7,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -86,7 +86,7 @@ class LoadImage(load.LoaderPlugin): if namespace is None: namespace = context['asset']['name'] - file = self.fname + file = self.filepath_from_context(context) if not file: repr_id = context["representation"]["_id"] @@ -96,7 +96,8 @@ class LoadImage(load.LoaderPlugin): file = file.replace("\\", "/") - repr_cont = context["representation"]["context"] + representation = context["representation"] + repr_cont = representation["context"] frame = repr_cont.get("frame") if frame: padding = len(frame) @@ -104,25 +105,15 @@ class LoadImage(load.LoaderPlugin): frame, format(frame_number, "0{}".format(padding))) - name_data = { - "asset": repr_cont["asset"], - "subset": repr_cont["subset"], - "representation": context["representation"]["name"], - "ext": repr_cont["representation"], - "id": context["representation"]["_id"], - "class_name": self.__class__.__name__ - } - - read_name = self.node_name_template.format(**name_data) + read_name = self._get_node_name(representation) # Create the Loader with the filename path set with viewer_update_and_undo_stop(): r = 
nuke.createNode( "Read", - "name {}".format(read_name)) - - # hide property panel - r.hideControlPanel() + "name {}".format(read_name), + inpanel=False + ) r["file"].setValue(file) @@ -202,7 +193,7 @@ class LoadImage(load.LoaderPlugin): format(frame_number, "0{}".format(padding))) # Get start frame from version data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] @@ -213,6 +204,8 @@ class LoadImage(load.LoaderPlugin): last = first = int(frame_number) # Set the global in to the start frame of the sequence + read_name = self._get_node_name(representation) + node["name"].setValue(read_name) node["file"].setValue(file) node["origfirst"].setValue(first) node["first"].setValue(first) @@ -251,3 +244,17 @@ class LoadImage(load.LoaderPlugin): with viewer_update_and_undo_stop(): nuke.delete(node) + + def _get_node_name(self, representation): + + repre_cont = representation["context"] + name_data = { + "asset": repre_cont["asset"], + "subset": repre_cont["subset"], + "representation": representation["name"], + "ext": repre_cont["representation"], + "id": representation["_id"], + "class_name": self.__class__.__name__ + } + + return self.node_name_template.format(**name_data) diff --git a/openpype/hosts/nuke/plugins/load/load_matchmove.py b/openpype/hosts/nuke/plugins/load/load_matchmove.py index a7d124d472..14ddf20dc3 100644 --- a/openpype/hosts/nuke/plugins/load/load_matchmove.py +++ b/openpype/hosts/nuke/plugins/load/load_matchmove.py @@ -9,7 +9,7 @@ class MatchmoveLoader(load.LoaderPlugin): families = ["matchmove"] representations = ["*"] - extension = {"py"} + extensions = {"py"} defaults = ["Camera", "Object"] @@ -18,8 +18,9 @@ class MatchmoveLoader(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - if self.fname.lower().endswith(".py"): - exec(open(self.fname).read()) + path = self.filepath_from_context(context) + if path.lower().endswith(".py"): + exec(open(path).read()) else: msg = "Unsupported script type" diff --git a/openpype/hosts/nuke/plugins/load/load_model.py b/openpype/hosts/nuke/plugins/load/load_model.py index f968da8475..b9b8a0f4c0 100644 --- a/openpype/hosts/nuke/plugins/load/load_model.py +++ b/openpype/hosts/nuke/plugins/load/load_model.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import maintained_selection @@ -24,7 +24,7 @@ class AlembicModelLoader(load.LoaderPlugin): families = ["model", "pointcache", "animation"] representations = ["*"] - extension = {"abc"} + extensions = {"abc"} label = "Load Alembic" icon = "cube" @@ -55,7 +55,7 @@ class AlembicModelLoader(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") with maintained_selection(): model_node = nuke.createNode( @@ -65,9 +65,6 @@ class AlembicModelLoader(load.LoaderPlugin): inpanel=False ) - # hide property panel - model_node.hideControlPanel() - model_node.forceValidate() # Ensure all items are imported and selected. 
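All of the `hideControlPanel()` deletions in these loaders rely on the same substitution: `nuke.createNode()` accepts an `inpanel` argument, so the property panel can be suppressed at creation time rather than hidden after it has already popped open. For example:

```python
import nuke

# Before: the panel opened on creation and was hidden afterwards.
#     read_node = nuke.createNode("Read", "name my_read")
#     read_node.hideControlPanel()
# After: the panel never opens in the first place.
read_node = nuke.createNode("Read", "name my_read", inpanel=False)
```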
@@ -115,7 +112,7 @@ class AlembicModelLoader(load.LoaderPlugin): None """ # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) object_name = container['objectName'] # get corresponding node @@ -190,7 +187,7 @@ class AlembicModelLoader(load.LoaderPlugin): def node_version_color(self, version, node): """ Coloring a node by correct color by actual version""" - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version["parent"], fields=["_id"] ) diff --git a/openpype/hosts/nuke/plugins/load/load_script_precomp.py b/openpype/hosts/nuke/plugins/load/load_script_precomp.py index 53e9a76003..d5f9d24765 100644 --- a/openpype/hosts/nuke/plugins/load/load_script_precomp.py +++ b/openpype/hosts/nuke/plugins/load/load_script_precomp.py @@ -5,7 +5,7 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, load, get_representation_path, ) @@ -22,7 +22,7 @@ class LinkAsGroup(load.LoaderPlugin): families = ["workfile", "nukenodes"] representations = ["*"] - extension = {"nk"} + extensions = {"nk"} label = "Load Precomp" order = 0 @@ -43,8 +43,8 @@ class LinkAsGroup(load.LoaderPlugin): if namespace is None: namespace = context['asset']['name'] - file = self.fname.replace("\\", "/") - self.log.info("file: {}\n".format(self.fname)) + file = self.filepath_from_context(context).replace("\\", "/") + self.log.info("file: {}\n".format(file)) precomp_name = context["representation"]["context"]["subset"] @@ -70,10 +70,9 @@ class LinkAsGroup(load.LoaderPlugin): # P = nuke.nodes.LiveGroup("file {}".format(file)) P = nuke.createNode( "Precomp", - "file {}".format(file)) - - # hide property panel - P.hideControlPanel() + "file {}".format(file), + inpanel=False + ) # Set colorspace defined in version data colorspace = context["version"]["data"].get("colorspace", None) @@ -124,7 +123,7 @@ class LinkAsGroup(load.LoaderPlugin): root = get_representation_path(representation).replace("\\", "/") # Get start frame from version data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] diff --git a/openpype/hosts/nuke/plugins/publish/collect_backdrop.py b/openpype/hosts/nuke/plugins/publish/collect_backdrop.py index 7d51af7e9e..d04c1204e3 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/collect_backdrop.py @@ -57,4 +57,4 @@ class CollectBackdrops(pyblish.api.InstancePlugin): if version: instance.data['version'] = version - self.log.info("Backdrop instance collected: `{}`".format(instance)) + self.log.debug("Backdrop instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_context_data.py b/openpype/hosts/nuke/plugins/publish/collect_context_data.py index f1b4965205..b85e924f55 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_context_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_context_data.py @@ -64,4 +64,4 @@ class CollectContextData(pyblish.api.ContextPlugin): context.data["scriptData"] = script_data context.data.update(script_data) - self.log.info('Context from Nuke script collected') + 
self.log.debug('Context from Nuke script collected') diff --git a/openpype/hosts/nuke/plugins/publish/collect_gizmo.py b/openpype/hosts/nuke/plugins/publish/collect_gizmo.py index e3c40a7a90..c410de7c32 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_gizmo.py +++ b/openpype/hosts/nuke/plugins/publish/collect_gizmo.py @@ -43,4 +43,4 @@ class CollectGizmo(pyblish.api.InstancePlugin): "frameStart": first_frame, "frameEnd": last_frame }) - self.log.info("Gizmo instance collected: `{}`".format(instance)) + self.log.debug("Gizmo instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_model.py b/openpype/hosts/nuke/plugins/publish/collect_model.py index 3fdf376d0c..a099f06be0 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_model.py +++ b/openpype/hosts/nuke/plugins/publish/collect_model.py @@ -43,4 +43,4 @@ class CollectModel(pyblish.api.InstancePlugin): "frameStart": first_frame, "frameEnd": last_frame }) - self.log.info("Model instance collected: `{}`".format(instance)) + self.log.debug("Model instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py similarity index 75% rename from openpype/hosts/nuke/plugins/publish/collect_instance_data.py rename to openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py index 3908aef4bc..449a1cc935 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py @@ -3,10 +3,12 @@ import pyblish.api class CollectInstanceData(pyblish.api.InstancePlugin): - """Collect all nodes with Avalon knob.""" + """Collect Nuke instance data + + """ order = pyblish.api.CollectorOrder - 0.49 - label = "Collect Instance Data" + label = "Collect Nuke Instance Data" hosts = ["nuke", "nukeassist"] # presets @@ -40,5 +42,14 @@ class CollectInstanceData(pyblish.api.InstancePlugin): "pixelAspect": pixel_aspect }) + + # add creator attributes to instance + creator_attributes = instance.data["creator_attributes"] + instance.data.update(creator_attributes) + + # add review family if review activated on instance + if instance.data.get("review"): + instance.data["families"].append("review") + self.log.debug("Collected instance: {}".format( instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_reads.py b/openpype/hosts/nuke/plugins/publish/collect_reads.py index 831ae29a27..38938a3dda 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_reads.py +++ b/openpype/hosts/nuke/plugins/publish/collect_reads.py @@ -2,8 +2,6 @@ import os import re import nuke import pyblish.api -from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io class CollectNukeReads(pyblish.api.InstancePlugin): @@ -15,16 +13,9 @@ class CollectNukeReads(pyblish.api.InstancePlugin): families = ["source"] def process(self, instance): - node = instance.data["transientData"]["node"] - - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] - asset_doc = get_asset_by_name(project_name, asset_name) - - self.log.debug("asset_doc: {}".format(asset_doc["data"])) - self.log.debug("checking instance: {}".format(instance)) + node = instance.data["transientData"]["node"] if node.Class() != "Read": return diff --git a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py index 5701087697..3baa0cd9b5 100644 
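The renamed Collect Nuke Instance Data plugin above now owns two pieces of bootstrapping that CollectNukeWrites previously did inline (see its diff further below): promoting creator attributes to top-level instance data and appending the `review` family. Reduced to the added logic:

```python
def bootstrap_instance(instance):
    """Sketch of the logic added to CollectInstanceData.process()."""
    # promote creator attributes to top-level instance data so later
    # plugins can read them without digging into creator_attributes
    creator_attributes = instance.data["creator_attributes"]
    instance.data.update(creator_attributes)

    # append the review family when review is enabled on the instance
    if instance.data.get("review"):
        instance.data["families"].append("review")
```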
--- a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py +++ b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py @@ -5,7 +5,7 @@ import nuke class CollectSlate(pyblish.api.InstancePlugin): """Check if SLATE node is in scene and connected to rendering tree""" - order = pyblish.api.CollectorOrder + 0.09 + order = pyblish.api.CollectorOrder + 0.002 label = "Collect Slate Node" hosts = ["nuke"] families = ["render"] @@ -13,10 +13,14 @@ class CollectSlate(pyblish.api.InstancePlugin): def process(self, instance): node = instance.data["transientData"]["node"] - slate = next((n for n in nuke.allNodes() - if "slate" in n.name().lower() - if not n["disable"].getValue()), - None) + slate = next( + ( + n_ for n_ in nuke.allNodes() + if "slate" in n_.name().lower() + if not n_["disable"].getValue() + ), + None + ) if slate: # check if slate node is connected to write node tree @@ -35,7 +39,7 @@ class CollectSlate(pyblish.api.InstancePlugin): instance.data["slateNode"] = slate_node instance.data["slate"] = True instance.data["families"].append("slate") - self.log.info( + self.log.debug( "Slate node is in node graph: `{}`".format(slate.name())) self.log.debug( "__ instance.data: `{}`".format(instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_workfile.py b/openpype/hosts/nuke/plugins/publish/collect_workfile.py index 852042e6e9..0f03572f8b 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_workfile.py +++ b/openpype/hosts/nuke/plugins/publish/collect_workfile.py @@ -37,4 +37,6 @@ class CollectWorkfile(pyblish.api.InstancePlugin): # adding basic script data instance.data.update(script_data) - self.log.info("Collect script version") + self.log.debug( + "Collected current script version: {}".format(current_file) + ) diff --git a/openpype/hosts/nuke/plugins/publish/collect_writes.py b/openpype/hosts/nuke/plugins/publish/collect_writes.py index 2d1caacdc3..6f9245f5b9 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/collect_writes.py @@ -1,5 +1,4 @@ import os -from pprint import pformat import nuke import pyblish.api from openpype.hosts.nuke import api as napi @@ -15,30 +14,16 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, hosts = ["nuke", "nukeassist"] families = ["render", "prerender", "image"] + # cache + _write_nodes = {} + _frame_ranges = {} + def process(self, instance): - self.log.debug(pformat(instance.data)) - creator_attributes = instance.data["creator_attributes"] - instance.data.update(creator_attributes) group_node = instance.data["transientData"]["node"] render_target = instance.data["render_target"] - family = instance.data["family"] - families = instance.data["families"] - # add targeted family to families - instance.data["families"].append( - "{}.{}".format(family, render_target) - ) - if instance.data.get("review"): - instance.data["families"].append("review") - - child_nodes = napi.get_instance_group_node_childs(instance) - instance.data["transientData"]["childNodes"] = child_nodes - - write_node = None - for x in child_nodes: - if x.Class() == "Write": - write_node = x + write_node = self._write_node_helper(instance) if write_node is None: self.log.warning( @@ -48,113 +33,134 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, ) return - instance.data["writeNode"] = write_node - self.log.debug("checking instance: {}".format(instance)) + # get colorspace and add to version data + colorspace = napi.get_colorspace_from_node(write_node) - # Determine defined file type - ext = 
write_node["file_type"].value() + if render_target == "frames": + self._set_existing_files_data(instance, colorspace) - # Get frame range - handle_start = instance.context.data["handleStart"] - handle_end = instance.context.data["handleEnd"] - first_frame = int(nuke.root()["first_frame"].getValue()) - last_frame = int(nuke.root()["last_frame"].getValue()) - frame_length = int(last_frame - first_frame + 1) + elif render_target == "frames_farm": + collected_frames = self._set_existing_files_data( + instance, colorspace) - if write_node["use_limit"].getValue(): - first_frame = int(write_node["first"].getValue()) - last_frame = int(write_node["last"].getValue()) + self._set_expected_files(instance, collected_frames) + + self._add_farm_instance_data(instance) + + elif render_target == "farm": + self._add_farm_instance_data(instance) + + # set additional instance data + self._set_additional_instance_data(instance, render_target, colorspace) + + def _set_existing_files_data(self, instance, colorspace): + """Set existing files data to instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + colorspace (str): colorspace + + Returns: + list: collected frames + """ + collected_frames = self._get_collected_frames(instance) + + representation = self._get_existing_frames_representation( + instance, collected_frames + ) + + # inject colorspace data + self.set_representation_colorspace( + representation, instance.context, + colorspace=colorspace + ) + + instance.data["representations"].append(representation) + + return collected_frames + + def _set_expected_files(self, instance, collected_frames): + """Set expected files to instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + """ + write_node = self._write_node_helper(instance) write_file_path = nuke.filename(write_node) output_dir = os.path.dirname(write_file_path) - # get colorspace and add to version data - colorspace = napi.get_colorspace_from_node(write_node) + instance.data["expectedFiles"] = [ + os.path.join(output_dir, source_file) + for source_file in collected_frames + ] - self.log.debug('output dir: {}'.format(output_dir)) + def _get_frame_range_data(self, instance): + """Get frame range data from instance. 
-        if render_target == "frames":
-            representation = {
-                'name': ext,
-                'ext': ext,
-                "stagingDir": output_dir,
-                "tags": []
-            }
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
-            # get file path knob
-            node_file_knob = write_node["file"]
-            # list file paths based on input frames
-            expected_paths = list(sorted({
-                node_file_knob.evaluate(frame)
-                for frame in range(first_frame, last_frame + 1)
-            }))
+        Returns:
+            tuple: first_frame, last_frame
+        """
-            # convert only to base names
-            expected_filenames = [
-                os.path.basename(filepath)
-                for filepath in expected_paths
-            ]
+        instance_name = instance.data["name"]
-            # make sure files are existing at folder
-            collected_frames = [
-                filename
-                for filename in os.listdir(output_dir)
-                if filename in expected_filenames
-            ]
+        if self._frame_ranges.get(instance_name):
+            # return cached frame range
+            return self._frame_ranges[instance_name]
-            if collected_frames:
-                collected_frames_len = len(collected_frames)
-                frame_start_str = "%0{}d".format(
-                    len(str(last_frame))) % first_frame
-                representation['frameStart'] = frame_start_str
+        write_node = self._write_node_helper(instance)
-                # in case slate is expected and not yet rendered
-                self.log.debug("_ frame_length: {}".format(frame_length))
-                self.log.debug("_ collected_frames_len: {}".format(
-                    collected_frames_len))
+        # Get frame range from workfile
+        first_frame = int(nuke.root()["first_frame"].getValue())
+        last_frame = int(nuke.root()["last_frame"].getValue())
-                # this will only run if slate frame is not already
-                # rendered from previews publishes
-                if (
-                    "slate" in families
-                    and frame_length == collected_frames_len
-                    and family == "render"
-                ):
-                    frame_slate_str = (
-                        "{{:0{}d}}".format(len(str(last_frame)))
-                    ).format(first_frame - 1)
+        # Get frame range from write node if activated
+        if write_node["use_limit"].getValue():
+            first_frame = int(write_node["first"].getValue())
+            last_frame = int(write_node["last"].getValue())
-                    slate_frame = collected_frames[0].replace(
-                        frame_start_str, frame_slate_str)
-                    collected_frames.insert(0, slate_frame)
+        # add to cache
+        self._frame_ranges[instance_name] = (first_frame, last_frame)
-                if collected_frames_len == 1:
-                    representation['files'] = collected_frames.pop()
-                else:
-                    representation['files'] = collected_frames
+        return first_frame, last_frame
-            # inject colorspace data
-            self.set_representation_colorspace(
-                representation, instance.context,
-                colorspace=colorspace
-            )
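Both `_frame_ranges` and `_write_nodes` are class-level dictionaries keyed by instance name, so each expensive lookup happens only once even though several of the helper methods below need the same answer during a single publish. A standalone sketch of the pattern; the class and the `compute` callable are hypothetical:

```python
class CachedCollector(object):
    """Hypothetical plugin sketch of the class-level cache pattern."""

    # shared across all calls of this plugin during a publish session
    _frame_ranges = {}

    def get_frame_range(self, instance_name, compute):
        cached = self._frame_ranges.get(instance_name)
        if cached is None:
            # ``compute`` runs only on the first request for this instance
            cached = compute()
            self._frame_ranges[instance_name] = cached
        return cached
```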
+    def _set_additional_instance_data(
+        self, instance, render_target, colorspace
+    ):
+        """Set additional instance data.
-            instance.data["representations"].append(representation)
-            self.log.info("Publishing rendered frames ...")
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+            render_target (str): render target
+            colorspace (str): colorspace
+        """
+        family = instance.data["family"]
-        elif render_target == "farm":
-            farm_keys = ["farm_chunk", "farm_priority", "farm_concurrency"]
-            for key in farm_keys:
-                # Skip if key is not in creator attributes
-                if key not in creator_attributes:
-                    continue
-                # Add farm attributes to instance
-                instance.data[key] = creator_attributes[key]
+        # add targeted family to families
+        instance.data["families"].append(
+            "{}.{}".format(family, render_target)
+        )
+        self.log.debug("Appending render target to families: {}.{}".format(
+            family, render_target)
+        )
-            # Farm rendering
-            instance.data["transfer"] = False
-            instance.data["farm"] = True
-            self.log.info("Farm rendering ON ...")
+        write_node = self._write_node_helper(instance)
+
+        # Determine defined file type
+        ext = write_node["file_type"].value()
+
+        # get frame range data
+        handle_start = instance.context.data["handleStart"]
+        handle_end = instance.context.data["handleEnd"]
+        first_frame, last_frame = self._get_frame_range_data(instance)
+
+        # get output paths
+        write_file_path = nuke.filename(write_node)
+        output_dir = os.path.dirname(write_file_path)
 
         # TODO: remove this when we have proper colorspace support
         version_data = {
@@ -188,9 +194,208 @@
             "frameEndHandle": last_frame,
         })
 
+        # TODO temporarily set stagingDir as persistent for backward
+        # compatibility. This is mainly focused on `renders` folders which
+        # were previously not cleaned up (and could be used in read notes)
+        # this logic should be removed and replaced with custom staging dir
+        instance.data["stagingDir_persistent"] = True
+
+    def _write_node_helper(self, instance):
+        """Helper function to get write node from instance.
+
+        Also sets instance transient data with child nodes.
+
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+
+        Returns:
+            nuke.Node: write node
+        """
+        instance_name = instance.data["name"]
+
+        if self._write_nodes.get(instance_name):
+            # return cached write node
+            return self._write_nodes[instance_name]
+
+        # get all child nodes from group node
+        child_nodes = napi.get_instance_group_node_childs(instance)
+
+        # set child nodes to instance transient data
+        instance.data["transientData"]["childNodes"] = child_nodes
+
+        write_node = None
+        for node_ in child_nodes:
+            if node_.Class() == "Write":
+                write_node = node_
+
+        if write_node:
+            # for slate frame extraction
+            instance.data["transientData"]["writeNode"] = write_node
+            # add to cache
+            self._write_nodes[instance_name] = write_node
+
+        return self._write_nodes[instance_name]
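With the write node cached, the `frames_farm` target combines both worlds: the frames already on disk become the representation, and `_set_expected_files()` (defined earlier in this diff) republishes them as absolute `expectedFiles` paths for the farm to verify. A worked sketch with made-up values:

```python
import os

# hypothetical values standing in for nuke.filename(write_node) results
output_dir = "/proj/sh010/work/renders"
collected_frames = ["sh010.1001.exr", "sh010.1002.exr"]

# same shape as CollectNukeWrites._set_expected_files()
expected_files = [
    os.path.join(output_dir, source_file)
    for source_file in collected_frames
]
# -> ["/proj/sh010/work/renders/sh010.1001.exr", ...]
```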
+
+    def _get_existing_frames_representation(
+        self,
+        instance,
+        collected_frames
+    ):
+        """Get existing frames representation.
+
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+            collected_frames (list): collected frames
+
+        Returns:
+            dict: representation
+        """
+
+        first_frame, last_frame = self._get_frame_range_data(instance)
+
+        write_node = self._write_node_helper(instance)
+
+        write_file_path = nuke.filename(write_node)
+        output_dir = os.path.dirname(write_file_path)
+
+        # Determine defined file type
+        ext = write_node["file_type"].value()
+
+        representation = {
+            "name": ext,
+            "ext": ext,
+            "stagingDir": output_dir,
+            "tags": []
+        }
+
+        frame_start_str = self._get_frame_start_str(first_frame, last_frame)
+
+        representation['frameStart'] = frame_start_str
+
+        # set slate frame
+        collected_frames = self._add_slate_frame_to_collected_frames(
+            instance,
+            collected_frames,
+            first_frame,
+            last_frame
+        )
+
+        if len(collected_frames) == 1:
+            representation['files'] = collected_frames.pop()
+        else:
+            representation['files'] = collected_frames
+
+        return representation
+
+    def _get_frame_start_str(self, first_frame, last_frame):
+        """Get frame start string.
+
+        Args:
+            first_frame (int): first frame
+            last_frame (int): last frame
+
+        Returns:
+            str: frame start string
+        """
+        # convert first frame to string with padding
+        return (
+            "{{:0{}d}}".format(len(str(last_frame)))
+        ).format(first_frame)
+
+    def _add_slate_frame_to_collected_frames(
+        self,
+        instance,
+        collected_frames,
+        first_frame,
+        last_frame
+    ):
+        """Add slate frame to collected frames.
+
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+            collected_frames (list): collected frames
+            first_frame (int): first frame
+            last_frame (int): last frame
+
+        Returns:
+            list: collected frames
+        """
+        frame_start_str = self._get_frame_start_str(first_frame, last_frame)
+        frame_length = int(last_frame - first_frame + 1)
+
+        # this will only run if slate frame is not already
+        # rendered from previous publishes
+        if (
+            "slate" in instance.data["families"]
+            and frame_length == len(collected_frames)
+        ):
+            frame_slate_str = self._get_frame_start_str(
+                first_frame - 1,
+                last_frame
+            )
+
+            slate_frame = collected_frames[0].replace(
+                frame_start_str, frame_slate_str)
+            collected_frames.insert(0, slate_frame)
+
+        return collected_frames
+
+    def _add_farm_instance_data(self, instance):
+        """Add farm publishing related instance data.
+
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+        """
+
+        # make sure rendered sequence on farm will
         # be used for extract review
         if not instance.data.get("review"):
             instance.data["useSequenceForReview"] = False
 
-        self.log.debug("instance.data: {}".format(pformat(instance.data)))
+        # Farm rendering
+        instance.data.update({
+            "transfer": False,
+            "farm": True  # to skip integrate
+        })
+        self.log.info("Farm rendering ON ...")
+
+    def _get_collected_frames(self, instance):
+        """Get collected frames.
+ + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + list: collected frames + """ + + first_frame, last_frame = self._get_frame_range_data(instance) + + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + # get file path knob + node_file_knob = write_node["file"] + # list file paths based on input frames + expected_paths = list(sorted({ + node_file_knob.evaluate(frame) + for frame in range(first_frame, last_frame + 1) + })) + + # convert only to base names + expected_filenames = { + os.path.basename(filepath) + for filepath in expected_paths + } + + # make sure files are existing at folder + collected_frames = [ + filename + for filename in os.listdir(output_dir) + if filename in expected_filenames + ] + + return collected_frames diff --git a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py index 5166fa4b2c..2a6a5dee2a 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py @@ -56,8 +56,6 @@ class ExtractBackdropNode(publish.Extractor): # connect output node for n, output in connections_out.items(): opn = nuke.createNode("Output") - self.log.info(n.name()) - self.log.info(output.name()) output.setInput( next((i for i, d in enumerate(output.dependencies()) if d.name() in n.name()), 0), opn) @@ -102,5 +100,5 @@ class ExtractBackdropNode(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '{}' to: {}".format( + self.log.debug("Extracted instance '{}' to: {}".format( instance.name, path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_camera.py b/openpype/hosts/nuke/plugins/publish/extract_camera.py index 4286f71e83..3ec85c1f11 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_camera.py +++ b/openpype/hosts/nuke/plugins/publish/extract_camera.py @@ -11,9 +11,9 @@ from openpype.hosts.nuke.api.lib import maintained_selection class ExtractCamera(publish.Extractor): - """ 3D camera exctractor + """ 3D camera extractor """ - label = 'Exctract Camera' + label = 'Extract Camera' order = pyblish.api.ExtractorOrder families = ["camera"] hosts = ["nuke"] @@ -36,11 +36,8 @@ class ExtractCamera(publish.Extractor): step = 1 output_range = str(nuke.FrameRange(first_frame, last_frame, step)) - self.log.info("instance.data: `{}`".format( - pformat(instance.data))) - rm_nodes = [] - self.log.info("Crating additional nodes") + self.log.debug("Creating additional nodes for 3D Camera Extractor") subset = instance.data["subset"] staging_dir = self.staging_dir(instance) @@ -84,8 +81,6 @@ class ExtractCamera(publish.Extractor): for n in rm_nodes: nuke.delete(n) - self.log.info(file_path) - # create representation data if "representations" not in instance.data: instance.data["representations"] = [] @@ -112,7 +107,7 @@ class ExtractCamera(publish.Extractor): "frameEndHandle": last_frame, }) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, file_path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py index b0b1a9f7b7..ecec0d6f80 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py +++ b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py @@ -85,8 +85,5 @@ class ExtractGizmo(publish.Extractor): } 
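The padding helper used by the CollectNukeWrites representation and slate logic above is easiest to check with concrete numbers: the width is derived from the last frame, so the first frame and the slate frame one before it format to the same length. A small worked example with made-up frame values:

```python
def get_frame_start_str(first_frame, last_frame):
    # same formula as CollectNukeWrites._get_frame_start_str()
    return (
        "{{:0{}d}}".format(len(str(last_frame)))
    ).format(first_frame)

print(get_frame_start_str(997, 1020))  # "0997"
# the slate frame sits one frame before the sequence:
print(get_frame_start_str(996, 1020))  # "0996"
```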
instance.data["representations"].append(representation) - self.log.info("Extracted instance '{}' to: {}".format( + self.log.debug("Extracted instance '{}' to: {}".format( instance.name, path)) - - self.log.info("Data {}".format( - instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_model.py b/openpype/hosts/nuke/plugins/publish/extract_model.py index 814d404137..a8b37fb173 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_model.py +++ b/openpype/hosts/nuke/plugins/publish/extract_model.py @@ -11,9 +11,9 @@ from openpype.hosts.nuke.api.lib import ( class ExtractModel(publish.Extractor): - """ 3D model exctractor + """ 3D model extractor """ - label = 'Exctract Model' + label = 'Extract Model' order = pyblish.api.ExtractorOrder families = ["model"] hosts = ["nuke"] @@ -33,13 +33,13 @@ class ExtractModel(publish.Extractor): first_frame = int(nuke.root()["first_frame"].getValue()) last_frame = int(nuke.root()["last_frame"].getValue()) - self.log.info("instance.data: `{}`".format( + self.log.debug("instance.data: `{}`".format( pformat(instance.data))) rm_nodes = [] model_node = instance.data["transientData"]["node"] - self.log.info("Crating additional nodes") + self.log.debug("Creating additional nodes for Extract Model") subset = instance.data["subset"] staging_dir = self.staging_dir(instance) @@ -76,7 +76,7 @@ class ExtractModel(publish.Extractor): for n in rm_nodes: nuke.delete(n) - self.log.info(file_path) + self.log.debug("Filepath: {}".format(file_path)) # create representation data if "representations" not in instance.data: @@ -104,5 +104,5 @@ class ExtractModel(publish.Extractor): "frameEndHandle": last_frame, }) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, file_path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py b/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py index e66cfd9018..3fe1443bb3 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py +++ b/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py @@ -27,7 +27,7 @@ class CreateOutputNode(pyblish.api.ContextPlugin): if active_node: active_node = active_node.pop() - self.log.info(active_node) + self.log.debug("Active node: {}".format(active_node)) active_node['selected'].setValue(True) # select only instance render node diff --git a/openpype/hosts/nuke/plugins/publish/extract_render_local.py b/openpype/hosts/nuke/plugins/publish/extract_render_local.py index e2cf2addc5..ff04367e20 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_render_local.py +++ b/openpype/hosts/nuke/plugins/publish/extract_render_local.py @@ -119,7 +119,7 @@ class NukeRenderLocal(publish.Extractor, instance.data["representations"].append(repre) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, out_dir )) @@ -143,7 +143,7 @@ class NukeRenderLocal(publish.Extractor, instance.data["families"] = families collections, remainder = clique.assemble(filenames) - self.log.info('collections: {}'.format(str(collections))) + self.log.debug('collections: {}'.format(str(collections))) if collections: collection = collections[0] diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py index e4b7b155cd..b007f90f6c 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py +++ 
b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py @@ -20,8 +20,7 @@ class ExtractReviewDataLut(publish.Extractor): hosts = ["nuke"] def process(self, instance): - families = instance.data["families"] - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" in instance.data: staging_dir = instance.data[ "representations"][0]["stagingDir"].replace("\\", "/") @@ -34,7 +33,7 @@ class ExtractReviewDataLut(publish.Extractor): staging_dir = os.path.normpath(os.path.dirname(render_path)) instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) # generate data diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py similarity index 79% rename from openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py rename to openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py index 956d1a54a3..3ee166eb56 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py @@ -8,15 +8,16 @@ from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api.lib import maintained_selection -class ExtractReviewDataMov(publish.Extractor): - """Extracts movie and thumbnail with baked in luts +class ExtractReviewIntermediates(publish.Extractor): + """Extracting intermediate videos or sequences with + thumbnail for transcoding. must be run after extract_render_local.py """ order = pyblish.api.ExtractorOrder + 0.01 - label = "Extract Review Data Mov" + label = "Extract Review Intermediates" families = ["review"] hosts = ["nuke"] @@ -25,6 +26,24 @@ class ExtractReviewDataMov(publish.Extractor): viewer_lut_raw = None outputs = {} + @classmethod + def apply_settings(cls, project_settings): + """Apply the settings from the deprecated + ExtractReviewDataMov plugin for backwards compatibility + """ + nuke_publish = project_settings["nuke"]["publish"] + deprecated_setting = nuke_publish["ExtractReviewDataMov"] + current_setting = nuke_publish.get("ExtractReviewIntermediates") + if deprecated_setting["enabled"]: + # Use deprecated settings if they are still enabled + cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"] + cls.outputs = deprecated_setting["outputs"] + elif current_setting is None: + pass + elif current_setting["enabled"]: + cls.viewer_lut_raw = current_setting["viewer_lut_raw"] + cls.outputs = current_setting["outputs"] + def process(self, instance): families = set(instance.data["families"]) @@ -33,7 +52,7 @@ class ExtractReviewDataMov(publish.Extractor): task_type = instance.context.data["taskType"] subset = instance.data["subset"] - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" not in instance.data: instance.data["representations"] = [] @@ -43,10 +62,10 @@ class ExtractReviewDataMov(publish.Extractor): instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) - self.log.info(self.outputs) + self.log.debug("Outputs: {}".format(self.outputs)) # generate data with maintained_selection(): @@ -85,9 +104,10 @@ class ExtractReviewDataMov(publish.Extractor): re.search(s, subset) for s in f_subsets): continue - self.log.info( + self.log.debug( "Baking output `{}` with settings: {}".format( - o_name, o_data)) + o_name, o_data) + ) # check if 
settings have more then one preset
             # so we dont need to add outputName to representation
@@ -136,10 +156,10 @@
                     instance.data["useSequenceForReview"] = False
             else:
                 instance.data["families"].remove("review")
-                self.log.info((
+                self.log.debug(
                     "Removing `review` from families. "
                     "Not available baking profile."
-                ))
+                )
 
                 self.log.debug(instance.data["families"])
                 self.log.debug(
diff --git a/openpype/hosts/nuke/plugins/publish/extract_script_save.py b/openpype/hosts/nuke/plugins/publish/extract_script_save.py
index 0c8e561fd7..e44e5686b6 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_script_save.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_script_save.py
@@ -3,13 +3,12 @@ import pyblish.api
 
 
 class ExtractScriptSave(pyblish.api.Extractor):
-    """
-    """
+    """Save current Nuke workfile script"""
     label = 'Script Save'
     order = pyblish.api.Extractor.order - 0.1
     hosts = ['nuke']
 
     def process(self, instance):
 
-        self.log.info('saving script')
+        self.log.debug('Saving current script')
         nuke.scriptSave()
diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
index 06c086b10d..7befb7b7f3 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
@@ -48,7 +48,7 @@ class ExtractSlateFrame(publish.Extractor):
 
         if instance.data.get("bakePresets"):
             for o_name, o_data in instance.data["bakePresets"].items():
-                self.log.info("_ o_name: {}, o_data: {}".format(
+                self.log.debug("_ o_name: {}, o_data: {}".format(
                     o_name, pformat(o_data)))
                 self.render_slate(
                     instance,
@@ -65,14 +65,14 @@ class ExtractSlateFrame(publish.Extractor):
 
     def _create_staging_dir(self, instance):
 
-        self.log.info("Creating staging dir...")
+        self.log.debug("Creating staging dir...")
 
         staging_dir = os.path.normpath(
             os.path.dirname(instance.data["path"]))
 
         instance.data["stagingDir"] = staging_dir
 
-        self.log.info(
+        self.log.debug(
             "StagingDir `{0}`...".format(instance.data["stagingDir"]))
 
     def _check_frames_exists(self, instance):
@@ -249,7 +249,7 @@ class ExtractSlateFrame(publish.Extractor):
 
         # Add file to representation files
         #   - get write node
-        write_node = instance.data["writeNode"]
+        write_node = instance.data["transientData"]["writeNode"]
         #   - evaluate filepaths for first frame and slate frame
         first_filename = os.path.basename(
             write_node["file"].evaluate(first_frame))
@@ -275,10 +275,10 @@ class ExtractSlateFrame(publish.Extractor):
                 break
 
         if not matching_repre:
-            self.log.info((
-                "Matching reresentaion was not found."
+            self.log.info(
+                "Matching representation was not found."
                 " Representation files were not filled with slate."
- )) + ) return # Add frame to matching representation files @@ -345,7 +345,7 @@ class ExtractSlateFrame(publish.Extractor): try: node[key].setValue(value) - self.log.info("Change key \"{}\" to value \"{}\"".format( + self.log.debug("Change key \"{}\" to value \"{}\"".format( key, value )) except NameError: diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index 21eefda249..de7567c1b1 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -8,6 +8,7 @@ from openpype.hosts.nuke import api as napi from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings +# Python 2/3 compatibility if sys.version_info[0] >= 3: unicode = str @@ -45,30 +46,30 @@ class ExtractThumbnail(publish.Extractor): for o_name, o_data in instance.data["bakePresets"].items(): self.render_thumbnail(instance, o_name, **o_data) else: - viewer_process_swithes = { + viewer_process_switches = { "bake_viewer_process": True, "bake_viewer_input_process": True } - self.render_thumbnail(instance, None, **viewer_process_swithes) + self.render_thumbnail( + instance, None, **viewer_process_switches) def render_thumbnail(self, instance, output_name=None, **kwargs): first_frame = instance.data["frameStartHandle"] last_frame = instance.data["frameEndHandle"] + colorspace = instance.data["colorspace"] # find frame range and define middle thumb frame mid_frame = int((last_frame - first_frame) / 2) # solve output name if any is set output_name = output_name or "" - if output_name: - output_name = "_" + output_name bake_viewer_process = kwargs["bake_viewer_process"] bake_viewer_input_process_node = kwargs[ "bake_viewer_input_process"] node = instance.data["transientData"]["node"] # group node - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" not in instance.data: instance.data["representations"] = [] @@ -78,7 +79,7 @@ class ExtractThumbnail(publish.Extractor): instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) temporary_nodes = [] @@ -90,8 +91,6 @@ class ExtractThumbnail(publish.Extractor): if collection: # get path - fname = os.path.basename(collection.format( - "{head}{padding}{tail}")) fhead = collection.format("{head}") thumb_fname = list(collection)[mid_frame] @@ -112,8 +111,8 @@ class ExtractThumbnail(publish.Extractor): if self.use_rendered and os.path.isfile(path_render): # check if file exist otherwise connect to write node rnode = nuke.createNode("Read") - rnode["file"].setValue(path_render) + rnode["colorspace"].setValue(colorspace) # turn it raw if none of baking is ON if all([ @@ -167,26 +166,42 @@ class ExtractThumbnail(publish.Extractor): previous_node = dag_node temporary_nodes.append(dag_node) + thumb_name = "thumbnail" + # only add output name and + # if there are more than one bake preset + if ( + output_name + and len(instance.data.get("bakePresets", {}).keys()) > 1 + ): + thumb_name = "{}_{}".format(output_name, thumb_name) + # create write node write_node = nuke.createNode("Write") - file = fhead[:-1] + output_name + ".jpg" - name = "thumbnail" - path = os.path.join(staging_dir, file).replace("\\", "/") - instance.data["thumbnail"] = path - write_node["file"].setValue(path) + file = fhead[:-1] + thumb_name + ".jpg" + thumb_path = os.path.join(staging_dir, file).replace("\\", "/") + + # add thumbnail to cleanup + 
instance.context.data["cleanupFullPaths"].append(thumb_path) + + # make sure only one thumbnail path is set + # and it is existing file + instance_thumb_path = instance.data.get("thumbnailPath") + if not instance_thumb_path or not os.path.isfile(instance_thumb_path): + instance.data["thumbnailPath"] = thumb_path + + write_node["file"].setValue(thumb_path) write_node["file_type"].setValue("jpg") write_node["raw"].setValue(1) write_node.setInput(0, previous_node) temporary_nodes.append(write_node) - tags = ["thumbnail", "publish_on_farm"] repre = { - 'name': name, + 'name': thumb_name, 'ext': "jpg", - "outputName": "thumb", + "outputName": thumb_name, 'files': file, "stagingDir": staging_dir, - "tags": tags + "tags": ["thumbnail", "publish_on_farm", "delete"] } instance.data["representations"].append(repre) diff --git a/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml b/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml new file mode 100644 index 0000000000..d9394ae510 --- /dev/null +++ b/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml @@ -0,0 +1,31 @@ + + + + Shot/Asset name + +## Publishing to a different asset context + +There are publish instances present which are publishing into a different asset than your current context. + +Usually this is not what you want but there can be cases where you might want to publish into another asset/shot or task. + +If that's the case you can disable the validation on the instance to ignore it. + +The wrong node's name is: \`{node_name}\` + +### Correct context keys and values: + +\`{correct_values}\` + +### Wrong keys and values: + +\`{wrong_values}\`. + + +## How to repair? + +1. Use \"Repair\" button. +2. Hit Reload button on the publisher. + + + diff --git a/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml b/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml deleted file mode 100644 index 0422917e9c..0000000000 --- a/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - Shot/Asset name - -## Invalid Shot/Asset name in subset - -Following Node with name `{node_name}`: -Is in context of `{correct_name}` but Node _asset_ knob is set as `{wrong_name}`. - -### How to repair? - -1. Either use Repair or Select button. -2. If you chose Select then rename asset knob to correct name. -3. Hit Reload button on the publisher. - - - \ No newline at end of file diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_context.py b/openpype/hosts/nuke/plugins/publish/validate_asset_context.py new file mode 100644 index 0000000000..731645a11c --- /dev/null +++ b/openpype/hosts/nuke/plugins/publish/validate_asset_context.py @@ -0,0 +1,112 @@ +# -*- coding: utf-8 -*- +"""Validate if instance asset is the same as context asset.""" +from __future__ import absolute_import + +import pyblish.api + +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishXmlValidationError, + OptionalPyblishPluginMixin +) +from openpype.hosts.nuke.api import SelectInstanceNodeAction + + +class ValidateCorrectAssetContext( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): + """Validator to check if instance asset context match context asset. + + When working in per-shot style you always publish data in context of + current asset (shot). This validator checks if this is so. It is optional + so it can be disabled when needed. + + Checking `asset` and `task` keys. 
+ """ + order = ValidateContentsOrder + label = "Validate asset context" + hosts = ["nuke"] + actions = [ + RepairAction, + SelectInstanceNodeAction + ] + optional = True + + @classmethod + def apply_settings(cls, project_settings): + """Apply deprecated settings from project settings. + """ + nuke_publish = project_settings["nuke"]["publish"] + if "ValidateCorrectAssetName" in nuke_publish: + settings = nuke_publish["ValidateCorrectAssetName"] + else: + settings = nuke_publish["ValidateCorrectAssetContext"] + + cls.enabled = settings["enabled"] + cls.optional = settings["optional"] + cls.active = settings["active"] + + def process(self, instance): + if not self.is_active(instance.data): + return + + invalid_keys = self.get_invalid(instance) + + if not invalid_keys: + return + + message_values = { + "node_name": instance.data["transientData"]["node"].name(), + "correct_values": ", ".join([ + "{} > {}".format(_key, instance.context.data[_key]) + for _key in invalid_keys + ]), + "wrong_values": ", ".join([ + "{} > {}".format(_key, instance.data.get(_key)) + for _key in invalid_keys + ]) + } + + msg = ( + "Instance `{node_name}` has wrong context keys:\n" + "Correct: `{correct_values}` | Wrong: `{wrong_values}`").format( + **message_values) + + self.log.debug(msg) + + raise PublishXmlValidationError( + self, msg, formatting_data=message_values + ) + + @classmethod + def get_invalid(cls, instance): + """Get invalid keys from instance data and context data.""" + + invalid_keys = [] + testing_keys = ["asset", "task"] + for _key in testing_keys: + if _key not in instance.data: + invalid_keys.append(_key) + continue + if instance.data[_key] != instance.context.data[_key]: + invalid_keys.append(_key) + + return invalid_keys + + @classmethod + def repair(cls, instance): + """Repair instance data with context data.""" + invalid_keys = cls.get_invalid(instance) + + create_context = instance.context.data["create_context"] + + instance_id = instance.data.get("instance_id") + created_instance = create_context.get_instance_by_id( + instance_id + ) + for _key in invalid_keys: + created_instance[_key] = instance.context.data[_key] + + create_context.save_changes() diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py deleted file mode 100644 index df05f76a5b..0000000000 --- a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py +++ /dev/null @@ -1,138 +0,0 @@ -# -*- coding: utf-8 -*- -"""Validate if instance asset is the same as context asset.""" -from __future__ import absolute_import - -import pyblish.api - -import openpype.hosts.nuke.api.lib as nlib - -from openpype.pipeline.publish import ( - ValidateContentsOrder, - PublishXmlValidationError, - OptionalPyblishPluginMixin -) - -class SelectInvalidInstances(pyblish.api.Action): - """Select invalid instances in Outliner.""" - - label = "Select" - icon = "briefcase" - on = "failed" - - def process(self, context, plugin): - """Process invalid validators and select invalid instances.""" - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - if instances: - self.deselect() - self.log.info( - "Selecting invalid nodes: %s" % ", ".join( - 
[str(x) for x in instances] - ) - ) - self.select(instances) - else: - self.log.info("No invalid nodes found.") - self.deselect() - - def select(self, instances): - for inst in instances: - if inst.data.get("transientData", {}).get("node"): - select_node = inst.data["transientData"]["node"] - select_node["selected"].setValue(True) - - def deselect(self): - nlib.reset_selection() - - -class RepairSelectInvalidInstances(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - self.log.debug(instances) - - context_asset = context.data["assetEntity"]["name"] - for instance in instances: - node = instance.data["transientData"]["node"] - node_data = nlib.get_node_data(node, nlib.INSTANCE_DATA_KNOB) - node_data["asset"] = context_asset - nlib.set_node_data(node, nlib.INSTANCE_DATA_KNOB, node_data) - - -class ValidateCorrectAssetName( - pyblish.api.InstancePlugin, - OptionalPyblishPluginMixin -): - """Validator to check if instance asset match context asset. - - When working in per-shot style you always publish data in context of - current asset (shot). This validator checks if this is so. It is optional - so it can be disabled when needed. - - Action on this validator will select invalid instances in Outliner. - """ - order = ValidateContentsOrder - label = "Validate correct asset name" - hosts = ["nuke"] - actions = [ - SelectInvalidInstances, - RepairSelectInvalidInstances - ] - optional = True - - def process(self, instance): - if not self.is_active(instance.data): - return - - asset = instance.data.get("asset") - context_asset = instance.context.data["assetEntity"]["name"] - node = instance.data["transientData"]["node"] - - msg = ( - "Instance `{}` has wrong shot/asset name:\n" - "Correct: `{}` | Wrong: `{}`").format( - instance.name, asset, context_asset) - - self.log.debug(msg) - - if asset != context_asset: - raise PublishXmlValidationError( - self, msg, formatting_data={ - "node_name": node.name(), - "wrong_name": asset, - "correct_name": context_asset - } - ) diff --git a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py index ad60089952..761b080caa 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py @@ -43,8 +43,8 @@ class SelectCenterInNodeGraph(pyblish.api.Action): all_xC.append(xC) all_yC.append(yC) - self.log.info("all_xC: `{}`".format(all_xC)) - self.log.info("all_yC: `{}`".format(all_yC)) + self.log.debug("all_xC: `{}`".format(all_xC)) + self.log.debug("all_yC: `{}`".format(all_yC)) # zoom to nodes in node graph nuke.zoom(2, [min(all_xC), min(all_yC)]) diff --git a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py index dbcd216a84..ff6d73c6ec 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py +++ b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py @@ -23,7 +23,7 @@ class ValidateOutputResolution( order = 
pyblish.api.ValidatorOrder optional = True families = ["render"] - label = "Write resolution" + label = "Validate Write resolution" hosts = ["nuke"] actions = [RepairAction] @@ -104,9 +104,9 @@ class ValidateOutputResolution( _rfn["resize"].setValue(0) _rfn["black_outside"].setValue(1) - cls.log.info("I am adding reformat node") + cls.log.info("Adding reformat node") if cls.resolution_msg == invalid: reformat = cls.get_reformat(instance) reformat["format"].setValue(nuke.root()["format"].value()) - cls.log.info("I am fixing reformat to root.format") + cls.log.info("Fixing reformat to root.format") diff --git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py index 45c20412c8..64bf69b69b 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py +++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py @@ -14,27 +14,26 @@ class RepairActionBase(pyblish.api.Action): # Get the errored instances return get_errored_instances_from_context(context, plugin=plugin) - def repair_knob(self, instances, state): + def repair_knob(self, context, instances, state): + create_context = context.data["create_context"] for instance in instances: - node = instance.data["transientData"]["node"] - files_remove = [os.path.join(instance.data["outputDir"], f) - for r in instance.data.get("representations", []) - for f in r.get("files", []) - ] - self.log.info("Files to be removed: {}".format(files_remove)) - for f in files_remove: - os.remove(f) - self.log.debug("removing file: {}".format(f)) - node["render"].setValue(state) + # Reset the render knob + instance_id = instance.data.get("instance_id") + created_instance = create_context.get_instance_by_id( + instance_id + ) + created_instance.creator_attributes["render_target"] = state self.log.info("Rendering toggled to `{}`".format(state)) + create_context.save_changes() + class RepairCollectionActionToLocal(RepairActionBase): label = "Repair - rerender with \"Local\"" def process(self, context, plugin): instances = self.get_instance(context, plugin) - self.repair_knob(instances, "Local") + self.repair_knob(context, instances, "local") class RepairCollectionActionToFarm(RepairActionBase): @@ -42,7 +41,7 @@ class RepairCollectionActionToFarm(RepairActionBase): def process(self, context, plugin): instances = self.get_instance(context, plugin) - self.repair_knob(instances, "On farm") + self.repair_knob(context, instances, "farm") class ValidateRenderedFrames(pyblish.api.InstancePlugin): @@ -77,8 +76,8 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin): return collections, remainder = clique.assemble(repre["files"]) - self.log.info("collections: {}".format(str(collections))) - self.log.info("remainder: {}".format(str(remainder))) + self.log.debug("collections: {}".format(str(collections))) + self.log.debug("remainder: {}".format(str(remainder))) collection = collections[0] @@ -104,15 +103,15 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin): coll_start = min(collection.indexes) coll_end = max(collection.indexes) - self.log.info("frame_length: {}".format(frame_length)) - self.log.info("collected_frames_len: {}".format( + self.log.debug("frame_length: {}".format(frame_length)) + self.log.debug("collected_frames_len: {}".format( collected_frames_len)) - self.log.info("f_start_h-f_end_h: {}-{}".format( + self.log.debug("f_start_h-f_end_h: {}-{}".format( f_start_h, f_end_h)) - self.log.info( + self.log.debug( "coll_start-coll_end: 
{}-{}".format(coll_start, coll_end)) - self.log.info( + self.log.debug( "len(collection.indexes): {}".format(collected_frames_len) ) diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py index aeecea655f..9c8bfae388 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py @@ -1,3 +1,5 @@ +from collections import defaultdict + import pyblish.api from openpype.pipeline.publish import get_errored_instances_from_context from openpype.hosts.nuke.api.lib import ( @@ -37,7 +39,7 @@ class RepairNukeWriteNodeAction(pyblish.api.Action): set_node_knobs_from_settings(write_node, correct_data["knobs"]) - self.log.info("Node attributes were fixed") + self.log.debug("Node attributes were fixed") class ValidateNukeWriteNode( @@ -80,18 +82,14 @@ class ValidateNukeWriteNode( correct_data = get_write_node_template_attr(write_group_node) check = [] - self.log.debug("__ write_node: {}".format( - write_node - )) - self.log.debug("__ correct_data: {}".format( - correct_data - )) + + # Collect key values of same type in a list. + values_by_name = defaultdict(list) + for knob_data in correct_data["knobs"]: + values_by_name[knob_data["name"]].append(knob_data["value"]) for knob_data in correct_data["knobs"]: knob_type = knob_data["type"] - self.log.debug("__ knob_type: {}".format( - knob_type - )) if ( knob_type == "__legacy__" @@ -105,35 +103,35 @@ class ValidateNukeWriteNode( ) key = knob_data["name"] - value = knob_data["value"] + values = values_by_name[key] node_value = write_node[key].value() # fix type differences - if type(node_value) in (int, float): - try: - if isinstance(value, list): - value = color_gui_to_int(value) - else: - value = float(value) - node_value = float(node_value) - except ValueError: - value = str(value) - else: - value = str(value) - node_value = str(node_value) + fixed_values = [] + for value in values: + if type(node_value) in (int, float): + try: + + if isinstance(value, list): + value = color_gui_to_int(value) + else: + value = float(value) + node_value = float(node_value) + except ValueError: + value = str(value) + else: + value = str(value) + node_value = str(node_value) + + fixed_values.append(value) - self.log.debug("__ key: {} | value: {}".format( - key, value - )) if ( - node_value != value + node_value not in fixed_values and key != "file" and key != "tile_color" ): check.append([key, value, write_node[key].value()]) - self.log.info(check) - if check: self._make_error(check) diff --git a/openpype/hosts/photoshop/api/README.md b/openpype/hosts/photoshop/api/README.md index 4a36746cb2..7bd2bcb1bf 100644 --- a/openpype/hosts/photoshop/api/README.md +++ b/openpype/hosts/photoshop/api/README.md @@ -210,8 +210,9 @@ class ImageLoader(load.LoaderPlugin): representations = ["*"] def load(self, context, name=None, namespace=None, data=None): + path = self.filepath_from_context(context) with photoshop.maintained_selection(): - layer = stub.import_smart_object(self.fname) + layer = stub.import_smart_object(path) self[:] = [layer] diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py index 73dc80260c..56ae2a4c25 100644 --- a/openpype/hosts/photoshop/api/pipeline.py +++ b/openpype/hosts/photoshop/api/pipeline.py @@ -6,11 +6,8 @@ import pyblish.api from openpype.lib import register_event_callback, Logger from openpype.pipeline import ( - legacy_io, register_loader_plugin_path, 
register_creator_plugin_path, - deregister_loader_plugin_path, - deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) @@ -23,6 +20,7 @@ from openpype.host import ( from openpype.pipeline.load import any_outdated_containers from openpype.hosts.photoshop import PHOTOSHOP_HOST_DIR +from openpype.tools.utils import get_openpype_qt_app from . import lib @@ -111,14 +109,6 @@ class PhotoshopHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): item["id"] = "publish_context" _get_stub().imprint(item["id"], item) - def get_context_title(self): - """Returns title for Creator window""" - - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - return "{}/{}/{}".format(project_name, asset_name, task_name) - def list_instances(self): """List all created instances to publish from current workfile. @@ -174,10 +164,7 @@ def check_inventory(): return # Warn about outdated containers. - _app = QtWidgets.QApplication.instance() - if not _app: - print("Starting new QApplication..") - _app = QtWidgets.QApplication([]) + _app = get_openpype_qt_app() message_box = QtWidgets.QMessageBox() message_box.setIcon(QtWidgets.QMessageBox.Warning) diff --git a/openpype/hosts/photoshop/lib.py b/openpype/hosts/photoshop/lib.py index ae7a33b7b6..9f603a70d2 100644 --- a/openpype/hosts/photoshop/lib.py +++ b/openpype/hosts/photoshop/lib.py @@ -1,5 +1,8 @@ +import re + import openpype.hosts.photoshop.api as api from openpype.client import get_asset_by_name +from openpype.lib import prepare_template_data from openpype.pipeline import ( AutoCreator, CreatedInstance @@ -78,3 +81,17 @@ class PSAutoCreator(AutoCreator): existing_instance["asset"] = asset_name existing_instance["task"] = task_name existing_instance["subset"] = subset_name + + +def clean_subset_name(subset_name): + """Clean all variants leftover {layer} from subset name.""" + dynamic_data = prepare_template_data({"layer": "{layer}"}) + for value in dynamic_data.values(): + if value in subset_name: + subset_name = (subset_name.replace(value, "") + .replace("__", "_") + .replace("..", ".")) + # clean trailing separator as Main_ + pattern = r'[\W_]+$' + replacement = '' + return re.sub(pattern, replacement, subset_name) diff --git a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py index 3bc61c8184..afde77fdb4 100644 --- a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py @@ -2,8 +2,9 @@ from openpype.pipeline import CreatedInstance from openpype.lib import BoolDef import openpype.hosts.photoshop.api as api -from openpype.hosts.photoshop.lib import PSAutoCreator +from openpype.hosts.photoshop.lib import PSAutoCreator, clean_subset_name from openpype.pipeline.create import get_subset_name +from openpype.lib import prepare_template_data from openpype.client import get_asset_by_name @@ -37,19 +38,14 @@ class AutoImageCreator(PSAutoCreator): asset_doc = get_asset_by_name(project_name, asset_name) if existing_instance is None: - subset_name = get_subset_name( - self.family, self.default_variant, task_name, asset_doc, + subset_name = self.get_subset_name( + self.default_variant, task_name, asset_doc, project_name, host_name ) - publishable_ids = [layer.id for layer in api.stub().get_layers() - if layer.visible] data = { "asset": asset_name, "task": task_name, - # ids are "virtual" layers, won't get grouped as 'members' 
do - # same difference in color coded layers in WP - "ids": publishable_ids } if not self.active_on_create: @@ -69,8 +65,8 @@ class AutoImageCreator(PSAutoCreator): existing_instance["asset"] != asset_name or existing_instance["task"] != task_name ): - subset_name = get_subset_name( - self.family, self.default_variant, task_name, asset_doc, + subset_name = self.get_subset_name( + self.default_variant, task_name, asset_doc, project_name, host_name ) @@ -98,7 +94,7 @@ class AutoImageCreator(PSAutoCreator): ) ] - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["photoshop"]["create"]["AutoImageCreator"] ) @@ -118,3 +114,19 @@ class AutoImageCreator(PSAutoCreator): Artist might disable this instance from publishing or from creating review for it though. """ + + def get_subset_name( + self, + variant, + task_name, + asset_doc, + project_name, + host_name=None, + instance=None + ): + dynamic_data = prepare_template_data({"layer": "{layer}"}) + subset_name = get_subset_name( + self.family, variant, task_name, asset_doc, + project_name, host_name, dynamic_data=dynamic_data + ) + return clean_subset_name(subset_name) diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py b/openpype/hosts/photoshop/plugins/create/create_image.py index f3165fca57..4f2e90886a 100644 --- a/openpype/hosts/photoshop/plugins/create/create_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_image.py @@ -10,6 +10,7 @@ from openpype.pipeline import ( from openpype.lib import prepare_template_data from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances +from openpype.hosts.photoshop.lib import clean_subset_name class ImageCreator(Creator): @@ -88,18 +89,24 @@ class ImageCreator(Creator): layer_fill = prepare_template_data({"layer": layer_name}) subset_name = subset_name.format(**layer_fill) + subset_name = clean_subset_name(subset_name) if group.long_name: for directory in group.long_name[::-1]: name = self._clean_highlights(stub, directory) layer_names_in_hierarchy.append(name) - data.update({"subset": subset_name}) - data.update({"members": [str(group.id)]}) - data.update({"layer_name": layer_name}) - data.update({"long_name": "_".join(layer_names_in_hierarchy)}) + data_update = { + "subset": subset_name, + "members": [str(group.id)], + "layer_name": layer_name, + "long_name": "_".join(layer_names_in_hierarchy) + } + data.update(data_update) - creator_attributes = {"mark_for_review": self.mark_for_review} + mark_for_review = (pre_create_data.get("mark_for_review") or + self.mark_for_review) + creator_attributes = {"mark_for_review": mark_for_review} data.update({"creator_attributes": creator_attributes}) if not self.active_on_create: @@ -124,8 +131,6 @@ class ImageCreator(Creator): if creator_id == self.identifier: instance_data = self._handle_legacy(instance_data) - layer = api.stub().get_layer(instance_data["members"][0]) - instance_data["layer"] = layer instance = CreatedInstance.from_existing( instance_data, self ) @@ -171,7 +176,7 @@ class ImageCreator(Creator): ) ] - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["photoshop"]["create"]["ImageCreator"] ) @@ -181,7 +186,6 @@ class ImageCreator(Creator): self.mark_for_review = plugin_settings["mark_for_review"] self.enabled = plugin_settings["enabled"] - def 
get_detail_description(self): return """Creator for Image instances diff --git a/openpype/hosts/photoshop/plugins/create/create_review.py b/openpype/hosts/photoshop/plugins/create/create_review.py index 064485d465..63751d94e4 100644 --- a/openpype/hosts/photoshop/plugins/create/create_review.py +++ b/openpype/hosts/photoshop/plugins/create/create_review.py @@ -18,7 +18,7 @@ class ReviewCreator(PSAutoCreator): it will get recreated in next publish either way). """ - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["photoshop"]["create"]["ReviewCreator"] ) diff --git a/openpype/hosts/photoshop/plugins/create/create_workfile.py b/openpype/hosts/photoshop/plugins/create/create_workfile.py index d498f0549c..1b255de3a3 100644 --- a/openpype/hosts/photoshop/plugins/create/create_workfile.py +++ b/openpype/hosts/photoshop/plugins/create/create_workfile.py @@ -19,7 +19,7 @@ class WorkfileCreator(PSAutoCreator): in next publish automatically). """ - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["photoshop"]["create"]["WorkfileCreator"] ) diff --git a/openpype/hosts/photoshop/plugins/load/load_image.py b/openpype/hosts/photoshop/plugins/load/load_image.py index 91a9787781..eb770bbd20 100644 --- a/openpype/hosts/photoshop/plugins/load/load_image.py +++ b/openpype/hosts/photoshop/plugins/load/load_image.py @@ -22,7 +22,8 @@ class ImageLoader(photoshop.PhotoshopLoader): name ) with photoshop.maintained_selection(): - layer = self.import_layer(self.fname, layer_name, stub) + path = self.filepath_from_context(context) + layer = self.import_layer(path, layer_name, stub) self[:] = [layer] namespace = namespace or layer_name diff --git a/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py b/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py index c25c5a8f2c..f9fceb80bb 100644 --- a/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py +++ b/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py @@ -29,11 +29,13 @@ class ImageFromSequenceLoader(photoshop.PhotoshopLoader): options = [] def load(self, context, name=None, namespace=None, data=None): + + path = self.filepath_from_context(context) if data.get("frame"): - self.fname = os.path.join( - os.path.dirname(self.fname), data["frame"] + path = os.path.join( + os.path.dirname(path), data["frame"] ) - if not os.path.exists(self.fname): + if not os.path.exists(path): return stub = self.get_stub() @@ -42,7 +44,7 @@ class ImageFromSequenceLoader(photoshop.PhotoshopLoader): ) with photoshop.maintained_selection(): - layer = stub.import_smart_object(self.fname, layer_name) + layer = stub.import_smart_object(path, layer_name) self[:] = [layer] namespace = namespace or layer_name diff --git a/openpype/hosts/photoshop/plugins/load/load_reference.py b/openpype/hosts/photoshop/plugins/load/load_reference.py index 1f32a5d23c..5772e243d5 100644 --- a/openpype/hosts/photoshop/plugins/load/load_reference.py +++ b/openpype/hosts/photoshop/plugins/load/load_reference.py @@ -23,7 +23,8 @@ class ReferenceLoader(photoshop.PhotoshopLoader): stub.get_layers(), context["asset"]["name"], name ) with photoshop.maintained_selection(): - layer = self.import_layer(self.fname, layer_name, stub) + path = self.filepath_from_context(context) + layer = self.import_layer(path, layer_name, stub) self[:] = [layer] namespace = namespace or layer_name 
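A note on the loader changes above: the Photoshop loaders stop reading the deprecated `self.fname` attribute and resolve the path from the load context instead. A minimal sketch of the pattern, assuming OpenPype's `LoaderPlugin` base class (the class name and body are illustrative, not an actual plugin from this changeset):

```python
from openpype.pipeline import load


class ExampleImageLoader(load.LoaderPlugin):
    """Hypothetical loader using filepath_from_context()."""

    families = ["image"]
    representations = ["*"]

    def load(self, context, name=None, namespace=None, data=None):
        # Resolve the representation path for this context instead of
        # relying on the deprecated `self.fname` attribute.
        path = self.filepath_from_context(context)
        self.log.info("Loading from resolved path: %s", path)
        # ...import `path` into the host here...
```

`filepath_from_context()` fills the representation path from the project anatomy, so the same loader works regardless of where the caller obtained its context.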
diff --git a/openpype/hosts/photoshop/plugins/publish/closePS.py b/openpype/hosts/photoshop/plugins/publish/closePS.py
index b4ded96001..b4c3a4c966 100644
--- a/openpype/hosts/photoshop/plugins/publish/closePS.py
+++ b/openpype/hosts/photoshop/plugins/publish/closePS.py
@@ -17,7 +17,7 @@ class ClosePS(pyblish.api.ContextPlugin):
 
     active = True
     hosts = ["photoshop"]
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     def process(self, context):
         self.log.info("ClosePS")
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
index ce408f8d01..77f1a3e91f 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
@@ -6,8 +6,6 @@ from openpype.pipeline.create import get_subset_name
 
 class CollectAutoImage(pyblish.api.ContextPlugin):
     """Creates auto image in non artist based publishes (Webpublisher).
-
-    'remotepublish' should be renamed to 'autopublish' or similar in the future
     """
 
     label = "Collect Auto Image"
@@ -15,10 +13,9 @@
     hosts = ["photoshop"]
     order = pyblish.api.CollectorOrder + 0.2
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     def process(self, context):
-        family = "image"
         for instance in context:
             creator_identifier = instance.data.get("creator_identifier")
             if creator_identifier and creator_identifier == "auto_image":
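The `targets` values above move headless publishes from the old "remotepublish" target to "automated" (while the webpublisher-only batch collector below switches to "webpublish"). In pyblish, targets act as an extra filter on top of hosts and families, roughly like this (plugin name illustrative):

```python
import pyblish.api
import pyblish.util


class CollectOnlyWhenAutomated(pyblish.api.ContextPlugin):
    """Illustrative collector gated by a publish target."""

    order = pyblish.api.CollectorOrder
    targets = ["automated"]

    def process(self, context):
        self.log.info("Running for an automated publish.")


pyblish.api.register_plugin(CollectOnlyWhenAutomated)

# The plugin runs only when a matching target is requested;
# with the default target it is skipped entirely.
pyblish.util.publish(targets=["automated"])
```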
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image_refresh.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image_refresh.py
new file mode 100644
index 0000000000..741fb0e9cd
--- /dev/null
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image_refresh.py
@@ -0,0 +1,22 @@
+import pyblish.api
+
+from openpype.hosts.photoshop import api as photoshop
+
+
+class CollectAutoImageRefresh(pyblish.api.ContextPlugin):
+    """Refreshes auto_image instance with currently visible layers."""
+
+    label = "Collect Auto Image Refresh"
+    hosts = ["photoshop"]
+    order = pyblish.api.CollectorOrder + 0.2
+
+    def process(self, context):
+        for instance in context:
+            creator_identifier = instance.data.get("creator_identifier")
+            if creator_identifier and creator_identifier == "auto_image":
+                self.log.debug("Auto image instance found, won't create new")
+                # refresh existing auto image instance with current visible
+                publishable_ids = [layer.id for layer in photoshop.stub().get_layers()  # noqa
+                                   if layer.visible]
+                instance.data["ids"] = publishable_ids
+                return
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py
index 7de4adcaf4..82ba0ac09c 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py
@@ -20,7 +20,7 @@ class CollectAutoReview(pyblish.api.ContextPlugin):
     label = "Collect Auto Review"
     hosts = ["photoshop"]
     order = pyblish.api.CollectorOrder + 0.2
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     publish = True
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py
index d10cf62c67..01dc50af40 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py
@@ -12,7 +12,7 @@ class CollectAutoWorkfile(pyblish.api.ContextPlugin):
 
     label = "Collect Workfile"
     hosts = ["photoshop"]
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     def process(self, context):
         family = "workfile"
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py
index a5fea7ac7d..b13ff5e476 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py
@@ -35,7 +35,7 @@ class CollectBatchData(pyblish.api.ContextPlugin):
     order = pyblish.api.CollectorOrder - 0.495
     label = "Collect batch data"
     hosts = ["photoshop"]
-    targets = ["remotepublish"]
+    targets = ["webpublish"]
 
     def process(self, context):
         self.log.info("CollectBatchData")
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py
index 90fca8398f..c16616bcb2 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py
@@ -34,7 +34,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
     label = "Instances"
     order = pyblish.api.CollectorOrder
     hosts = ["photoshop"]
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     # configurable by Settings
     color_code_mapping = []
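The refresh collector above recomputes which layers the `auto_image` instance covers at publish time rather than trusting the ids stored at creation time. Reduced to a helper, the idea looks like this; `stub` stands in for `photoshop.stub()` and is assumed to return layer objects with `id` and `visible` attributes:

```python
def collect_visible_layer_ids(stub):
    """Snapshot ids of the currently visible layers."""
    return [layer.id for layer in stub.get_layers() if layer.visible]


def refresh_auto_image(instance, stub):
    # Overwrite creation-time ids so the publish always reflects
    # the current state of the Photoshop document.
    instance.data["ids"] = collect_visible_layer_ids(stub)
```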
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_image.py b/openpype/hosts/photoshop/plugins/publish/collect_image.py
new file mode 100644
index 0000000000..64727cef33
--- /dev/null
+++ b/openpype/hosts/photoshop/plugins/publish/collect_image.py
@@ -0,0 +1,20 @@
+import pyblish.api
+
+from openpype.hosts.photoshop import api
+
+
+class CollectImage(pyblish.api.InstancePlugin):
+    """Collect layer metadata into an instance.
+
+    Used later in validation.
+    """
+    order = pyblish.api.CollectorOrder + 0.200
+    label = 'Collect Image'
+
+    hosts = ["photoshop"]
+    families = ["image"]
+
+    def process(self, instance):
+        if instance.data.get("members"):
+            layer = api.stub().get_layer(instance.data["members"][0])
+            instance.data["layer"] = layer
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py
index 2502689e4b..eec6f1fae4 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py
@@ -18,6 +18,7 @@ Provides:
 import pyblish.api
 
 from openpype.client import get_last_version_by_subset_name
+from openpype.pipeline.version_start import get_versioning_start
 
 
 class CollectPublishedVersion(pyblish.api.ContextPlugin):
@@ -26,7 +27,7 @@
     order = pyblish.api.CollectorOrder + 0.190
     label = "Collect published version"
     hosts = ["photoshop"]
-    targets = ["remotepublish"]
+    targets = ["automated"]
 
     def process(self, context):
         workfile_subset_name = None
@@ -47,9 +48,17 @@
         version_doc = get_last_version_by_subset_name(project_name,
                                                       workfile_subset_name,
                                                       asset_id)
-        version_int = 1
+
         if version_doc:
-            version_int += int(version_doc["name"])
+            version_int = int(version_doc["name"]) + 1
+        else:
+            version_int = get_versioning_start(
+                project_name,
+                "photoshop",
+                task_name=context.data["task"],
+                task_type=context.data["taskType"],
+                project_settings=context.data["project_settings"]
+            )
 
         self.log.debug(f"Setting {version_int} to context.")
         context.data["version"] = version_int
diff --git a/openpype/hosts/photoshop/plugins/publish/extract_image.py b/openpype/hosts/photoshop/plugins/publish/extract_image.py
index cdb28c742d..680f580cc0 100644
--- a/openpype/hosts/photoshop/plugins/publish/extract_image.py
+++ b/openpype/hosts/photoshop/plugins/publish/extract_image.py
@@ -45,9 +45,11 @@ class ExtractImage(pyblish.api.ContextPlugin):
         # Perform extraction
         files = {}
         ids = set()
-        layer = instance.data.get("layer")
-        if layer:
-            ids.add(layer.id)
+        # real layers and groups
+        members = instance.data("members")
+        if members:
+            ids.update(set([int(member) for member in members]))
+        # virtual groups collected by color coding or auto_image
         add_ids = instance.data.pop("ids", None)
         if add_ids:
             ids.update(set(add_ids))
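The `collect_published_version` hunk above replaces the hard-coded fallback to version 1 with the project's configured start version. Stripped of the database and settings queries, the numbering rule is just this (helper name hypothetical):

```python
def next_published_version(last_version_doc, versioning_start=1):
    """Continue after the last published version if one exists,
    otherwise start at the configured first version."""
    if last_version_doc:
        return int(last_version_doc["name"]) + 1
    return versioning_start


assert next_published_version({"name": 4}) == 5
assert next_published_version(None, versioning_start=10) == 10
```

In the plugin itself the fallback value comes from `get_versioning_start()`, which reads the per-project setting.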
diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py
index d5416a389d..d5dac417d7 100644
--- a/openpype/hosts/photoshop/plugins/publish/extract_review.py
+++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py
@@ -4,7 +4,7 @@ from PIL import Image
 
 from openpype.lib import (
     run_subprocess,
-    get_ffmpeg_tool_path,
+    get_ffmpeg_tool_args,
 )
 from openpype.pipeline import publish
 from openpype.hosts.photoshop import api as photoshop
@@ -56,6 +56,7 @@
         }
 
         if instance.data["family"] != "review":
+            self.log.debug("Existing extracted file from image family used.")
            # enable creation of review, without this jpg review would clash
            # with jpg of the image family
            output_name = repre_name
@@ -63,8 +64,15 @@
             repre_skeleton.update({"name": repre_name,
                                    "outputName": output_name})
 
-        if self.make_image_sequence and len(layers) > 1:
-            self.log.info("Extract layers to image sequence.")
+            img_file = self.output_seq_filename % 0
+            self._prepare_file_for_image_family(img_file, instance,
+                                                staging_dir)
+            repre_skeleton.update({
+                "files": img_file,
+            })
+            processed_img_names = [img_file]
+        elif self.make_image_sequence and len(layers) > 1:
+            self.log.debug("Extract layers to image sequence.")
 
             img_list = self._save_sequence_images(staging_dir, layers)
             repre_skeleton.update({
@@ -73,19 +81,19 @@
                 "fps": fps,
                 "files": img_list,
             })
-            instance.data["representations"].append(repre_skeleton)
             processed_img_names = img_list
         else:
-            self.log.info("Extract layers to flatten image.")
-            img_list = self._save_flatten_image(staging_dir, layers)
+            self.log.debug("Extract layers to flatten image.")
+            img_file = self._save_flatten_image(staging_dir, layers)
 
             repre_skeleton.update({
-                "files": img_list,
+                "files": img_file,
             })
-            instance.data["representations"].append(repre_skeleton)
-            processed_img_names = [img_list]
+            processed_img_names = [img_file]
 
-        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
+        instance.data["representations"].append(repre_skeleton)
+
+        ffmpeg_args = get_ffmpeg_tool_args("ffmpeg")
 
         instance.data["stagingDir"] = staging_dir
@@ -94,16 +102,57 @@
         source_files_pattern = self._check_and_resize(processed_img_names,
                                                       source_files_pattern,
                                                       staging_dir)
-        self._generate_thumbnail(ffmpeg_path, instance, source_files_pattern,
-                                 staging_dir)
+        self._generate_thumbnail(
+            list(ffmpeg_args),
+            instance,
+            source_files_pattern,
+            staging_dir)
 
         no_of_frames = len(processed_img_names)
         if no_of_frames > 1:
-            self._generate_mov(ffmpeg_path, instance, fps, no_of_frames,
-                               source_files_pattern, staging_dir)
+            self._generate_mov(
+                list(ffmpeg_args),
+                instance,
+                fps,
+                no_of_frames,
+                source_files_pattern,
+                staging_dir)
 
         self.log.info(f"Extracted {instance} to {staging_dir}")
 
+    def _prepare_file_for_image_family(self, img_file, instance, staging_dir):
+        """Converts existing file for image family to .jpg
+
+        Image instance could have its own separate review (instance per layer
+        for example). This uses extracted file instead of extracting again.
+        Args:
+            img_file (str): name of output file (with 0000 value for ffmpeg
+                later)
+            instance:
+            staging_dir (str): temporary folder where extracted file is located
+        """
+        repre_file = instance.data["representations"][0]
+        source_file_path = os.path.join(repre_file["stagingDir"],
+                                        repre_file["files"])
+        if not os.path.exists(source_file_path):
+            raise RuntimeError(f"{source_file_path} doesn't exist for "
+                               "review to create from")
+        _, ext = os.path.splitext(repre_file["files"])
+        if ext != ".jpg":
+            im = Image.open(source_file_path)
+            if (im.mode in ('RGBA', 'LA') or (
+                    im.mode == 'P' and 'transparency' in im.info)):
+                # without this it produces messy low quality jpg
+                rgb_im = Image.new("RGBA", (im.width, im.height), "#ffffff")
+                rgb_im.alpha_composite(im)
+                rgb_im.convert("RGB").save(os.path.join(staging_dir, img_file))
+            else:
+                im.save(os.path.join(staging_dir, img_file))
+        else:
+            # handles already .jpg
+            shutil.copy(source_file_path,
+                        os.path.join(staging_dir, img_file))
+
     def _generate_mov(self, ffmpeg_path, instance, fps, no_of_frames,
                       source_files_pattern, staging_dir):
         """Generates .mov to upload to Ftrack.
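`get_ffmpeg_tool_args` above returns the executable together with any mandatory leading arguments as a list, which is why the call sites now pass `list(ffmpeg_args)` (a copy, so the shared prefix is never mutated) instead of a single path string. A rough sketch of that call-site pattern; the ffmpeg flags here are illustrative, not the plugin's exact arguments:

```python
import subprocess


def build_thumbnail_args(ffmpeg_args, source_pattern, thumbnail_path):
    # Copy the tool prefix so repeated calls cannot mutate it,
    # then append the job-specific arguments.
    return list(ffmpeg_args) + [
        "-y",
        "-i", source_pattern,
        "-vf", "scale=300:-1",
        "-frames:v", "1",
        thumbnail_path,
    ]


args = build_thumbnail_args(["ffmpeg"], "review.%04d.jpg", "thumbnail.jpg")
subprocess.run(args, check=True)
```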
@@ -142,8 +191,9 @@ class ExtractReview(publish.Extractor): "tags": self.mov_options['tags'] }) - def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern, - staging_dir): + def _generate_thumbnail( + self, ffmpeg_args, instance, source_files_pattern, staging_dir + ): """Generates scaled down thumbnail and adds it as representation. Args: @@ -157,8 +207,7 @@ class ExtractReview(publish.Extractor): # Generate thumbnail thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") self.log.info(f"Generate thumbnail {thumbnail_path}") - args = [ - ffmpeg_path, + args = ffmpeg_args + [ "-y", "-i", source_files_pattern, "-vf", "scale=300:-1", @@ -211,6 +260,11 @@ class ExtractReview(publish.Extractor): (list) of PSItem """ layers = [] + # creating review for existing 'image' instance + if instance.data["family"] == "image" and instance.data.get("layer"): + layers.append(instance.data["layer"]) + return layers + for image_instance in instance.context: if image_instance.data["family"] != "image": continue diff --git a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py index b9d721dbdb..1a4932fe99 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -28,10 +28,10 @@ class ValidateInstanceAssetRepair(pyblish.api.Action): # Apply pyblish.logic to get the instances for the plug-in instances = pyblish.api.instances_by_plugin(failed, plugin) stub = photoshop.stub() + current_asset_name = get_current_asset_name() for instance in instances: data = stub.read(instance[0]) - - data["asset"] = legacy_io.Session["AVALON_ASSET"] + data["asset"] = current_asset_name stub.imprint(instance[0], data) @@ -55,7 +55,7 @@ class ValidateInstanceAsset(OptionalPyblishPluginMixin, def process(self, instance): instance_asset = instance.data["asset"] - current_asset = legacy_io.Session["AVALON_ASSET"] + current_asset = get_current_asset_name() if instance_asset != current_asset: msg = ( diff --git a/openpype/hosts/resolve/api/__init__.py b/openpype/hosts/resolve/api/__init__.py index 2b4546f8d6..dba275e6c4 100644 --- a/openpype/hosts/resolve/api/__init__.py +++ b/openpype/hosts/resolve/api/__init__.py @@ -6,13 +6,10 @@ from .utils import ( ) from .pipeline import ( - install, - uninstall, + ResolveHost, ls, containerise, update_container, - publish, - launch_workfiles_app, maintained_selection, remove_instance, list_instances @@ -76,14 +73,10 @@ __all__ = [ "bmdvf", # pipeline - "install", - "uninstall", + "ResolveHost", "ls", "containerise", "update_container", - "reload_pipeline", - "publish", - "launch_workfiles_app", "maintained_selection", "remove_instance", "list_instances", diff --git a/openpype/hosts/resolve/api/lib.py b/openpype/hosts/resolve/api/lib.py index eaee3bb9ba..37410c9727 100644 --- a/openpype/hosts/resolve/api/lib.py +++ b/openpype/hosts/resolve/api/lib.py @@ -125,15 +125,19 @@ def get_any_timeline(): return project.GetTimelineByIndex(1) -def get_new_timeline(): +def get_new_timeline(timeline_name: str = None): """Get new timeline object. + Arguments: + timeline_name (str): New timeline name. 
+ Returns: object: resolve.Timeline """ project = get_current_project() media_pool = project.GetMediaPool() - new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name) + new_timeline = media_pool.CreateEmptyTimeline( + timeline_name or self.pype_timeline_name) project.SetCurrentTimeline(new_timeline) return new_timeline @@ -179,53 +183,52 @@ def create_bin(name: str, root: object = None) -> object: return media_pool.GetCurrentFolder() -def create_media_pool_item(fpath: str, - root: object = None) -> object: +def remove_media_pool_item(media_pool_item: object) -> bool: + media_pool = get_current_project().GetMediaPool() + return media_pool.DeleteClips([media_pool_item]) + + +def create_media_pool_item( + files: list, + root: object = None, +) -> object: """ Create media pool item. Args: - fpath (str): absolute path to a file + files (list[str]): list of absolute paths to files root (resolve.Folder)[optional]: root folder / bin object Returns: object: resolve.MediaPoolItem """ # get all variables - media_storage = get_media_storage() media_pool = get_current_project().GetMediaPool() root_bin = root or media_pool.GetRootFolder() + # make sure files list is not empty and first available file exists + filepath = next((f for f in files if os.path.isfile(f)), None) + if not filepath: + raise FileNotFoundError("No file found in input files list") + # try to search in bin if the clip does not exist - existing_mpi = get_media_pool_item(fpath, root_bin) + existing_mpi = get_media_pool_item(filepath, root_bin) if existing_mpi: return existing_mpi - dirname, file = os.path.split(fpath) - _name, ext = os.path.splitext(file) + # add all data in folder to media pool + media_pool_items = media_pool.ImportMedia(files) - # add all data in folder to mediapool - media_pool_items = media_storage.AddItemListToMediaPool( - os.path.normpath(dirname)) - - if not media_pool_items: - return False - - # if any are added then look into them for the right extension - media_pool_item = [mpi for mpi in media_pool_items - if ext in mpi.GetClipProperty("File Path")] - - # return only first found - return media_pool_item.pop() + return media_pool_items.pop() if media_pool_items else False -def get_media_pool_item(fpath, root: object = None) -> object: +def get_media_pool_item(filepath, root: object = None) -> object: """ Return clip if found in folder with use of input file path. Args: - fpath (str): absolute path to a file + filepath (str): absolute path to a file root (resolve.Folder)[optional]: root folder / bin object Returns: @@ -233,7 +236,7 @@ def get_media_pool_item(fpath, root: object = None) -> object: """ media_pool = get_current_project().GetMediaPool() root = root or media_pool.GetRootFolder() - fname = os.path.basename(fpath) + fname = os.path.basename(filepath) for _mpi in root.GetClipList(): _mpi_name = _mpi.GetClipProperty("File Name") @@ -277,7 +280,6 @@ def create_timeline_item(media_pool_item: object, if source_end is not None: clip_data.update({"endFrame": source_end}) - print(clip_data) # add to timeline media_pool.AppendToTimeline([clip_data]) @@ -394,14 +396,22 @@ def get_current_timeline_items( def get_pype_timeline_item_by_name(name: str) -> object: - track_itmes = get_current_timeline_items() - for _ti in track_itmes: - tag_data = get_timeline_item_pype_tag(_ti["clip"]["item"]) - tag_name = tag_data.get("name") + """Get timeline item by name. 
+ + Args: + name (str): name of timeline item + + Returns: + object: resolve.TimelineItem + """ + for _ti_data in get_current_timeline_items(): + _ti_clip = _ti_data["clip"]["item"] + tag_data = get_timeline_item_pype_tag(_ti_clip) + tag_name = tag_data.get("namespace") if not tag_name: continue - if tag_data.get("name") in name: - return _ti + if tag_name in name: + return _ti_clip return None @@ -544,12 +554,11 @@ def set_pype_marker(timeline_item, tag_data): def get_pype_marker(timeline_item): timeline_item_markers = timeline_item.GetMarkers() - for marker_frame in timeline_item_markers: - note = timeline_item_markers[marker_frame]["note"] - color = timeline_item_markers[marker_frame]["color"] - name = timeline_item_markers[marker_frame]["name"] - print(f"_ marker data: {marker_frame} | {name} | {color} | {note}") + for marker_frame, marker in timeline_item_markers.items(): + color = marker["color"] + name = marker["name"] if name == self.pype_marker_name and color == self.pype_marker_color: + note = marker["note"] self.temp_marker_frame = marker_frame return json.loads(note) @@ -618,7 +627,7 @@ def create_compound_clip(clip_data, name, folder): if c.GetName() in name), None) if cct: - print(f"_ cct exists: {cct}") + print(f"Compound clip exists: {cct}") else: # Create empty timeline in current folder and give name: cct = mp.CreateEmptyTimeline(name) @@ -627,7 +636,7 @@ def create_compound_clip(clip_data, name, folder): clips = folder.GetClipList() cct = next((c for c in clips if c.GetName() in name), None) - print(f"_ cct created: {cct}") + print(f"Compound clip created: {cct}") with maintain_current_timeline(cct, tl_origin): # Add input clip to the current timeline: diff --git a/openpype/hosts/resolve/api/menu.py b/openpype/hosts/resolve/api/menu.py index b3717e01ea..34a63eb89f 100644 --- a/openpype/hosts/resolve/api/menu.py +++ b/openpype/hosts/resolve/api/menu.py @@ -5,11 +5,6 @@ from qtpy import QtWidgets, QtCore from openpype.tools.utils import host_tools -from .pipeline import ( - publish, - launch_workfiles_app -) - def load_stylesheet(): path = os.path.join(os.path.dirname(__file__), "menu_style.qss") @@ -113,7 +108,7 @@ class OpenPypeMenu(QtWidgets.QWidget): def on_workfile_clicked(self): print("Clicked Workfile") - launch_workfiles_app() + host_tools.show_workfiles() def on_create_clicked(self): print("Clicked Create") @@ -121,7 +116,7 @@ class OpenPypeMenu(QtWidgets.QWidget): def on_publish_clicked(self): print("Clicked Publish") - publish(None) + host_tools.show_publish(parent=None) def on_load_clicked(self): print("Clicked Load") diff --git a/openpype/hosts/resolve/api/pipeline.py b/openpype/hosts/resolve/api/pipeline.py index 899cb825bb..93dec300fb 100644 --- a/openpype/hosts/resolve/api/pipeline.py +++ b/openpype/hosts/resolve/api/pipeline.py @@ -12,14 +12,24 @@ from openpype.pipeline import ( schema, register_loader_plugin_path, register_creator_plugin_path, - deregister_loader_plugin_path, - deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) -from openpype.tools.utils import host_tools +from openpype.host import ( + HostBase, + IWorkfileHost, + ILoadHost +) from . import lib from .utils import get_resolve_module +from .workio import ( + open_file, + save_file, + file_extensions, + has_unsaved_changes, + work_root, + current_file +) log = Logger.get_logger(__name__) @@ -32,53 +42,56 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create") AVALON_CONTAINERS = ":AVALON_CONTAINERS" -def install(): - """Install resolve-specific functionality of avalon-core. 
+class ResolveHost(HostBase, IWorkfileHost, ILoadHost): + name = "resolve" - This is where you install menus and register families, data - and loaders into resolve. + def install(self): + """Install resolve-specific functionality of avalon-core. - It is called automatically when installing via `api.install(resolve)`. + This is where you install menus and register families, data + and loaders into resolve. - See the Maya equivalent for inspiration on how to implement this. + It is called automatically when installing via `api.install(resolve)`. - """ + See the Maya equivalent for inspiration on how to implement this. - log.info("openpype.hosts.resolve installed") + """ - pyblish.register_host("resolve") - pyblish.register_plugin_path(PUBLISH_PATH) - log.info("Registering DaVinci Resovle plug-ins..") + log.info("openpype.hosts.resolve installed") - register_loader_plugin_path(LOAD_PATH) - register_creator_plugin_path(CREATE_PATH) + pyblish.register_host(self.name) + pyblish.register_plugin_path(PUBLISH_PATH) + print("Registering DaVinci Resolve plug-ins..") - # register callback for switching publishable - pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) - get_resolve_module() + # register callback for switching publishable + pyblish.register_callback("instanceToggled", + on_pyblish_instance_toggled) + get_resolve_module() -def uninstall(): - """Uninstall all that was installed + def open_workfile(self, filepath): + return open_file(filepath) - This is where you undo everything that was done in `install()`. - That means, removing menus, deregistering families and data - and everything. It should be as though `install()` was never run, - because odds are calling this function means the user is interested - in re-installing shortly afterwards. If, for example, he has been - modifying the menu or registered families. 
+ def save_workfile(self, filepath=None): + return save_file(filepath) - """ - pyblish.deregister_host("resolve") - pyblish.deregister_plugin_path(PUBLISH_PATH) - log.info("Deregistering DaVinci Resovle plug-ins..") + def work_root(self, session): + return work_root(session) - deregister_loader_plugin_path(LOAD_PATH) - deregister_creator_plugin_path(CREATE_PATH) + def get_current_workfile(self): + return current_file() - # register callback for switching publishable - pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled) + def workfile_has_unsaved_changes(self): + return has_unsaved_changes() + + def get_workfile_extensions(self): + return file_extensions() + + def get_containers(self): + return ls() def containerise(timeline_item, @@ -114,10 +127,8 @@ def containerise(timeline_item, }) if data: - for k, v in data.items(): - data_imprint.update({k: v}) + data_imprint.update(data) - print("_ data_imprint: {}".format(data_imprint)) lib.set_timeline_item_pype_tag(timeline_item, data_imprint) return timeline_item @@ -206,15 +217,6 @@ def update_container(timeline_item, data=None): return bool(lib.set_timeline_item_pype_tag(timeline_item, container)) -def launch_workfiles_app(*args): - host_tools.show_workfiles() - - -def publish(parent): - """Shorthand to publish from within host""" - return host_tools.show_publish() - - @contextlib.contextmanager def maintained_selection(): """Maintain selection during context diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py index e5846c2fc2..8381f81acb 100644 --- a/openpype/hosts/resolve/api/plugin.py +++ b/openpype/hosts/resolve/api/plugin.py @@ -1,6 +1,5 @@ import re import uuid - import qargparse from qtpy import QtWidgets, QtCore @@ -9,6 +8,7 @@ from openpype.pipeline.context_tools import get_current_project_asset from openpype.pipeline import ( LegacyCreator, LoaderPlugin, + Anatomy ) from . 
import lib @@ -291,17 +291,17 @@ class ClipLoader: active_bin = None data = dict() - def __init__(self, cls, context, **options): + def __init__(self, loader_obj, context, **options): """ Initialize object Arguments: - cls (openpype.pipeline.load.LoaderPlugin): plugin object + loader_obj (openpype.pipeline.load.LoaderPlugin): plugin object context (dict): loader plugin context options (dict)[optional]: possible keys: projectBinPath: "path/to/binItem" """ - self.__dict__.update(cls.__dict__) + self.__dict__.update(loader_obj.__dict__) self.context = context self.active_project = lib.get_current_project() @@ -318,54 +318,54 @@ class ClipLoader: # inject asset data to representation dict self._get_asset_data() - print("__init__ self.data: `{}`".format(self.data)) # add active components to class if self.new_timeline: - if options.get("timeline"): + loader_cls = loader_obj.__class__ + if loader_cls.timeline: # if multiselection is set then use options sequence - self.active_timeline = options["timeline"] + self.active_timeline = loader_cls.timeline else: # create new sequence - self.active_timeline = ( - lib.get_current_timeline() or - lib.get_new_timeline() + self.active_timeline = lib.get_new_timeline( + "{}_{}".format( + self.data["timeline_basename"], + str(uuid.uuid4())[:8] + ) ) + loader_cls.timeline = self.active_timeline + else: self.active_timeline = lib.get_current_timeline() - cls.timeline = self.active_timeline - def _populate_data(self): """ Gets context and convert it to self.data data structure: { "name": "assetName_subsetName_representationName" - "path": "path/to/file/created/by/get_repr..", "binPath": "projectBinPath", } """ # create name - repr = self.context["representation"] - repr_cntx = repr["context"] - asset = str(repr_cntx["asset"]) - subset = str(repr_cntx["subset"]) - representation = str(repr_cntx["representation"]) - self.data["clip_name"] = "_".join([asset, subset, representation]) + representation = self.context["representation"] + representation_context = representation["context"] + asset = str(representation_context["asset"]) + subset = str(representation_context["subset"]) + representation_name = str(representation_context["representation"]) + self.data["clip_name"] = "_".join([ + asset, + subset, + representation_name + ]) self.data["versionData"] = self.context["version"]["data"] - # gets file path - file = self.fname - if not file: - repr_id = repr["_id"] - print( - "Representation id `{}` is failing to load".format(repr_id)) - return None - self.data["path"] = file.replace("\\", "/") + + self.data["timeline_basename"] = "timeline_{}_{}".format( + subset, representation_name) # solve project bin structure path hierarchy = str("/".join(( "Loader", - repr_cntx["hierarchy"].replace("\\", "/"), + representation_context["hierarchy"].replace("\\", "/"), asset ))) @@ -382,25 +382,24 @@ class ClipLoader: asset_name = self.context["representation"]["context"]["asset"] self.data["assetData"] = get_current_project_asset(asset_name)["data"] - def load(self): + def load(self, files): + """Load clip into timeline + + Arguments: + files (list[str]): list of files to load into timeline + """ # create project bin for the media to be imported into self.active_bin = lib.create_bin(self.data["binPath"]) - # create mediaItem in active project bin - # create clip media + handle_start = self.data["versionData"].get("handleStart") or 0 + handle_end = self.data["versionData"].get("handleEnd") or 0 media_pool_item = lib.create_media_pool_item( - self.data["path"], self.active_bin) 
+ files, + self.active_bin + ) _clip_property = media_pool_item.GetClipProperty - # get handles - handle_start = self.data["versionData"].get("handleStart") - handle_end = self.data["versionData"].get("handleEnd") - if handle_start is None: - handle_start = int(self.data["assetData"]["handleStart"]) - if handle_end is None: - handle_end = int(self.data["assetData"]["handleEnd"]) - source_in = int(_clip_property("Start")) source_out = int(_clip_property("End")) @@ -412,8 +411,6 @@ class ClipLoader: if self.with_handles: source_in -= handle_start source_out += handle_end - handle_start = 0 - handle_end = 0 # make track item from source in bin as item timeline_item = lib.create_timeline_item( @@ -422,24 +419,18 @@ class ClipLoader: print("Loading clips: `{}`".format(self.data["clip_name"])) return timeline_item - def update(self, timeline_item): + def update(self, timeline_item, files): # create project bin for the media to be imported into self.active_bin = lib.create_bin(self.data["binPath"]) # create mediaItem in active project bin # create clip media media_pool_item = lib.create_media_pool_item( - self.data["path"], self.active_bin) + files, + self.active_bin + ) _clip_property = media_pool_item.GetClipProperty - # get handles - handle_start = self.data["versionData"].get("handleStart") - handle_end = self.data["versionData"].get("handleEnd") - if handle_start is None: - handle_start = int(self.data["assetData"]["handleStart"]) - if handle_end is None: - handle_end = int(self.data["assetData"]["handleEnd"]) - source_in = int(_clip_property("Start")) source_out = int(_clip_property("End")) @@ -658,8 +649,6 @@ class PublishClip: # define ui inputs if non gui mode was used self.shot_num = self.ti_index - print( - "____ self.shot_num: {}".format(self.shot_num)) # ui_inputs data or default values if gui was not used self.rename = self.ui_inputs.get( @@ -838,3 +827,12 @@ class PublishClip: for key in par_split: parent = self._convert_to_entity(key) self.parents.append(parent) + + +def get_representation_files(representation): + anatomy = Anatomy() + files = [] + for file_data in representation["files"]: + path = anatomy.fill_root(file_data["path"]) + files.append(path) + return files diff --git a/openpype/hosts/resolve/api/utils.py b/openpype/hosts/resolve/api/utils.py index 871b3af38d..851851a3b3 100644 --- a/openpype/hosts/resolve/api/utils.py +++ b/openpype/hosts/resolve/api/utils.py @@ -17,7 +17,7 @@ def get_resolve_module(): # dont run if already loaded if api.bmdvr: log.info(("resolve module is assigned to " - f"`pype.hosts.resolve.api.bmdvr`: {api.bmdvr}")) + f"`openpype.hosts.resolve.api.bmdvr`: {api.bmdvr}")) return api.bmdvr try: """ @@ -41,6 +41,10 @@ def get_resolve_module(): ) elif sys.platform.startswith("linux"): expected_path = "/opt/resolve/libs/Fusion/Modules" + else: + raise NotImplementedError( + "Unsupported platform: {}".format(sys.platform) + ) # check if the default path has it... 
print(("Unable to find module DaVinciResolveScript from " @@ -74,6 +78,6 @@ def get_resolve_module(): api.bmdvr = bmdvr api.bmdvf = bmdvf log.info(("Assigning resolve module to " - f"`pype.hosts.resolve.api.bmdvr`: {api.bmdvr}")) + f"`openpype.hosts.resolve.api.bmdvr`: {api.bmdvr}")) log.info(("Assigning resolve module to " - f"`pype.hosts.resolve.api.bmdvf`: {api.bmdvf}")) + f"`openpype.hosts.resolve.api.bmdvf`: {api.bmdvf}")) diff --git a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py index bc03baad8d..73f5ac75b1 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class PreLaunchResolveLastWorkfile(PreLaunchHook): @@ -9,7 +9,8 @@ class PreLaunchResolveLastWorkfile(PreLaunchHook): workfile. This property is set explicitly in Launcher. """ order = 10 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hosts/resolve/hooks/pre_resolve_setup.py b/openpype/hosts/resolve/hooks/pre_resolve_setup.py index 3fd39d665c..326f37dffc 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_setup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_setup.py @@ -1,7 +1,7 @@ import os from pathlib import Path import platform -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.resolve.utils import setup @@ -30,7 +30,8 @@ class PreLaunchResolveSetup(PreLaunchHook): """ - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): current_platform = platform.system().lower() diff --git a/openpype/hosts/resolve/hooks/pre_resolve_startup.py b/openpype/hosts/resolve/hooks/pre_resolve_startup.py index 599e0c0008..6dbfd09a37 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_startup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_startup.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes import openpype.hosts.resolve @@ -9,7 +9,8 @@ class PreLaunchResolveStartup(PreLaunchHook): """ order = 11 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): # Set the openpype prelaunch startup script path for easy access diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py index 05bfb003d6..d3f83c7f24 100644 --- a/openpype/hosts/resolve/plugins/load/load_clip.py +++ b/openpype/hosts/resolve/plugins/load/load_clip.py @@ -1,13 +1,7 @@ -from copy import deepcopy - -from openpype.client import ( - get_version_by_id, - get_last_version_by_subset_id, -) -# from openpype.hosts import resolve +from openpype.client import get_last_version_by_subset_id from openpype.pipeline import ( - get_representation_path, - legacy_io, + get_representation_context, + get_current_project_name ) from openpype.hosts.resolve.api import lib, plugin from openpype.hosts.resolve.api.pipeline import ( @@ -48,47 +42,17 @@ class LoadClip(plugin.TimelineItemLoader): def load(self, context, name, namespace, options): - # in case loader uses multiselection - if self.timeline: - options.update({ - "timeline": self.timeline, - }) - # load clip 
to timeline and get main variables + files = plugin.get_representation_files(context["representation"]) + timeline_item = plugin.ClipLoader( - self, context, **options).load() + self, context, **options).load(files) namespace = namespace or timeline_item.GetName() - version = context['version'] - version_data = version.get("data", {}) - version_name = version.get("name", None) - colorspace = version_data.get("colorspace", None) - object_name = "{}_{}".format(name, namespace) - - # add additional metadata from the version to imprint Avalon knob - add_keys = [ - "frameStart", "frameEnd", "source", "author", - "fps", "handleStart", "handleEnd" - ] - - # move all version data keys to tag data - data_imprint = {} - for key in add_keys: - data_imprint.update({ - key: version_data.get(key, str(None)) - }) - - # add variables related to version context - data_imprint.update({ - "version": version_name, - "colorspace": colorspace, - "objectName": object_name - }) # update color of clip regarding the version order - self.set_item_color(timeline_item, version) - - self.log.info("Loader done: `{}`".format(name)) + self.set_item_color(timeline_item, version=context["version"]) + data_imprint = self.get_tag_data(context, name, namespace) return containerise( timeline_item, name, namespace, context, @@ -102,57 +66,65 @@ class LoadClip(plugin.TimelineItemLoader): """ Updating previously loaded clips """ - # load clip to timeline and get main variables - context = deepcopy(representation["context"]) - context.update({"representation": representation}) + context = get_representation_context(representation) name = container['name'] namespace = container['namespace'] - timeline_item_data = lib.get_pype_timeline_item_by_name(namespace) - timeline_item = timeline_item_data["clip"]["item"] - project_name = legacy_io.active_project() - version = get_version_by_id(project_name, representation["parent"]) + timeline_item = container["_timeline_item"] + + media_pool_item = timeline_item.GetMediaPoolItem() + + files = plugin.get_representation_files(representation) + + loader = plugin.ClipLoader(self, context) + timeline_item = loader.update(timeline_item, files) + + # update color of clip regarding the version order + self.set_item_color(timeline_item, version=context["version"]) + + # if original media pool item has no remaining usages left + # remove it from the media pool + if int(media_pool_item.GetClipProperty("Usage")) == 0: + lib.remove_media_pool_item(media_pool_item) + + data_imprint = self.get_tag_data(context, name, namespace) + return update_container(timeline_item, data_imprint) + + def get_tag_data(self, context, name, namespace): + """Return data to be imprinted on the timeline item marker""" + + representation = context["representation"] + version = context['version'] version_data = version.get("data", {}) version_name = version.get("name", None) colorspace = version_data.get("colorspace", None) object_name = "{}_{}".format(name, namespace) - self.fname = get_representation_path(representation) - context["version"] = {"data": version_data} - - loader = plugin.ClipLoader(self, context) - timeline_item = loader.update(timeline_item) # add additional metadata from the version to imprint Avalon knob - add_keys = [ + # move all version data keys to tag data + add_version_data_keys = [ "frameStart", "frameEnd", "source", "author", "fps", "handleStart", "handleEnd" ] - - # move all version data keys to tag data - data_imprint = {} - for key in add_keys: - data_imprint.update({ - key: version_data.get(key, 
str(None)) - }) + data = { + key: version_data.get(key, "None") for key in add_version_data_keys + } # add variables related to version context - data_imprint.update({ + data.update({ "representation": str(representation["_id"]), "version": version_name, "colorspace": colorspace, "objectName": object_name }) - - # update color of clip regarding the version order - self.set_item_color(timeline_item, version) - - return update_container(timeline_item, data_imprint) + return data @classmethod def set_item_color(cls, timeline_item, version): + """Color timeline item based on whether it is outdated or latest""" # define version name version_name = version.get("name", None) # get all versions in list - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version["parent"], @@ -168,3 +140,28 @@ class LoadClip(plugin.TimelineItemLoader): timeline_item.SetClipColor(cls.clip_color_last) else: timeline_item.SetClipColor(cls.clip_color) + + def remove(self, container): + timeline_item = container["_timeline_item"] + media_pool_item = timeline_item.GetMediaPoolItem() + timeline = lib.get_current_timeline() + + # DeleteClips function was added in Resolve 18.5+ + # by checking None we can detect whether the + # function exists in Resolve + if timeline.DeleteClips is not None: + timeline.DeleteClips([timeline_item]) + else: + # Resolve versions older than 18.5 can't delete clips via API + # so all we can do is just remove the pype marker to 'untag' it + if lib.get_pype_marker(timeline_item): + # Note: We must call `get_pype_marker` because + # `delete_pype_marker` uses a global variable set by + # `get_pype_marker` to delete the right marker + # TODO: Improve code to avoid the global `temp_marker_frame` + lib.delete_pype_marker(timeline_item) + + # if media pool item has no remaining usages left + # remove it from the media pool + if int(media_pool_item.GetClipProperty("Usage")) == 0: + lib.remove_media_pool_item(media_pool_item) diff --git a/openpype/hosts/resolve/plugins/publish/precollect_workfile.py b/openpype/hosts/resolve/plugins/publish/precollect_workfile.py index 0f94216556..a2f3eaed7a 100644 --- a/openpype/hosts/resolve/plugins/publish/precollect_workfile.py +++ b/openpype/hosts/resolve/plugins/publish/precollect_workfile.py @@ -1,8 +1,8 @@ import pyblish.api from pprint import pformat +from openpype.pipeline import get_current_asset_name from openpype.hosts.resolve import api as rapi -from openpype.pipeline import legacy_io from openpype.hosts.resolve.otio import davinci_export @@ -14,7 +14,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin): def process(self, context): - asset = legacy_io.Session["AVALON_ASSET"] + asset = get_current_asset_name() subset = "workfile" project = rapi.get_current_project() fps = project.GetSetting("timelineFrameRate") diff --git a/openpype/hosts/resolve/startup.py b/openpype/hosts/resolve/startup.py index e807a48f5a..5ac3c99524 100644 --- a/openpype/hosts/resolve/startup.py +++ b/openpype/hosts/resolve/startup.py @@ -27,7 +27,8 @@ def ensure_installed_host(): if host: return host - install_host(openpype.hosts.resolve.api) + host = openpype.hosts.resolve.api.ResolveHost() + install_host(host) return registered_host() @@ -37,10 +38,10 @@ def launch_menu(): openpype.hosts.resolve.api.launch_pype_menu() -def open_file(path): +def open_workfile(path): # Avoid the need to "install" the host host = ensure_installed_host() - host.open_file(path) + host.open_workfile(path) 
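The `open_file` -> `open_workfile` rename above keeps the startup module aligned with the host workfile interface. A hypothetical usage from Resolve's scripting console (the workfile path is illustrative):

    from openpype.hosts.resolve.startup import open_workfile

    # Installs ResolveHost on demand via ensure_installed_host(),
    # then opens the given workfile in the running Resolve session.
    open_workfile("/projects/demo/work/sh010/sh010_compositing_v001.drp")
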
def main(): @@ -49,7 +50,7 @@ def main(): if workfile_path and os.path.exists(workfile_path): log.info(f"Opening last workfile: {workfile_path}") - open_file(workfile_path) + open_workfile(workfile_path) else: log.info("No last workfile set to open. Skipping..") diff --git a/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py b/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py index 1087a7b7a0..4f14927074 100644 --- a/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py +++ b/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py @@ -8,12 +8,13 @@ log = Logger.get_logger(__name__) def main(env): - import openpype.hosts.resolve.api as bmdvr + from openpype.hosts.resolve.api import ResolveHost, launch_pype_menu # activate resolve from openpype - install_host(bmdvr) + host = ResolveHost() + install_host(host) - bmdvr.launch_pype_menu() + launch_pype_menu() if __name__ == "__main__": diff --git a/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py b/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py deleted file mode 100644 index 92f2e43a72..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py +++ /dev/null @@ -1,49 +0,0 @@ -#! python3 -import os -import sys - -import opentimelineio as otio - -from openpype.pipeline import install_host - -import openpype.hosts.resolve.api as bmdvr -from openpype.hosts.resolve.api.testing_utils import TestGUI -from openpype.hosts.resolve.otio import davinci_export as otio_export - - -class ThisTestGUI(TestGUI): - extensions = [".exr", ".jpg", ".mov", ".png", ".mp4", ".ari", ".arx"] - - def __init__(self): - super(ThisTestGUI, self).__init__() - # activate resolve from openpype - install_host(bmdvr) - - def _open_dir_button_pressed(self, event): - # selected_path = self.fu.RequestFile(os.path.expanduser("~")) - selected_path = self.fu.RequestDir(os.path.expanduser("~")) - self._widgets["inputTestSourcesFolder"].Text = selected_path - - # main function - def process(self, event): - self.input_dir_path = self._widgets["inputTestSourcesFolder"].Text - project = bmdvr.get_current_project() - otio_timeline = otio_export.create_otio_timeline(project) - print(f"_ otio_timeline: `{otio_timeline}`") - edl_path = os.path.join(self.input_dir_path, "this_file_name.edl") - print(f"_ edl_path: `{edl_path}`") - # xml_string = otio_adapters.fcpx_xml.write_to_string(otio_timeline) - # print(f"_ xml_string: `{xml_string}`") - otio.adapters.write_to_file( - otio_timeline, edl_path, adapter_name="cmx_3600") - project = bmdvr.get_current_project() - media_pool = project.GetMediaPool() - timeline = media_pool.ImportTimelineFromFile(edl_path) - # at the end close the window - self._close_window(None) - - -if __name__ == "__main__": - test_gui = ThisTestGUI() - test_gui.show_gui() - sys.exit(not bool(True)) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py b/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py deleted file mode 100644 index 91a361ec08..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py +++ /dev/null @@ -1,73 +0,0 @@ -#! 
python3 -import os -import sys - -import clique - -from openpype.pipeline import install_host -from openpype.hosts.resolve.api.testing_utils import TestGUI -import openpype.hosts.resolve.api as bmdvr -from openpype.hosts.resolve.api.lib import ( - create_media_pool_item, - create_timeline_item, -) - - -class ThisTestGUI(TestGUI): - extensions = [".exr", ".jpg", ".mov", ".png", ".mp4", ".ari", ".arx"] - - def __init__(self): - super(ThisTestGUI, self).__init__() - # activate resolve from openpype - install_host(bmdvr) - - def _open_dir_button_pressed(self, event): - # selected_path = self.fu.RequestFile(os.path.expanduser("~")) - selected_path = self.fu.RequestDir(os.path.expanduser("~")) - self._widgets["inputTestSourcesFolder"].Text = selected_path - - # main function - def process(self, event): - self.input_dir_path = self._widgets["inputTestSourcesFolder"].Text - - self.dir_processing(self.input_dir_path) - - # at the end close the window - self._close_window(None) - - def dir_processing(self, dir_path): - collections, reminders = clique.assemble(os.listdir(dir_path)) - - # process reminders - for _rem in reminders: - _rem_path = os.path.join(dir_path, _rem) - - # go deeper if directory - if os.path.isdir(_rem_path): - print(_rem_path) - self.dir_processing(_rem_path) - else: - self.file_processing(_rem_path) - - # process collections - for _coll in collections: - _coll_path = os.path.join(dir_path, list(_coll).pop()) - self.file_processing(_coll_path) - - def file_processing(self, fpath): - print(f"_ fpath: `{fpath}`") - _base, ext = os.path.splitext(fpath) - # skip if unwanted extension - if ext not in self.extensions: - return - media_pool_item = create_media_pool_item(fpath) - print(media_pool_item) - - track_item = create_timeline_item(media_pool_item) - print(track_item) - - -if __name__ == "__main__": - test_gui = ThisTestGUI() - test_gui.show_gui() - sys.exit(not bool(True)) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py b/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py deleted file mode 100644 index 2e83188bde..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py +++ /dev/null @@ -1,24 +0,0 @@ -#! python3 -from openpype.pipeline import install_host -from openpype.hosts.resolve import api as bmdvr -from openpype.hosts.resolve.api.lib import ( - create_media_pool_item, - create_timeline_item, -) - - -def file_processing(fpath): - media_pool_item = create_media_pool_item(fpath) - print(media_pool_item) - - track_item = create_timeline_item(media_pool_item) - print(track_item) - - -if __name__ == "__main__": - path = "C:/CODE/__openpype_projects/jtest03dev/shots/sq01/mainsq01sh030/publish/plate/plateMain/v006/jt3d_mainsq01sh030_plateMain_v006.0996.exr" - - # activate resolve from openpype - install_host(bmdvr) - - file_processing(path) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py b/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py deleted file mode 100644 index b64714ab16..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py +++ /dev/null @@ -1,5 +0,0 @@ -#! 
python3 -from openpype.hosts.resolve.startup import main - -if __name__ == "__main__": - main() diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py deleted file mode 100644 index 8270496f64..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py +++ /dev/null @@ -1,13 +0,0 @@ -#! python3 -from openpype.pipeline import install_host -from openpype.hosts.resolve import api as bmdvr -from openpype.hosts.resolve.api.lib import get_current_project - -if __name__ == "__main__": - install_host(bmdvr) - project = get_current_project() - timeline_count = project.GetTimelineCount() - print(f"Timeline count: {timeline_count}") - timeline = project.GetTimelineByIndex(timeline_count) - print(f"Timeline name: {timeline.GetName()}") - print(timeline.GetTrackCount("video")) diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py index 9f02d65d00..a2afd160fa 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py @@ -1,8 +1,9 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_streams, path_to_subprocess_arg, run_subprocess, @@ -48,8 +49,6 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin): else: first_filename = files - staging_dir = None - # Convert to jpeg if not yet full_input_path = os.path.join( thumbnail_repre["stagingDir"], first_filename @@ -62,12 +61,12 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin): instance.context.data["cleanupFullPaths"].append(full_thumbnail_path) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_executable_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} jpeg_items = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(ffmpeg_executable_args), # override file if already exists "-y" ] diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py index a7ae02a2eb..fd2d4a9f36 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py @@ -1,7 +1,5 @@ -import os import pyblish.api -from openpype.settings import get_project_settings from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -21,27 +19,30 @@ class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin): optional = True def process(self, instance): - if instance.data["family"] == "workfile": - ext = instance.data["representations"][0]["ext"] - main_workfile_extensions = self.get_main_workfile_extensions() - if ext not in main_workfile_extensions: - self.log.warning("Only secondary workfile present!") - return + if instance.data["family"] != "workfile": + return - if not instance.data.get("resources"): - msg = "No secondary workfile present for workfile '{}'". 
\ - format(instance.data["name"]) - ext = main_workfile_extensions[0] - formatting_data = {"file_name": instance.data["name"], - "extension": ext} + ext = instance.data["representations"][0]["ext"] + main_workfile_extensions = self.get_main_workfile_extensions( + instance + ) + if ext not in main_workfile_extensions: + self.log.warning("Only secondary workfile present!") + return - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data - ) + if not instance.data.get("resources"): + msg = "No secondary workfile present for workfile '{}'". \ + format(instance.data["name"]) + ext = main_workfile_extensions[0] + formatting_data = {"file_name": instance.data["name"], + "extension": ext} + + raise PublishXmlValidationError( + self, msg, formatting_data=formatting_data) @staticmethod - def get_main_workfile_extensions(): - project_settings = get_project_settings(os.environ["AVALON_PROJECT"]) + def get_main_workfile_extensions(instance): + project_settings = instance.context.data["project_settings"] try: extensions = (project_settings["standalonepublisher"] diff --git a/openpype/hosts/substancepainter/plugins/load/load_mesh.py b/openpype/hosts/substancepainter/plugins/load/load_mesh.py index 822095641d..57db869a11 100644 --- a/openpype/hosts/substancepainter/plugins/load/load_mesh.py +++ b/openpype/hosts/substancepainter/plugins/load/load_mesh.py @@ -47,7 +47,8 @@ class SubstanceLoadProjectMesh(load.LoaderPlugin): if not substance_painter.project.is_open(): # Allow to 'initialize' a new project - result = prompt_new_file_with_mesh(mesh_filepath=self.fname) + path = self.filepath_from_context(context) + result = prompt_new_file_with_mesh(mesh_filepath=path) if not result: self.log.info("User cancelled new project prompt.") return @@ -65,7 +66,7 @@ class SubstanceLoadProjectMesh(load.LoaderPlugin): else: raise LoadError("Reload of mesh failed") - path = self.fname + path = self.filepath_from_context(context) substance_painter.project.reload_mesh(path, settings, on_mesh_reload) diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py index d11abd1019..316f72509e 100644 --- a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py +++ b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py @@ -114,7 +114,7 @@ class CollectTextureSet(pyblish.api.InstancePlugin): # Clone the instance image_instance = context.create_instance(image_subset) image_instance[:] = instance[:] - image_instance.data.update(copy.deepcopy(instance.data)) + image_instance.data.update(copy.deepcopy(dict(instance.data))) image_instance.data["name"] = image_subset image_instance.data["label"] = image_subset image_instance.data["subset"] = image_subset diff --git a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py index 1bed07f785..3454b6e135 100644 --- a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py +++ b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py @@ -36,7 +36,7 @@ class BatchMovieCreator(TrayPublishCreator): # Position batch creator after simple creators order = 110 - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): creator_settings = ( project_settings["traypublisher"]["create"]["BatchMovieCreator"] ) diff --git 
a/openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py b/openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py new file mode 100644 index 0000000000..e8a2cae16c --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py @@ -0,0 +1,37 @@ +import pyblish.api + + +class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin): + """Collect Frame Data From AssetEntity found in context + + Frame range data will only be collected if the keys + are not yet collected for the instance. + """ + + order = pyblish.api.CollectorOrder + 0.491 + label = "Collect Missing Frame Data From Asset" + families = ["plate", "pointcache", + "vdbcache", "online", + "render"] + hosts = ["traypublisher"] + + def process(self, instance): + missing_keys = [] + for key in ( + "fps", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd" + ): + if key not in instance.data: + missing_keys.append(key) + keys_set = [] + for key in missing_keys: + asset_data = instance.data["assetEntity"]["data"] + if key in asset_data: + instance.data[key] = asset_data[key] + keys_set.append(key) + if keys_set: + self.log.debug(f"Frame range data {keys_set} " + "has been collected from asset entity.") diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py b/openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py new file mode 100644 index 0000000000..db70d4fe0a --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py @@ -0,0 +1,70 @@ +import pyblish.api +import clique + +from openpype.pipeline import OptionalPyblishPluginMixin + + +class CollectSequenceFrameData( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): + """Collect Original Sequence Frame Data + + If the representation includes files with frame numbers, + then set `frameStart` and `frameEnd` for the instance to the + start and end frame respectively + """ + + order = pyblish.api.CollectorOrder + 0.4905 + label = "Collect Original Sequence Frame Data" + families = ["plate", "pointcache", + "vdbcache", "online", + "render"] + hosts = ["traypublisher"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + frame_data = self.get_frame_data_from_repre_sequence(instance) + + if not frame_data: + # if no dict data skip collecting the frame range data + return + + for key, value in frame_data.items(): + instance.data[key] = value + self.log.debug(f"Collected Frame range data '{key}':{value} ") + + + def get_frame_data_from_repre_sequence(self, instance): + repres = instance.data.get("representations") + asset_data = instance.data["assetEntity"]["data"] + + if repres: + first_repre = repres[0] + if "ext" not in first_repre: + self.log.warning("Cannot find file extension" + " in representation data") + return + + files = first_repre["files"] + collections, _ = clique.assemble(files) + if not collections: + # No sequences detected and we can't retrieve + # frame range + self.log.debug( + "No sequences detected in the representation data." 
+ " Skipping collecting frame range data.") + return + collection = collections[0] + repres_frames = list(collection.indexes) + + return { + "frameStart": repres_frames[0], + "frameEnd": repres_frames[-1], + "handleStart": 0, + "handleEnd": 0, + "fps": asset_data["fps"] + } diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py index b962ea464a..09de2d8db2 100644 --- a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py @@ -15,7 +15,7 @@ class ValidateFrameRange(OptionalPyblishPluginMixin, label = "Validate Frame Range" hosts = ["traypublisher"] - families = ["render"] + families = ["render", "plate"] order = ValidateContentsOrder optional = True diff --git a/openpype/hosts/tvpaint/api/communication_server.py b/openpype/hosts/tvpaint/api/communication_server.py index 6f76c25e0c..d67ef8f798 100644 --- a/openpype/hosts/tvpaint/api/communication_server.py +++ b/openpype/hosts/tvpaint/api/communication_server.py @@ -11,7 +11,7 @@ import filecmp import tempfile import threading import shutil -from queue import Queue + from contextlib import closing from aiohttp import web @@ -319,19 +319,19 @@ class QtTVPaintRpc(BaseTVPaintRpc): async def workfiles_tool(self): log.info("Triggering Workfile tool") item = MainThreadItem(self.tools_helper.show_workfiles) - self._execute_in_main_thread(item) + self._execute_in_main_thread(item, wait=False) return async def loader_tool(self): log.info("Triggering Loader tool") item = MainThreadItem(self.tools_helper.show_loader) - self._execute_in_main_thread(item) + self._execute_in_main_thread(item, wait=False) return async def publish_tool(self): log.info("Triggering Publish tool") item = MainThreadItem(self.tools_helper.show_publisher_tool) - self._execute_in_main_thread(item) + self._execute_in_main_thread(item, wait=False) return async def scene_inventory_tool(self): @@ -350,13 +350,13 @@ class QtTVPaintRpc(BaseTVPaintRpc): async def library_loader_tool(self): log.info("Triggering Library loader tool") item = MainThreadItem(self.tools_helper.show_library_loader) - self._execute_in_main_thread(item) + self._execute_in_main_thread(item, wait=False) return async def experimental_tools(self): log.info("Triggering Library loader tool") item = MainThreadItem(self.tools_helper.show_experimental_tools_dialog) - self._execute_in_main_thread(item) + self._execute_in_main_thread(item, wait=False) return async def _async_execute_in_main_thread(self, item, **kwargs): @@ -867,7 +867,7 @@ class QtCommunicator(BaseCommunicator): def __init__(self, qt_app): super().__init__() - self.callback_queue = Queue() + self.callback_queue = collections.deque() self.qt_app = qt_app def _create_routes(self): @@ -880,14 +880,14 @@ class QtCommunicator(BaseCommunicator): def execute_in_main_thread(self, main_thread_item, wait=True): """Add `MainThreadItem` to callback queue and wait for result.""" - self.callback_queue.put(main_thread_item) + self.callback_queue.append(main_thread_item) if wait: return main_thread_item.wait() return async def async_execute_in_main_thread(self, main_thread_item, wait=True): """Add `MainThreadItem` to callback queue and wait for result.""" - self.callback_queue.put(main_thread_item) + self.callback_queue.append(main_thread_item) if wait: return await main_thread_item.async_wait() @@ -904,9 +904,9 @@ class QtCommunicator(BaseCommunicator): self._exit() return None - if 
self.callback_queue.empty(): - return None - return self.callback_queue.get() + if self.callback_queue: + return self.callback_queue.popleft() + return None def _on_client_connect(self): super()._on_client_connect() diff --git a/openpype/hosts/tvpaint/api/lib.py b/openpype/hosts/tvpaint/api/lib.py index 49846d7f29..f8b8c29cdb 100644 --- a/openpype/hosts/tvpaint/api/lib.py +++ b/openpype/hosts/tvpaint/api/lib.py @@ -233,7 +233,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): Pre and Post behaviors is enumerator of possible values: - "none" - - "repeat" / "loop" + - "repeat" - "pingpong" - "hold" @@ -242,7 +242,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): { 0: { "pre": "none", - "post": "loop" + "post": "repeat" } } ``` diff --git a/openpype/hosts/tvpaint/hooks/pre_launch_args.py b/openpype/hosts/tvpaint/hooks/pre_launch_args.py index c31403437a..a1c946b60b 100644 --- a/openpype/hosts/tvpaint/hooks/pre_launch_args.py +++ b/openpype/hosts/tvpaint/hooks/pre_launch_args.py @@ -1,7 +1,5 @@ -from openpype.lib import ( - PreLaunchHook, - get_openpype_execute_args -) +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import PreLaunchHook, LaunchTypes class TvpaintPrelaunchHook(PreLaunchHook): @@ -13,7 +11,8 @@ class TvpaintPrelaunchHook(PreLaunchHook): Existence of last workfile is checked. If workfile does not exists tries to copy templated workfile from predefined path. """ - app_groups = ["tvpaint"] + app_groups = {"tvpaint"} + launch_types = {LaunchTypes.local} def execute(self): # Pop tvpaint executable diff --git a/openpype/hosts/tvpaint/lib.py b/openpype/hosts/tvpaint/lib.py index 95653b6ecb..97cf8d3633 100644 --- a/openpype/hosts/tvpaint/lib.py +++ b/openpype/hosts/tvpaint/lib.py @@ -77,13 +77,15 @@ def _calculate_pre_behavior_copy( for frame_idx in range(range_start, layer_frame_start): output_idx_by_frame_idx[frame_idx] = first_exposure_frame - elif pre_beh in ("loop", "repeat"): + elif pre_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in reversed(range(range_start, layer_frame_start)): eq_frame_idx_offset = ( (layer_frame_end - frame_idx) % frame_count ) - eq_frame_idx = layer_frame_end - eq_frame_idx_offset + eq_frame_idx = layer_frame_start + ( + layer_frame_end - eq_frame_idx_offset + ) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif pre_beh == "pingpong": @@ -139,10 +141,10 @@ def _calculate_post_behavior_copy( for frame_idx in range(layer_frame_end + 1, range_end + 1): output_idx_by_frame_idx[frame_idx] = last_exposure_frame - elif post_beh in ("loop", "repeat"): + elif post_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in range(layer_frame_end + 1, range_end + 1): - eq_frame_idx = frame_idx % frame_count + eq_frame_idx = layer_frame_start + (frame_idx % frame_count) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif post_beh == "pingpong": diff --git a/openpype/hosts/tvpaint/plugins/create/create_render.py b/openpype/hosts/tvpaint/plugins/create/create_render.py index 2369c7329f..b7a7c208d9 100644 --- a/openpype/hosts/tvpaint/plugins/create/create_render.py +++ b/openpype/hosts/tvpaint/plugins/create/create_render.py @@ -139,7 +139,7 @@ class CreateRenderlayer(TVPaintCreator): # - Mark by default instance for review mark_for_review = True - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["tvpaint"]["create"]["create_render_layer"] ) @@ -387,7 
+387,7 @@ class CreateRenderPass(TVPaintCreator): # Settings mark_for_review = True - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["tvpaint"]["create"]["create_render_pass"] ) @@ -690,7 +690,7 @@ class TVPaintAutoDetectRenderCreator(TVPaintCreator): group_idx_offset = 10 group_idx_padding = 3 - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings ["tvpaint"] @@ -1029,7 +1029,7 @@ class TVPaintSceneRenderCreator(TVPaintAutoCreator): mark_for_review = True active_on_create = False - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["tvpaint"]["create"]["create_render_scene"] ) diff --git a/openpype/hosts/tvpaint/plugins/create/create_review.py b/openpype/hosts/tvpaint/plugins/create/create_review.py index 886dae7c39..7bb7510a8e 100644 --- a/openpype/hosts/tvpaint/plugins/create/create_review.py +++ b/openpype/hosts/tvpaint/plugins/create/create_review.py @@ -12,7 +12,7 @@ class TVPaintReviewCreator(TVPaintAutoCreator): # Settings active_on_create = True - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["tvpaint"]["create"]["create_review"] ) diff --git a/openpype/hosts/tvpaint/plugins/create/create_workfile.py b/openpype/hosts/tvpaint/plugins/create/create_workfile.py index 41347576d5..c3982c0eca 100644 --- a/openpype/hosts/tvpaint/plugins/create/create_workfile.py +++ b/openpype/hosts/tvpaint/plugins/create/create_workfile.py @@ -9,7 +9,7 @@ class TVPaintWorkfileCreator(TVPaintAutoCreator): label = "Workfile" icon = "fa.file-o" - def apply_settings(self, project_settings, system_settings): + def apply_settings(self, project_settings): plugin_settings = ( project_settings["tvpaint"]["create"]["create_workfile"] ) diff --git a/openpype/hosts/tvpaint/plugins/load/load_image.py b/openpype/hosts/tvpaint/plugins/load/load_image.py index 5283d04355..a400738019 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_image.py @@ -77,8 +77,9 @@ class ImportImage(plugin.Loader): ) # Fill import script with filename and layer name # - filename mus not contain backwards slashes + path = self.filepath_from_context(context).replace("\\", "/") george_script = self.import_script.format( - self.fname.replace("\\", "/"), + path, layer_name, load_options_str ) diff --git a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py index 7f7a68cc41..3707ef97aa 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py @@ -86,10 +86,12 @@ class LoadImage(plugin.Loader): subset_name = context["subset"]["name"] layer_name = self.get_unique_layer_name(asset_name, subset_name) + path = self.filepath_from_context(context) + # Fill import script with filename and layer name # - filename mus not contain backwards slashes george_script = self.import_script.format( - self.fname.replace("\\", "/"), + path.replace("\\", "/"), layer_name, load_options_str ) @@ -169,7 +171,7 @@ class LoadImage(plugin.Loader): george_script = "\n".join(george_script_lines) execute_george_through_file(george_script) - def _remove_container(self, container, members=None): + def 
_remove_container(self, container): if not container: return representation = container["representation"] @@ -271,9 +273,6 @@ class LoadImage(plugin.Loader): # Remove old layers self._remove_layers(layer_ids=layer_ids_to_remove) - # Change `fname` to new representation - self.fname = self.filepath_from_context(context) - name = container["name"] namespace = container["namespace"] new_container = self.load(context, name, namespace, {}) diff --git a/openpype/hosts/tvpaint/plugins/load/load_sound.py b/openpype/hosts/tvpaint/plugins/load/load_sound.py index f312db262a..3003280eef 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_sound.py +++ b/openpype/hosts/tvpaint/plugins/load/load_sound.py @@ -60,9 +60,10 @@ class ImportSound(plugin.Loader): output_filepath = output_file.name.replace("\\", "/") # Prepare george script + path = self.filepath_from_context(context).replace("\\", "/") import_script = "\n".join(self.import_script_lines) george_script = import_script.format( - self.fname.replace("\\", "/"), + path, output_filepath ) self.log.info("*** George script:\n{}\n***".format(george_script)) diff --git a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py index fc7588f56e..169bfdcdd8 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py +++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py @@ -3,7 +3,7 @@ import os from openpype.lib import StringTemplate from openpype.pipeline import ( registered_host, - legacy_io, + get_current_context, Anatomy, ) from openpype.pipeline.workfile import ( @@ -18,6 +18,7 @@ from openpype.hosts.tvpaint.api.lib import ( from openpype.hosts.tvpaint.api.pipeline import ( get_current_workfile_context, ) +from openpype.pipeline.version_start import get_versioning_start class LoadWorkfile(plugin.Loader): @@ -31,18 +32,18 @@ class LoadWorkfile(plugin.Loader): def load(self, context, name, namespace, options): # Load context of current workfile as first thing # - which context and extension has - host = registered_host() - current_file = host.get_current_workfile() - - context = get_current_workfile_context() - - filepath = self.fname.replace("\\", "/") + filepath = self.filepath_from_context(context) + filepath = filepath.replace("\\", "/") if not os.path.exists(filepath): raise FileExistsError( "The loaded file does not exist. Try downloading it first." ) + host = registered_host() + current_file = host.get_current_workfile() + work_context = get_current_workfile_context() + george_script = "tv_LoadProject '\"'\"{}\"'\"'".format( filepath ) @@ -50,14 +51,15 @@ class LoadWorkfile(plugin.Loader): # Save workfile. 
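The TVPaint loaders in this changeset share one migration: the source path is resolved from the context of the current call instead of the loader-level `self.fname` attribute, which previously had to be refreshed by hand on update (see the removed `self.fname = self.filepath_from_context(context)` line above). A condensed sketch of the pattern, assuming the surrounding plugin code:

    # Resolve the path per call; George scripts need forward slashes.
    path = self.filepath_from_context(context)
    path = path.replace("\\", "/")
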
host_name = "tvpaint" - project_name = context.get("project") - asset_name = context.get("asset") - task_name = context.get("task") - # Far cases when there is workfile without context + project_name = work_context.get("project") + asset_name = work_context.get("asset") + task_name = work_context.get("task") + # Far cases when there is workfile without work_context if not asset_name: - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] + context = get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] template_key = get_workfile_template_key_from_context( asset_name, @@ -94,7 +96,13 @@ class LoadWorkfile(plugin.Loader): )[1] if version is None: - version = 1 + version = get_versioning_start( + project_name, + "tvpaint", + task_name=task_name, + task_type=data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py index ab5bbc5e2c..c10fc4de97 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py @@ -9,7 +9,8 @@ import json import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, + ToolNotFoundError, run_subprocess, ) from openpype.pipeline import KnownPublishError @@ -34,11 +35,12 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): if not repres: return - oiio_path = get_oiio_tools_path() - # Raise an exception when oiiotool is not available - # - this can currently happen on MacOS machines - if not os.path.exists(oiio_path): - KnownPublishError( + try: + oiio_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + # Raise an exception when oiiotool is not available + # - this can currently happen on MacOS machines + raise KnownPublishError( "OpenImageIO tool is not available on this machine." ) @@ -64,8 +66,8 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): src_filepaths.add(src_filepath) - args = [ - oiio_path, src_filepath, + args = oiio_args + [ + src_filepath, "--compression", self.exr_compression, # TODO how to define color conversion? 
"--colorconvert", "sRGB", "linear", diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py index 8a610cf388..fd568b2826 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py @@ -6,7 +6,10 @@ from PIL import Image import pyblish.api -from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline.publish import ( + KnownPublishError, + get_publish_instance_families, +) from openpype.hosts.tvpaint.api.lib import ( execute_george, execute_george_through_file, @@ -63,7 +66,6 @@ class ExtractSequence(pyblish.api.Extractor): "ignoreLayersTransparency", False ) - family_lowered = instance.data["family"].lower() mark_in = instance.context.data["sceneMarkIn"] mark_out = instance.context.data["sceneMarkOut"] @@ -76,11 +78,9 @@ class ExtractSequence(pyblish.api.Extractor): # Frame start/end may be stored as float frame_start = int(instance.data["frameStart"]) - frame_end = int(instance.data["frameEnd"]) # Handles are not stored per instance but on Context handle_start = instance.context.data["handleStart"] - handle_end = instance.context.data["handleEnd"] scene_bg_color = instance.context.data["sceneBgColor"] @@ -143,8 +143,9 @@ class ExtractSequence(pyblish.api.Extractor): ) # Fill tags and new families from project settings + instance_families = get_publish_instance_families(instance) tags = [] - if "review" in instance.data["families"]: + if "review" in instance_families: tags.append("review") # Sequence of one frame diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp index 88106bc770..ec45a45123 100644 --- a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp +++ b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp @@ -573,56 +573,6 @@ void FAR PASCAL PI_Close( PIFilter* iFilter ) } -/**************************************************************************************/ -// we have something to do ! - -int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg ) -{ - if( !iArg ) - { - - // If the requester is not open, we open it. - if( Data.mReq == 0) - { - // Create empty requester because menu items are defined with - // `define_menu` callback - DWORD req = TVOpenFilterReqEx( - iFilter, - 185, - 20, - NULL, - NULL, - PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ, - FILTERREQ_NO_TBAR - ); - if( req == 0 ) - { - TVWarning( iFilter, TXT_REQUESTER_ERROR ); - return 0; - } - - - Data.mReq = req; - // This is a very simple requester, so we create it's content right here instead - // of waiting for the PICBREQ_OPEN message... - // Not recommended for more complex requesters. (see the other examples) - - // Sets the title of the requester. - TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER ); - // Request to listen to ticks - TVGrabTicks(iFilter, req, PITICKS_FLAG_ON); - } - else - { - // If it is already open, we just put it on front of all other requesters. - TVReqToFront( iFilter, Data.mReq ); - } - } - - return 1; -} - - int newMenuItemsProcess(PIFilter* iFilter) { // Menu items defined with `define_menu` should be propagated. @@ -702,6 +652,62 @@ int newMenuItemsProcess(PIFilter* iFilter) { return 1; } + +/**************************************************************************************/ +// we have something to do ! 
+ +int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg ) +{ + if( !iArg ) + { + + // If the requester is not open, we open it. + if( Data.mReq == 0) + { + // Create empty requester because menu items are defined with + // `define_menu` callback + DWORD req = TVOpenFilterReqEx( + iFilter, + 185, + 20, + NULL, + NULL, + PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ, + FILTERREQ_NO_TBAR + ); + if( req == 0 ) + { + TVWarning( iFilter, TXT_REQUESTER_ERROR ); + return 0; + } + + Data.mReq = req; + + // This is a very simple requester, so we create it's content right here instead + // of waiting for the PICBREQ_OPEN message... + // Not recommended for more complex requesters. (see the other examples) + + // Sets the title of the requester. + TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER ); + // Request to listen to ticks + TVGrabTicks(iFilter, req, PITICKS_FLAG_ON); + + if ( Data.firstParams == true ) { + Data.firstParams = false; + } else { + newMenuItemsProcess(iFilter); + } + } + else + { + // If it is already open, we just put it on front of all other requesters. + TVReqToFront( iFilter, Data.mReq ); + } + } + + return 1; +} + /**************************************************************************************/ // something happened that needs our attention. // Global variable where current button up data are stored diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll index 7081778bee..9c6e969e24 100644 Binary files a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll differ diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll index 0f2afec245..b573476a21 100644 Binary files a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll differ diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py index ed23950b35..fcc5d98ab6 100644 --- a/openpype/hosts/unreal/addon.py +++ b/openpype/hosts/unreal/addon.py @@ -1,7 +1,6 @@ import os import re from openpype.modules import IHostAddon, OpenPypeModule -from openpype.widgets.message_window import Window UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -13,6 +12,11 @@ class UnrealAddon(OpenPypeModule, IHostAddon): def initialize(self, module_settings): self.enabled = True + def get_global_environments(self): + return { + "AYON_UNREAL_ROOT": UNREAL_ROOT_DIR, + } + def add_implementation_envs(self, env, app): """Modify environments to contain all required for implementation.""" # Set AYON_UNREAL_PLUGIN required for Unreal implementation @@ -21,6 +25,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon): from .lib import get_compatible_integration + from openpype.widgets.message_window import Window + pattern = re.compile(r'^\d+-\d+$') if not pattern.match(app.name): @@ -53,7 +59,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon): # Set default environments if are not set via settings defaults = { - "OPENPYPE_LOG_NO_COLORS": "True" + "OPENPYPE_LOG_NO_COLORS": "True", + "UE_PYTHONPATH": os.environ.get("PYTHONPATH", ""), } for key, value in defaults.items(): if not env.get(key): diff 
--git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index f01609d314..a635bd4cab 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -2,22 +2,25 @@ """Hook to launch Unreal and prepare projects.""" import os import copy +import shutil +import tempfile from pathlib import Path -from openpype.widgets.splash_screen import SplashScreen + from qtpy import QtCore + +from openpype import resources +from openpype.lib.applications import ( + PreLaunchHook, + ApplicationLaunchFailed, + LaunchTypes, +) +from openpype.pipeline.workfile import get_workfile_template_key +import openpype.hosts.unreal.lib as unreal_lib from openpype.hosts.unreal.ue_workers import ( UEProjectGenerationWorker, UEPluginInstallWorker ) - -from openpype import resources -from openpype.lib import ( - PreLaunchHook, - ApplicationLaunchFailed, - ApplicationNotFound, -) -from openpype.pipeline.workfile import get_workfile_template_key -import openpype.hosts.unreal.lib as unreal_lib +from openpype.hosts.unreal.ui import SplashScreen class UnrealPrelaunchHook(PreLaunchHook): @@ -29,6 +32,8 @@ class UnrealPrelaunchHook(PreLaunchHook): shell script. """ + app_groups = {"unreal"} + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -111,6 +116,7 @@ class UnrealPrelaunchHook(PreLaunchHook): ue_project_worker = UEProjectGenerationWorker() ue_project_worker.setup( engine_version, + self.data["project_name"], unreal_project_name, engine_path, project_dir @@ -186,32 +192,58 @@ class UnrealPrelaunchHook(PreLaunchHook): project_path.mkdir(parents=True, exist_ok=True) - # Set "AYON_UNREAL_PLUGIN" to current process environment for - # execution of `create_unreal_project` - - if self.launch_context.env.get("AYON_UNREAL_PLUGIN"): - self.log.info(( - f"{self.signature} using Ayon plugin from " - f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}" - )) - env_key = "AYON_UNREAL_PLUGIN" - if self.launch_context.env.get(env_key): - os.environ[env_key] = self.launch_context.env[env_key] - # engine_path points to the specific Unreal Engine root # so, we are going up from the executable itself 3 levels. engine_path: Path = Path(executable).parents[3] - if not unreal_lib.check_plugin_existence(engine_path): - self.exec_plugin_install(engine_path) + # Check if new env variable exists, and if it does, if the path + # actually contains the plugin. If not, install it. 
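A condensed sketch of the gate described in the comment above: a pre-built plugin path is only usable when it actually carries build output. This simplifies the `check_built_plugin_existance` helper from openpype/hosts/unreal/lib.py further down in this diff (its error handling is omitted here):

    from pathlib import Path

    def is_usable_built_plugin(plugin_path):
        # Build output, not just sources, must be present.
        if not plugin_path:
            return False
        root = Path(plugin_path)
        return (root / "Binaries").is_dir() and (root / "Intermediate").is_dir()
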
+ + built_plugin_path = self.launch_context.env.get( + "AYON_BUILT_UNREAL_PLUGIN", None) + + if unreal_lib.check_built_plugin_existance(built_plugin_path): + self.log.info(( + f"{self.signature} using existing built Ayon plugin from " + f"{built_plugin_path}" + )) + unreal_lib.copy_built_plugin(engine_path, Path(built_plugin_path)) + else: + # Set "AYON_UNREAL_PLUGIN" to current process environment for + # execution of `create_unreal_project` + env_key = "AYON_UNREAL_PLUGIN" + if self.launch_context.env.get(env_key): + self.log.info(( + f"{self.signature} using Ayon plugin from " + f"{self.launch_context.env.get(env_key)}" + )) + if self.launch_context.env.get(env_key): + os.environ[env_key] = self.launch_context.env[env_key] + + if not unreal_lib.check_plugin_existence(engine_path): + self.exec_plugin_install(engine_path) project_file = project_path / unreal_project_filename if not project_file.is_file(): - self.exec_ue_project_gen(engine_version, - unreal_project_name, - engine_path, - project_path) + with tempfile.TemporaryDirectory() as temp_dir: + self.exec_ue_project_gen(engine_version, + unreal_project_name, + engine_path, + Path(temp_dir)) + try: + self.log.info(( + f"Moving from {temp_dir} to " + f"{project_path.as_posix()}" + )) + shutil.copytree( + temp_dir, project_path, dirs_exist_ok=True) + + except shutil.Error as e: + raise ApplicationLaunchFailed(( + f"{self.signature} Cannot copy directory {temp_dir} " + f"to {project_path.as_posix()} - {e}" + )) from e self.launch_context.env["AYON_UNREAL_VERSION"] = engine_version # Append project file to launch arguments diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 97771472cf..6d544f65b2 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -1,17 +1,16 @@ # -*- coding: utf-8 -*- """Unreal launching and project tools.""" +import json import os import platform -import json - +import re +import subprocess +from collections import OrderedDict +from distutils import dir_util +from pathlib import Path from typing import List -from distutils import dir_util -import subprocess -import re -from pathlib import Path -from collections import OrderedDict from openpype.settings import get_project_settings @@ -179,6 +178,7 @@ def _parse_launcher_locations(install_json_path: str) -> dict: def create_unreal_project(project_name: str, + unreal_project_name: str, ue_version: str, pr_dir: Path, engine_path: Path, @@ -193,7 +193,8 @@ def create_unreal_project(project_name: str, folder and enable this plugin. Args: - project_name (str): Name of the project. + project_name (str): Name of the project in AYON. + unreal_project_name (str): Name of the project in Unreal. ue_version (str): Unreal engine version (like 4.23). pr_dir (Path): Path to directory where project will be created. engine_path (Path): Path to Unreal Engine installation. 
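The hook above now generates the Unreal project into a temporary directory and only afterwards overlays it onto the real project path; note that `shutil.copytree(..., dirs_exist_ok=True)` requires Python 3.8+. A standalone sketch of the pattern, with `generate_into` standing in for the UEProjectGenerationWorker call and `project_path` assumed in scope:

    import shutil
    import tempfile

    def generate_into(target_dir):
        """Placeholder for the project generation worker."""

    with tempfile.TemporaryDirectory() as temp_dir:
        generate_into(temp_dir)
        try:
            # Overlay onto the (possibly pre-existing) project directory.
            shutil.copytree(temp_dir, project_path, dirs_exist_ok=True)
        except shutil.Error as exc:
            raise RuntimeError(
                f"Cannot copy {temp_dir} to {project_path}"
            ) from exc
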
@@ -211,8 +212,12 @@ def create_unreal_project(project_name: str, Returns: None + Deprecated: + since 3.16.0 + """ env = env or os.environ + preset = get_project_settings(project_name)["unreal"]["project_setup"] ue_id = ".".join(ue_version.split(".")[:2]) # get unreal engine identifier @@ -230,7 +235,7 @@ def create_unreal_project(project_name: str, ue_editor_exe: Path = get_editor_exe_path(engine_path, ue_version) cmdlet_project: Path = get_path_to_cmdlet_project(ue_version) - project_file = pr_dir / f"{project_name}.uproject" + project_file = pr_dir / f"{unreal_project_name}.uproject" print("--- Generating a new project ...") commandlet_cmd = [f'{ue_editor_exe.as_posix()}', @@ -251,8 +256,9 @@ def create_unreal_project(project_name: str, return_code = gen_process.wait() if return_code and return_code != 0: - raise RuntimeError(f'Failed to generate \'{project_name}\' project! ' - f'Exited with return code {return_code}') + raise RuntimeError( + (f"Failed to generate '{unreal_project_name}' project! " + f"Exited with return code {return_code}")) print("--- Project has been generated successfully.") @@ -282,7 +288,7 @@ def create_unreal_project(project_name: str, subprocess.run(command1) command2 = [u_build_tool.as_posix(), - f"-ModuleWithSuffix={project_name},3555", arch, + f"-ModuleWithSuffix={unreal_project_name},3555", arch, "Development", "-TargetType=Editor", f'-Project={project_file}', f'{project_file}', @@ -363,11 +369,11 @@ def get_compatible_integration( def get_path_to_cmdlet_project(ue_version: str) -> Path: cmd_project = Path( - os.path.abspath(os.getenv("OPENPYPE_ROOT"))) + os.path.dirname(os.path.abspath(__file__))) # For now, only tested on Windows (For Linux and Mac # it has to be implemented) - cmd_project /= f"openpype/hosts/unreal/integration/UE_{ue_version}" + cmd_project /= f"integration/UE_{ue_version}" # if the integration doesn't exist for current engine version # try to find the closest to it. 
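The cmdlet project lookup above switches from the OPENPYPE_ROOT environment variable to a path relative to the module file itself, so the bundled integration is found even when the addon directory is relocated. An equivalent pathlib-only sketch (`ue_version` assumed in scope):

    from pathlib import Path

    cmd_project = Path(__file__).resolve().parent / f"integration/UE_{ue_version}"
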
@@ -423,6 +429,36 @@ def get_build_id(engine_path: Path, ue_version: str) -> str: return "{" + loaded_modules.get("BuildId") + "}" +def check_built_plugin_existance(plugin_path) -> bool: + if not plugin_path: + return False + + integration_plugin_path = Path(plugin_path) + + if not integration_plugin_path.is_dir(): + raise RuntimeError("Path to the integration plugin is null!") + + if not (integration_plugin_path / "Binaries").is_dir() \ + or not (integration_plugin_path / "Intermediate").is_dir(): + return False + + return True + + +def copy_built_plugin(engine_path: Path, plugin_path: Path) -> None: + ayon_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" + + if not ayon_plugin_path.is_dir(): + ayon_plugin_path.mkdir(parents=True, exist_ok=True) + + engine_plugin_config_path: Path = ayon_plugin_path / "Config" + engine_plugin_config_path.mkdir(exist_ok=True) + + dir_util._path_created = {} + + dir_util.copy_tree(plugin_path.as_posix(), ayon_plugin_path.as_posix()) + + def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: env = env or os.environ integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py index 52eea4122a..1d60b63f9a 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -76,18 +76,24 @@ class AnimationAlembicLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) asset_tools = unreal.AssetToolsHelpers.get_asset_tools() asset_tools.import_asset_tasks([task]) diff --git a/openpype/hosts/unreal/plugins/load/load_animation.py b/openpype/hosts/unreal/plugins/load/load_animation.py index a5ecb677e8..7ed85ee411 100644 --- a/openpype/hosts/unreal/plugins/load/load_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_animation.py @@ -26,7 +26,7 @@ class AnimationFBXLoader(plugin.Loader): icon = "cube" color = "orange" - def _process(self, asset_dir, asset_name, instance_name): + def _process(self, path, asset_dir, asset_name, instance_name): automated = False actor = None @@ -55,7 +55,7 @@ class AnimationFBXLoader(plugin.Loader): asset_doc = get_current_project_asset(fields=["data.fps"]) - task.set_editor_property('filename', self.fname) + task.set_editor_property('filename', path) task.set_editor_property('destination_path', asset_dir) task.set_editor_property('destination_name', asset_name) task.set_editor_property('replace_existing', False) @@ -177,14 +177,15 @@ class AnimationFBXLoader(plugin.Loader): 
EditorAssetLibrary.make_directory(asset_dir) - libpath = self.fname.replace("fbx", "json") + path = self.filepath_from_context(context) + libpath = path.replace(".fbx", ".json") with open(libpath, "r") as fp: data = json.load(fp) instance_name = data.get("instance_name") - animation = self._process(asset_dir, asset_name, instance_name) + animation = self._process(path, asset_dir, asset_name, instance_name) asset_content = EditorAssetLibrary.list_assets( hierarchy_dir, recursive=True, include_folder=False) diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py index 59ea14697d..d663ce20ea 100644 --- a/openpype/hosts/unreal/plugins/load/load_camera.py +++ b/openpype/hosts/unreal/plugins/load/load_camera.py @@ -12,7 +12,7 @@ from unreal import ( from openpype.client import get_asset_by_name from openpype.pipeline import ( AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api.pipeline import ( @@ -184,7 +184,7 @@ class CameraLoader(plugin.Loader): frame_ranges[i + 1][0], frame_ranges[i + 1][1], [level]) - project_name = legacy_io.active_project() + project_name = get_current_project_name() data = get_asset_by_name(project_name, asset)["data"] cam_seq.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) @@ -200,12 +200,13 @@ class CameraLoader(plugin.Loader): settings.set_editor_property('reduce_keys', False) if cam_seq: + path = self.filepath_from_context(context) self._import_camera( EditorLevelLibrary.get_editor_world(), cam_seq, cam_seq.get_bindings(), settings, - self.fname + path ) # Set range of all sections @@ -389,7 +390,7 @@ class CameraLoader(plugin.Loader): # Set range of all sections # Changing the range of the section is not enough. We need to change # the frame of all the keys in the section. 
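        # A sketch of the retiming rule stated above (pure arithmetic, no
        # Unreal API assumed): when a section's range moves, every key in
        # the section must be shifted by the same frame delta, e.g.:
        #
        #     def retimed_frame(key_frame, old_start, new_start):
        #         return key_frame + (new_start - old_start)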
- project_name = legacy_io.active_project() + project_name = get_current_project_name() asset = container.get('asset') data = get_asset_by_name(project_name, asset)["data"] diff --git a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py index 3a292fdbd1..13ba236a7d 100644 --- a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py @@ -111,8 +111,9 @@ class PointCacheAlembicLoader(plugin.Loader): if frame_start == frame_end: frame_end += 1 + path = self.filepath_from_context(context) task = self.get_task( - self.fname, asset_dir, asset_name, False, frame_start, frame_end) + path, asset_dir, asset_name, False, frame_start, frame_end) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index 86b2e1456c..3b82da5068 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -23,7 +23,7 @@ from openpype.pipeline import ( load_container, get_representation_path, AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.pipeline.context_tools import get_current_project_asset from openpype.settings import get_current_project_settings @@ -302,7 +302,7 @@ class LayoutLoader(plugin.Loader): if not version_ids: return output - project_name = legacy_io.active_project() + project_name = get_current_project_name() repre_docs = get_representations( project_name, representation_names=["fbx", "abc"], @@ -603,7 +603,7 @@ class LayoutLoader(plugin.Loader): frame_ranges[i + 1][0], frame_ranges[i + 1][1], [level]) - project_name = legacy_io.active_project() + project_name = get_current_project_name() data = get_asset_by_name(project_name, asset)["data"] shot.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) @@ -618,7 +618,8 @@ class LayoutLoader(plugin.Loader): EditorLevelLibrary.load_level(level) - loaded_assets = self._process(self.fname, asset_dir, shot) + path = self.filepath_from_context(context) + loaded_assets = self._process(path, asset_dir, shot) for s in sequences: EditorAssetLibrary.save_asset(s.get_path_name()) diff --git a/openpype/hosts/unreal/plugins/load/load_layout_existing.py b/openpype/hosts/unreal/plugins/load/load_layout_existing.py index 929a9a1399..c53e92930a 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout_existing.py +++ b/openpype/hosts/unreal/plugins/load/load_layout_existing.py @@ -11,7 +11,7 @@ from openpype.pipeline import ( load_container, get_representation_path, AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as upipeline @@ -380,7 +380,8 @@ class ExistingLayoutLoader(plugin.Loader): raise AssertionError("Current level not saved") project_name = context["project"]["name"] - containers = self._process(self.fname, project_name) + path = self.filepath_from_context(context) + containers = self._process(path, project_name) curr_level_path = Path( curr_level.get_outer().get_path_name()).parent.as_posix() @@ -410,7 +411,7 @@ class ExistingLayoutLoader(plugin.Loader): asset_dir = container.get('namespace') source_path = get_representation_path(representation) - project_name = legacy_io.active_project() + project_name = get_current_project_name() containers = self._process(source_path, project_name) 
data = { diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index 7591d5582f..9285602b64 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -78,18 +78,24 @@ class SkeletalMeshAlembicLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py index e9676cde3a..9aa0e4d1a8 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py @@ -23,7 +23,7 @@ class SkeletalMeshFBXLoader(plugin.Loader): def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. - This is two step process. First, import FBX to temporary path and + This is a two step process. First, import FBX to temporary path and then call `containerise()` on it - this moves all content to new directory and then it will create AssetContainer there and imprint it with metadata. This will mark this path as container. 
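
The hero-version naming branch these loader diffs keep repeating (in the Alembic loaders above, and again in the next hunk) reduces to one rule: hero versions carry no 'name' and get a `_hero` suffix, regular versions a zero-padded `_v###` suffix. It could be factored into a shared helper; a sketch, where `name_for_version` is a hypothetical name:

    def name_for_version(name: str, version: dict) -> str:
        # Hero versions have no 'name' and are marked by their 'type'.
        if not version.get("name") and version.get("type") == "hero_version":
            return f"{name}_hero"
        return f"{name}_v{version['name']:03d}"
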
@@ -52,11 +52,16 @@ class SkeletalMeshFBXLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix @@ -65,7 +70,8 @@ class SkeletalMeshFBXLoader(plugin.Loader): task = unreal.AssetImportTask() - task.set_editor_property('filename', self.fname) + path = self.filepath_from_context(context) + task.set_editor_property('filename', path) task.set_editor_property('destination_path', asset_dir) task.set_editor_property('destination_name', asset_name) task.set_editor_property('replace_existing', False) diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index befc7b0ac9..bb13692f9e 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -79,11 +79,13 @@ class StaticMeshAlembicLoader(plugin.Loader): root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" else: - asset_name = "{}".format(name) - version = context.get('version').get('name') + name_version = f"{name}_v{version.get('name'):03d}" default_conversion = False if options.get("default_conversion"): @@ -91,15 +93,16 @@ class StaticMeshAlembicLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) + path = self.filepath_from_context(context) task = self.get_task( - self.fname, asset_dir, asset_name, False, default_conversion) + path, asset_dir, asset_name, False, default_conversion) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py index e416256486..ffc68d8375 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py @@ -78,17 +78,24 @@ class StaticMeshFBXLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( 
- f"{root}/{asset}/{name}", suffix="" + f"{root}/{asset}/{name_version}", suffix="" ) container_name += suffix unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_uasset.py b/openpype/hosts/unreal/plugins/load/load_uasset.py index 30f63abe39..88aaac41e8 100644 --- a/openpype/hosts/unreal/plugins/load/load_uasset.py +++ b/openpype/hosts/unreal/plugins/load/load_uasset.py @@ -64,8 +64,9 @@ class UAssetLoader(plugin.Loader): destination_path = asset_dir.replace( "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1) + path = self.filepath_from_context(context) shutil.copy( - self.fname, + path, f"{destination_path}/{name}_{unique_number:02}.{self.extension}") # Create Asset Container diff --git a/openpype/hosts/unreal/plugins/load/load_yeticache.py b/openpype/hosts/unreal/plugins/load/load_yeticache.py new file mode 100644 index 0000000000..22f5029bac --- /dev/null +++ b/openpype/hosts/unreal/plugins/load/load_yeticache.py @@ -0,0 +1,179 @@ +# -*- coding: utf-8 -*- +"""Loader for Yeti Cache.""" +import os +import json + +from openpype.pipeline import ( + get_representation_path, + AYON_CONTAINER_ID +) +from openpype.hosts.unreal.api import plugin +from openpype.hosts.unreal.api import pipeline as unreal_pipeline +import unreal # noqa + + +class YetiLoader(plugin.Loader): + """Load Yeti Cache""" + + families = ["yeticacheUE"] + label = "Import Yeti" + representations = ["abc"] + icon = "pagelines" + color = "orange" + + @staticmethod + def get_task(filename, asset_dir, asset_name, replace): + task = unreal.AssetImportTask() + options = unreal.AbcImportSettings() + + task.set_editor_property('filename', filename) + task.set_editor_property('destination_path', asset_dir) + task.set_editor_property('destination_name', asset_name) + task.set_editor_property('replace_existing', replace) + task.set_editor_property('automated', True) + task.set_editor_property('save', True) + + task.options = options + + return task + + @staticmethod + def is_groom_module_active(): + """ + Check if Groom plugin is active. + + This is a workaround, because the Unreal python API don't have + any method to check if plugin is active. + """ + prj_file = unreal.Paths.get_project_file_path() + + with open(prj_file, "r") as fp: + data = json.load(fp) + + plugins = data.get("Plugins") + + if not plugins: + return False + + plugin_names = [p.get("Name") for p in plugins] + + return "HairStrands" in plugin_names + + def load(self, context, name, namespace, options): + """Load and containerise representation into Content Browser. + + This is two step process. First, import FBX to temporary path and + then call `containerise()` on it - this moves all content to new + directory and then it will create AssetContainer there and imprint it + with metadata. This will mark this path as container. + + Args: + context (dict): application context + name (str): subset name + namespace (str): in Unreal this is basically path to container. + This is not passed here, so namespace is set + by `containerise()` because only then we know + real path. + data (dict): Those would be data to be imprinted. This is not used + now, data are imprinted by `containerise()`. 
+ + Returns: + list(str): list of container content + + """ + # Check if Groom plugin is active + if not self.is_groom_module_active(): + raise RuntimeError("Groom plugin is not activated.") + + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" + asset = context.get('asset').get('name') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{root}/{asset}/{name}", suffix="") + + unique_number = 1 + while unreal.EditorAssetLibrary.does_directory_exist( + f"{asset_dir}_{unique_number:02}" + ): + unique_number += 1 + + asset_dir = f"{asset_dir}_{unique_number:02}" + container_name = f"{container_name}_{unique_number:02}{suffix}" + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 + + # Create Asset Container + unreal_pipeline.create_container( + container=container_name, path=asset_dir) + + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": context["representation"]["_id"], + "parent": context["representation"]["parent"], + "family": context["representation"]["context"]["family"] + } + unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) + + asset_content = unreal.EditorAssetLibrary.list_assets( + asset_dir, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + return asset_content + + def update(self, container, representation): + name = container["asset_name"] + source_path = get_representation_path(representation) + destination_path = container["namespace"] + + task = self.get_task(source_path, destination_path, name, True) + + # do import fbx and replace existing data + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + container_path = f'{container["namespace"]}/{container["objectName"]}' + # update metadata + unreal_pipeline.imprint( + container_path, + { + "representation": str(representation["_id"]), + "parent": str(representation["parent"]) + }) + + asset_content = unreal.EditorAssetLibrary.list_assets( + destination_path, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + def remove(self, container): + path = container["namespace"] + parent_path = os.path.dirname(path) + + unreal.EditorAssetLibrary.delete_directory(path) + + asset_content = unreal.EditorAssetLibrary.list_assets( + parent_path, recursive=False + ) + + if len(asset_content) == 0: + unreal.EditorAssetLibrary.delete_directory(parent_path) diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py index 57e7957575..d30d04551d 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_layout.py +++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py @@ -8,7 +8,7 @@ from unreal import EditorLevelLibrary as ell from unreal import EditorAssetLibrary as eal from openpype.client import get_representation_by_name -from openpype.pipeline import legacy_io, publish +from openpype.pipeline 
import publish class ExtractLayout(publish.Extractor): @@ -32,7 +32,7 @@ class ExtractLayout(publish.Extractor): "Wrong level loaded" json_data = [] - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] for member in instance[:]: actor = ell.get_actor_reference(member) diff --git a/openpype/hosts/unreal/plugins/publish/extract_uasset.py b/openpype/hosts/unreal/plugins/publish/extract_uasset.py index 48b62faa97..0dd7ff4a0d 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_uasset.py +++ b/openpype/hosts/unreal/plugins/publish/extract_uasset.py @@ -19,9 +19,8 @@ class ExtractUAsset(publish.Extractor): "umap" if "umap" in instance.data.get("families") else "uasset") ar = unreal.AssetRegistryHelpers.get_asset_registry() - self.log.info("Performing extraction..") + self.log.debug("Performing extraction..") staging_dir = self.staging_dir(instance) - filename = f"{instance.name}.{extension}" members = instance.data.get("members", []) diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py index 76bb25fac3..96485d5a2d 100644 --- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py +++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py @@ -1,4 +1,6 @@ import clique +import os +import re import pyblish.api @@ -21,7 +23,19 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): representations = instance.data.get("representations") for repr in representations: data = instance.data.get("assetEntity", {}).get("data", {}) - patterns = [clique.PATTERNS["frames"]] + repr_files = repr["files"] + if isinstance(repr_files, str): + continue + + ext = repr.get("ext") + if not ext: + _, ext = os.path.splitext(repr_files[0]) + elif not ext.startswith("."): + ext = ".{}".format(ext) + pattern = r"\D?(?P(?P0*)\d+){}$".format( + re.escape(ext)) + patterns = [pattern] + collections, remainder = clique.assemble( repr["files"], minimum_items=1, patterns=patterns) @@ -30,6 +44,10 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): collection = collections[0] frames = list(collection.indexes) + if instance.data.get("slate"): + # Slate is not part of the frame range + frames = frames[1:] + current_range = (frames[0], frames[-1]) required_range = (data["clipIn"], data["clipOut"]) diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index 2b7e1375e6..386ad877d7 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -3,16 +3,17 @@ import os import platform import re import subprocess +import tempfile from distutils import dir_util +from distutils.dir_util import copy_tree from pathlib import Path from typing import List, Union -import tempfile -from distutils.dir_util import copy_tree - -import openpype.hosts.unreal.lib as ue_lib from qtpy import QtCore +import openpype.hosts.unreal.lib as ue_lib +from openpype.settings import get_project_settings + def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)): match = re.search(r"\[[1-9]+/[0-9]+]", line) @@ -39,42 +40,71 @@ def retrieve_exit_code(line: str): return None -class UEProjectGenerationWorker(QtCore.QObject): +class UEWorker(QtCore.QObject): finished = QtCore.Signal(str) - failed = QtCore.Signal(str) + failed = QtCore.Signal(str, int) progress = QtCore.Signal(int) log = QtCore.Signal(str) + + engine_path: Path = None + env = None + + def execute(self): + raise NotImplementedError("Please 
implement this method!") + + def run(self): + try: + self.execute() + except Exception as e: + import traceback + self.log.emit(str(e)) + self.log.emit(traceback.format_exc()) + self.failed.emit(str(e), 1) + raise e + + +class UEProjectGenerationWorker(UEWorker): stage_begin = QtCore.Signal(str) ue_version: str = None project_name: str = None - env = None - engine_path: Path = None project_dir: Path = None dev_mode = False def setup(self, ue_version: str, - project_name, + project_name: str, + unreal_project_name, engine_path: Path, project_dir: Path, dev_mode: bool = False, env: dict = None): + """Set the worker with necessary parameters. + + Args: + ue_version (str): Unreal Engine version. + project_name (str): Name of the project in AYON. + unreal_project_name (str): Name of the project in Unreal. + engine_path (Path): Path to the Unreal Engine. + project_dir (Path): Path to the project directory. + dev_mode (bool, optional): Whether to run the project in dev mode. + Defaults to False. + env (dict, optional): Environment variables. Defaults to None. + + """ self.ue_version = ue_version self.project_dir = project_dir self.env = env or os.environ - preset = ue_lib.get_project_settings( - project_name - )["unreal"]["project_setup"] + preset = get_project_settings(project_name)["unreal"]["project_setup"] if dev_mode or preset["dev_mode"]: self.dev_mode = True - self.project_name = project_name + self.project_name = unreal_project_name self.engine_path = engine_path - def run(self): + def execute(self): # engine_path should be the location of UE_X.X folder ue_editor_exe = ue_lib.get_editor_exe_path(self.engine_path, @@ -285,15 +315,8 @@ class UEProjectGenerationWorker(QtCore.QObject): self.finished.emit("Project successfully built!") -class UEPluginInstallWorker(QtCore.QObject): - finished = QtCore.Signal(str) +class UEPluginInstallWorker(UEWorker): installing = QtCore.Signal(str) - failed = QtCore.Signal(str, int) - progress = QtCore.Signal(int) - log = QtCore.Signal(str) - - engine_path: Path = None - env = None def setup(self, engine_path: Path, env: dict = None, ): self.engine_path = engine_path @@ -361,7 +384,7 @@ class UEPluginInstallWorker(QtCore.QObject): dir_util.remove_tree(temp_dir.as_posix()) - def run(self): + def execute(self): src_plugin_dir = Path(self.env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(src_plugin_dir): diff --git a/openpype/hosts/unreal/ui/__init__.py b/openpype/hosts/unreal/ui/__init__.py new file mode 100644 index 0000000000..606b21ef19 --- /dev/null +++ b/openpype/hosts/unreal/ui/__init__.py @@ -0,0 +1,5 @@ +from .splash_screen import SplashScreen + +__all__ = ( + "SplashScreen", +) diff --git a/openpype/widgets/splash_screen.py b/openpype/hosts/unreal/ui/splash_screen.py similarity index 98% rename from openpype/widgets/splash_screen.py rename to openpype/hosts/unreal/ui/splash_screen.py index 7c1ff72ecd..7ac77821d9 100644 --- a/openpype/widgets/splash_screen.py +++ b/openpype/hosts/unreal/ui/splash_screen.py @@ -1,6 +1,5 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style, resources -from igniter.nice_progress_bar import NiceProgressBar class SplashScreen(QtWidgets.QDialog): @@ -143,7 +142,7 @@ class SplashScreen(QtWidgets.QDialog): button_layout.addWidget(self.close_btn) # Progress Bar - self.progress_bar = NiceProgressBar() + self.progress_bar = QtWidgets.QProgressBar() self.progress_bar.setValue(0) self.progress_bar.setAlignment(QtCore.Qt.AlignTop) diff --git a/openpype/hosts/webpublisher/README.md 
b/openpype/hosts/webpublisher/README.md index 0826e44490..07a957fa7f 100644 --- a/openpype/hosts/webpublisher/README.md +++ b/openpype/hosts/webpublisher/README.md @@ -3,4 +3,4 @@ Webpublisher Plugins meant for processing of Webpublisher. -Gets triggered by calling openpype.cli.remotepublish with appropriate arguments. \ No newline at end of file +Gets triggered by calling `openpype_console modules webpublisher publish` with appropriate arguments. diff --git a/openpype/hosts/webpublisher/addon.py b/openpype/hosts/webpublisher/addon.py index eb7fced2e6..4438775b03 100644 --- a/openpype/hosts/webpublisher/addon.py +++ b/openpype/hosts/webpublisher/addon.py @@ -20,11 +20,10 @@ class WebpublisherAddon(OpenPypeModule, IHostAddon): Close Python process at the end. """ - from openpype.pipeline.publish.lib import remote_publish - from .lib import get_webpublish_conn, publish_and_log + from .lib import get_webpublish_conn, publish_and_log, publish_in_test if is_test: - remote_publish(log, close_plugin_name) + publish_in_test(log, close_plugin_name) return dbcon = get_webpublish_conn() diff --git a/openpype/hosts/webpublisher/lib.py b/openpype/hosts/webpublisher/lib.py index b207f85b46..ecd28d2432 100644 --- a/openpype/hosts/webpublisher/lib.py +++ b/openpype/hosts/webpublisher/lib.py @@ -12,7 +12,6 @@ from openpype.client.mongo import OpenPypeMongoConnection from openpype.settings import get_project_settings from openpype.lib import Logger from openpype.lib.profiles_filtering import filter_profiles -from openpype.pipeline.publish.lib import find_close_plugin ERROR_STATUS = "error" IN_PROGRESS_STATUS = "in_progress" @@ -68,6 +67,46 @@ def get_batch_asset_task_info(ctx): return asset, task_name, task_type +def find_close_plugin(close_plugin_name, log): + if close_plugin_name: + plugins = pyblish.api.discover() + for plugin in plugins: + if plugin.__name__ == close_plugin_name: + return plugin + + log.debug("Close plugin not found, app might not close.") + + +def publish_in_test(log, close_plugin_name=None): + """Loops through all plugins, logs to console. Used for tests. + + Args: + log (Logger) + close_plugin_name (Optional[str]): Name of plugin with responsibility + to close application. + """ + + # Error exit as soon as any error occurs. + error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" + + close_plugin = find_close_plugin(close_plugin_name, log) + + for result in pyblish.util.publish_iter(): + for record in result["records"]: + # Why do we log again? pyblish logger is logging to stdout... + log.info("{}: {}".format(result["plugin"].label, record.msg)) + + if not result["error"]: + continue + + # QUESTION We don't break on error? + error_message = error_format.format(**result) + log.error(error_message) + if close_plugin: # close host app explicitly after error + context = pyblish.api.Context() + close_plugin().process(context) + + def get_webpublish_conn(): """Get connection to OP 'webpublishes' collection.""" mongo_client = OpenPypeMongoConnection.get_mongo_client() @@ -231,7 +270,7 @@ def find_variant_key(application_manager, host): def get_task_data(batch_dir): """Return parsed data from first task manifest.json - Used for `remotepublishfromapp` command where batch contains only + Used for `publishfromapp` command where batch contains only single task with publishable workfile. 
Returns: diff --git a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py index 79ed499a20..1416255083 100644 --- a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py +++ b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py @@ -25,6 +25,7 @@ from openpype.lib import ( ) from openpype.pipeline.create import get_subset_name from openpype_modules.webpublisher.lib import parse_json +from openpype.pipeline.version_start import get_versioning_start class CollectPublishedFiles(pyblish.api.ContextPlugin): @@ -103,7 +104,13 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): project_settings=context.data["project_settings"] ) version = self._get_next_version( - project_name, asset_doc, subset_name + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context ) next_versions.append(version) @@ -141,8 +148,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): try: no_of_frames = self._get_number_of_frames(file_url) if no_of_frames: - frame_end = int(frame_start) + \ - math.ceil(no_of_frames) + frame_end = ( + int(frame_start) + math.ceil(no_of_frames) + ) frame_end = math.ceil(frame_end) - 1 instance.data["frameEnd"] = frame_end self.log.debug("frameEnd:: {}".format( @@ -270,7 +278,16 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): config["families"], config["tags"]) - def _get_next_version(self, project_name, asset_doc, subset_name): + def _get_next_version( + self, + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context + ): """Returns version number or 1 for 'asset' and 'subset'""" version_doc = get_last_version_by_subset_name( @@ -279,9 +296,19 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): asset_doc["_id"], fields=["name"] ) - version = 1 if version_doc: - version += int(version_doc["name"]) + version = int(version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + "webpublisher", + task_name=task_name, + task_type=task_type, + family=family, + subset=subset_name, + project_settings=context.data["project_settings"] + ) + return version def _get_number_of_frames(self, file_url): diff --git a/openpype/hosts/webpublisher/publish_functions.py b/openpype/hosts/webpublisher/publish_functions.py index 83f53ced68..f5dc88f54d 100644 --- a/openpype/hosts/webpublisher/publish_functions.py +++ b/openpype/hosts/webpublisher/publish_functions.py @@ -6,7 +6,7 @@ import pyblish.util from openpype.lib import Logger from openpype.lib.applications import ( ApplicationManager, - get_app_environments_for_context, + LaunchTypes, ) from openpype.pipeline import install_host from openpype.hosts.webpublisher.api import WebpublisherHost @@ -34,7 +34,7 @@ def cli_publish(project_name, batch_path, user_email, targets): Args: project_name (str): project to publish (only single context is - expected per call of remotepublish + expected per call of 'publish') batch_path (str): Path batch folder. Contains subfolders with resources (workfile, another subfolder 'renders' etc.) 
            user_email (string): email address for webpublisher - used to
@@ -49,8 +49,8 @@ def cli_publish(project_name, batch_path, user_email, targets):
     if not batch_path:
         raise RuntimeError("No publish paths specified")

-    log = Logger.get_logger("remotepublish")
-    log.info("remotepublish command")
+    log = Logger.get_logger("Webpublish")
+    log.info("Webpublish command")

     # Register target and host
     webpublisher_host = WebpublisherHost()
@@ -107,7 +107,7 @@ def cli_publish_from_app(

     Args:
         project_name (str): project to publish (only single context is
-            expected per call of remotepublish
+            expected per call of 'publish')
         batch_path (str): Path batch folder. Contains subfolders with
             resources (workfile, another subfolder 'renders' etc.)
         host_name (str): 'photoshop'
@@ -117,9 +117,9 @@ def cli_publish_from_app(
             (to choose validator for example)
     """

-    log = Logger.get_logger("RemotePublishFromApp")
+    log = Logger.get_logger("PublishFromApp")

-    log.info("remotepublishphotoshop command")
+    log.info("Webpublish photoshop command")

     task_data = get_task_data(batch_path)

@@ -156,22 +156,31 @@ def cli_publish_from_app(
     found_variant_key = find_variant_key(application_manager, host_name)
     app_name = "{}/{}".format(host_name, found_variant_key)

+    data = {
+        "last_workfile_path": workfile_path,
+        "start_last_workfile": True,
+        "project_name": project_name,
+        "asset_name": asset_name,
+        "task_name": task_name,
+        "launch_type": LaunchTypes.automated,
+    }
+    launch_context = application_manager.create_launch_context(
+        app_name, **data)
+    launch_context.run_prelaunch_hooks()
+
     # must have for proper launch of app
-    env = get_app_environments_for_context(
-        project_name,
-        asset_name,
-        task_name,
-        app_name
-    )
+    env = launch_context.env
     print("env:: {}".format(env))

+    env["OPENPYPE_PUBLISH_DATA"] = batch_path
+    # must pass identifier to update log lines for a batch
+    env["BATCH_LOG_ID"] = str(_id)
+    env["HEADLESS_PUBLISH"] = 'true'  # to use in app lib
+    env["USER_EMAIL"] = user_email
+
     os.environ.update(env)

-    os.environ["OPENPYPE_PUBLISH_DATA"] = batch_path
-    # must pass identifier to update log lines for a batch
-    os.environ["BATCH_LOG_ID"] = str(_id)
-    os.environ["HEADLESS_PUBLISH"] = 'true'  # to use in app lib
-    os.environ["USER_EMAIL"] = user_email
-
+    # Why is this here? Registered host in this process does not affect
+    # registered host in launched process.
pyblish.api.register_host(host_name) if targets: if isinstance(targets, str): @@ -184,15 +193,7 @@ def cli_publish_from_app( os.environ["PYBLISH_TARGETS"] = os.pathsep.join( set(current_targets)) - data = { - "last_workfile_path": workfile_path, - "start_last_workfile": True, - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name - } - - launched_app = application_manager.launch(app_name, **data) + launched_app = application_manager.launch_with_context(launch_context) timeout = get_timeout(project_name, host_name, task_type) diff --git a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py index 9fe4b4d3c1..20d585e906 100644 --- a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py +++ b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py @@ -216,7 +216,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): "extensions": [".tvpp"], "command": "publish", "arguments": { - "targets": ["tvpaint_worker"] + "targets": ["tvpaint_worker", "webpublish"] }, "add_to_queue": False }, @@ -230,7 +230,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): # Make sure targets are set to None for cases that default # would change # - targets argument is not used in 'publishfromapp' - "targets": ["remotepublish"] + "targets": ["automated", "webpublish"] }, # does publish need to be handled by a queue, eg. only # single process running concurrently? @@ -247,7 +247,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): "project": content["project_name"], "user": content["user"], - "targets": ["filespublish"] + "targets": ["filespublish", "webpublish"] } add_to_queue = False @@ -280,13 +280,14 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): for key, value in add_args.items(): # Skip key values where value is None - if value is not None: - args.append("--{}".format(key)) - # Extend list into arguments (targets can be a list) - if isinstance(value, (tuple, list)): - args.extend(value) - else: - args.append(value) + if value is None: + continue + arg_key = "--{}".format(key) + if not isinstance(value, (tuple, list)): + value = [value] + + for item in value: + args += [arg_key, item] log.info("args:: {}".format(args)) if add_to_queue: diff --git a/openpype/hosts/webpublisher/webserver_service/webserver.py b/openpype/hosts/webpublisher/webserver_service/webserver.py index 093b53d9d3..d7c2ea01b9 100644 --- a/openpype/hosts/webpublisher/webserver_service/webserver.py +++ b/openpype/hosts/webpublisher/webserver_service/webserver.py @@ -45,7 +45,7 @@ def run_webserver(executable, upload_dir, host=None, port=None): server_manager = webserver_module.create_new_server_manager(port, host) webserver_url = server_manager.url - # queue for remotepublishfromapp tasks + # queue for publishfromapp tasks studio_task_queue = collections.deque() resource = RestApiResource(server_manager, diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 06de486f2e..f1eb564e5e 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -5,11 +5,11 @@ import sys import os import site +from openpype import PACKAGE_DIR # Add Python version specific vendor folder python_version_dir = os.path.join( - os.getenv("OPENPYPE_REPOS_ROOT", ""), - "openpype", "vendor", "python", "python_{}".format(sys.version[0]) + PACKAGE_DIR, "vendor", "python", "python_{}".format(sys.version[0]) ) # Prepend path in sys paths sys.path.insert(0, python_version_dir) @@ -22,11 +22,14 @@ from 
.events import ( ) from .vendor_bin_utils import ( + ToolNotFoundError, find_executable, get_vendor_bin_path, get_oiio_tools_path, + get_oiio_tool_args, get_ffmpeg_tool_path, - is_oiio_supported + get_ffmpeg_tool_args, + is_oiio_supported, ) from .attribute_definitions import ( @@ -52,12 +55,13 @@ from .env_tools import ( from .terminal import Terminal from .execute import ( + get_ayon_launcher_args, get_openpype_execute_args, - get_pype_execute_args, get_linux_launcher_args, execute, run_subprocess, run_detached_process, + run_ayon_launcher_process, run_openpype_process, clean_envs_for_openpype_process, path_to_subprocess_arg, @@ -65,7 +69,6 @@ from .execute import ( ) from .log import ( Logger, - PypeLogger, ) from .path_templates import ( @@ -77,12 +80,6 @@ from .path_templates import ( FormatObject, ) -from .mongo import ( - get_default_components, - validate_mongo_connection, - OpenPypeMongoConnection -) - from .dateutils import ( get_datetime_data, get_timestamp, @@ -115,25 +112,6 @@ from .transcoding import ( convert_ffprobe_fps_value, convert_ffprobe_fps_to_float, ) -from .avalon_context import ( - CURRENT_DOC_SCHEMAS, - create_project, - - get_workfile_template_key, - get_workfile_template_key_from_context, - get_last_workfile_with_version, - get_last_workfile, - - BuildWorkfile, - - get_creator_by_name, - - get_custom_workfile_template, - - get_custom_workfile_template_by_context, - get_custom_workfile_template_by_string_context, - get_custom_workfile_template -) from .local_settings import ( IniSettingRegistry, @@ -163,9 +141,6 @@ from .applications import ( ) from .plugin_tools import ( - TaskNotSetError, - get_subset_name, - get_subset_name_with_asset_doc, prepare_template_data, source_hash, ) @@ -177,9 +152,6 @@ from .path_tools import ( version_up, get_version_from_path, get_last_version_from_path, - create_project_folders, - create_workdir_extra_folders, - get_project_basic_paths, ) from .openpype_version import ( @@ -205,13 +177,13 @@ __all__ = [ "emit_event", "register_event_callback", - "find_executable", + "get_ayon_launcher_args", "get_openpype_execute_args", - "get_pype_execute_args", "get_linux_launcher_args", "execute", "run_subprocess", "run_detached_process", + "run_ayon_launcher_process", "run_openpype_process", "clean_envs_for_openpype_process", "path_to_subprocess_arg", @@ -220,9 +192,13 @@ __all__ = [ "env_value_to_bool", "get_paths_from_environ", + "ToolNotFoundError", + "find_executable", "get_vendor_bin_path", "get_oiio_tools_path", + "get_oiio_tool_args", "get_ffmpeg_tool_path", + "get_ffmpeg_tool_args", "is_oiio_supported", "AbstractAttrDef", @@ -257,22 +233,6 @@ __all__ = [ "convert_ffprobe_fps_value", "convert_ffprobe_fps_to_float", - "CURRENT_DOC_SCHEMAS", - "create_project", - - "get_workfile_template_key", - "get_workfile_template_key_from_context", - "get_last_workfile_with_version", - "get_last_workfile", - - "BuildWorkfile", - - "get_creator_by_name", - - "get_custom_workfile_template_by_context", - "get_custom_workfile_template_by_string_context", - "get_custom_workfile_template", - "IniSettingRegistry", "JSONSettingRegistry", "OpenPypeSecureRegistry", @@ -298,9 +258,7 @@ __all__ = [ "filter_profiles", - "TaskNotSetError", - "get_subset_name", - "get_subset_name_with_asset_doc", + "prepare_template_data", "source_hash", "format_file_size", @@ -323,15 +281,6 @@ __all__ = [ "get_formatted_current_time", "Logger", - "PypeLogger", - - "get_default_components", - "validate_mongo_connection", - "OpenPypeMongoConnection", - - "create_project_folders", - 
"create_workdir_extra_folders", - "get_project_basic_paths", "op_version_control_available", "get_openpype_version", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index 8adae34827..ff5e27c122 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -11,10 +11,7 @@ from abc import ABCMeta, abstractmethod import six -from openpype.client import ( - get_project, - get_asset_by_name, -) +from openpype import AYON_SERVER_ENABLED, PACKAGE_DIR from openpype.settings import ( get_system_settings, get_project_settings, @@ -46,6 +43,25 @@ CUSTOM_LAUNCH_APP_GROUPS = { } +class LaunchTypes: + """Launch types are filters for pre/post-launch hooks. + + Please use these variables in case they'll change values. + """ + + # Local launch - application is launched on local machine + local = "local" + # Farm render job - application is on farm + farm_render = "farm-render" + # Farm publish job - integration post-render job + farm_publish = "farm-publish" + # Remote launch - application is launched on remote machine from which + # can be started publishing + remote = "remote" + # Automated launch - application is launched with automated publishing + automated = "automated" + + def parse_environments(env_data, env_group=None, platform_name=None): """Parse environment values from settings byt group and platform. @@ -482,6 +498,42 @@ class ApplicationManager: break return output + def create_launch_context(self, app_name, **data): + """Prepare launch context for application. + + Args: + app_name (str): Name of application that should be launched. + **data (Any): Any additional data. Data may be used during + + Returns: + ApplicationLaunchContext: Launch context for application. + + Raises: + ApplicationNotFound: Application was not found by entered name. + """ + + app = self.applications.get(app_name) + if not app: + raise ApplicationNotFound(app_name) + + executable = app.find_executable() + + return ApplicationLaunchContext( + app, executable, **data + ) + + def launch_with_context(self, launch_context): + """Launch application using existing launch context. + + Args: + launch_context (ApplicationLaunchContext): Prepared launch + context. + """ + + if not launch_context.executable: + raise ApplictionExecutableNotFound(launch_context.application) + return launch_context.launch() + def launch(self, app_name, **data): """Launch procedure. @@ -502,18 +554,10 @@ class ApplicationManager: failed. Exception should contain explanation message, traceback should not be needed. """ - app = self.applications.get(app_name) - if not app: - raise ApplicationNotFound(app_name) - executable = app.find_executable() - if not executable: - raise ApplictionExecutableNotFound(app) + context = self.create_launch_context(app_name, **data) + return self.launch_with_context(context) - context = ApplicationLaunchContext( - app, executable, **data - ) - return context.launch() class EnvironmentToolGroup: @@ -735,13 +779,17 @@ class LaunchHook: # Order of prelaunch hook, will be executed as last if set to None. order = None # List of host implementations, skipped if empty. - hosts = [] - # List of application groups - app_groups = [] - # List of specific application names - app_names = [] - # List of platform availability, skipped if empty. 
-    platforms = []
+    hosts = set()
+    # Set of application groups
+    app_groups = set()
+    # Set of specific application names
+    app_names = set()
+    # Set of platform availability
+    platforms = set()
+    # Set of launch types for which is available
+    # - if empty then is available for all launch types
+    # - by default has 'local' which is the most common reason for
+    #   launch hooks
+    launch_types = {LaunchTypes.local}

     def __init__(self, launch_context):
         """Constructor of launch hook.
@@ -789,6 +837,10 @@ class LaunchHook:
         if launch_context.app_name not in cls.app_names:
             return False

+        if cls.launch_types:
+            if launch_context.launch_type not in cls.launch_types:
+                return False
+
         return True

     @property
@@ -858,9 +910,9 @@ class PostLaunchHook(LaunchHook):
 class ApplicationLaunchContext:
     """Context of launching application.

-    Main purpose of context is to prepare launch arguments and keyword arguments
-    for new process. Most important part of keyword arguments preparations
-    are environment variables.
+    Main purpose of context is to prepare launch arguments and keyword
+    arguments for new process. Most important part of keyword arguments
+    preparations are environment variables.

     During the whole process is possible to use `data` attribute to store
     object usable in multiple places.
@@ -873,14 +925,30 @@ class ApplicationLaunchContext:
     insert argument between `nuke.exe` and `--NukeX`. To keep them together
     it is better to wrap them in another list: `[["nuke.exe", "--NukeX"]]`.

+    Notes:
+        It is possible to use launch context only to prepare environment
+        variables. In that case `executable` may be None and the
+        'run_prelaunch_hooks' method can be used to run the prelaunch hooks
+        that prepare them.
+
     Args:
         application (Application): Application definition.
         executable (ApplicationExecutable): Object with path to executable.
+        env_group (Optional[str]): Environment variable group. If not set
+            'DEFAULT_ENV_SUBGROUP' is used.
+        launch_type (Optional[str]): Launch type. If not set 'local' is used.
         **data (dict): Any additional data. Data may be used during
             preparation to store objects usable in multiple places.
     """

-    def __init__(self, application, executable, env_group=None, **data):
+    def __init__(
+        self,
+        application,
+        executable,
+        env_group=None,
+        launch_type=None,
+        **data
+    ):
         from openpype.modules import ModulesManager

         # Application object
@@ -895,6 +963,10 @@ class ApplicationLaunchContext:

         self.executable = executable

+        if launch_type is None:
+            launch_type = LaunchTypes.local
+        self.launch_type = launch_type
+
         if env_group is None:
             env_group = DEFAULT_ENV_SUBGROUP

@@ -902,8 +974,11 @@ class ApplicationLaunchContext:

         self.data = dict(data)

+        launch_args = []
+        if executable is not None:
+            launch_args = executable.as_args()
         # subprocess.Popen launch arguments (first argument in constructor)
-        self.launch_args = executable.as_args()
+        self.launch_args = launch_args
         self.launch_args.extend(application.arguments)
         if self.data.get("app_args"):
             self.launch_args.extend(self.data.pop("app_args"))
@@ -945,6 +1020,7 @@ class ApplicationLaunchContext:
         self.postlaunch_hooks = None

         self.process = None
+        self._prelaunch_hooks_executed = False

     @property
     def env(self):
@@ -1214,6 +1290,27 @@ class ApplicationLaunchContext:
         # Return process which is already terminated
         return process

+    def run_prelaunch_hooks(self):
+        """Run prelaunch hooks.
+
+        This method will be executed only once, any future calls will skip
+        the processing.
+ """ + + if self._prelaunch_hooks_executed: + self.log.warning("Prelaunch hooks were already executed.") + return + # Discover launch hooks + self.discover_launch_hooks() + + # Execute prelaunch hooks + for prelaunch_hook in self.prelaunch_hooks: + self.log.debug("Executing prelaunch hook: {}".format( + str(prelaunch_hook.__class__.__name__) + )) + prelaunch_hook.execute() + self._prelaunch_hooks_executed = True + def launch(self): """Collect data for new process and then create it. @@ -1226,15 +1323,8 @@ class ApplicationLaunchContext: self.log.warning("Application was already launched.") return - # Discover launch hooks - self.discover_launch_hooks() - - # Execute prelaunch hooks - for prelaunch_hook in self.prelaunch_hooks: - self.log.debug("Executing prelaunch hook: {}".format( - str(prelaunch_hook.__class__.__name__) - )) - prelaunch_hook.execute() + if not self._prelaunch_hooks_executed: + self.run_prelaunch_hooks() self.log.debug("All prelaunch hook executed. Starting new process.") @@ -1352,6 +1442,7 @@ def get_app_environments_for_context( task_name, app_name, env_group=None, + launch_type=None, env=None, modules_manager=None ): @@ -1362,63 +1453,33 @@ def get_app_environments_for_context( task_name (str): Name of task. app_name (str): Name of application that is launched and can be found by ApplicationManager. - env (dict): Initial environment variables. `os.environ` is used when - not passed. - modules_manager (ModulesManager): Initialized modules manager. + env_group (Optional[str]): Name of environment group. If not passed + default group is used. + launch_type (Optional[str]): Type for which prelaunch hooks are + executed. + env (Optional[dict[str, str]]): Initial environment variables. + `os.environ` is used when not passed. + modules_manager (Optional[ModulesManager]): Initialized modules + manager. Returns: dict: Environments for passed context and application. 
""" - from openpype.modules import ModulesManager - from openpype.pipeline import AvalonMongoDB, Anatomy - from openpype.lib.openpype_version import is_running_staging - - # Avalon database connection - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name - dbcon.install() - - # Project document - project_doc = get_project(project_name) - asset_doc = get_asset_by_name(project_name, asset_name) - - if modules_manager is None: - modules_manager = ModulesManager() - - # Prepare app object which can be obtained only from ApplciationManager + # Prepare app object which can be obtained only from ApplicationManager app_manager = ApplicationManager() - app = app_manager.applications[app_name] - - # Project's anatomy - anatomy = Anatomy(project_name) - - data = EnvironmentPrepData({ - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name, - - "app": app, - - "dbcon": dbcon, - "project_doc": project_doc, - "asset_doc": asset_doc, - - "anatomy": anatomy, - - "env": env - }) - data["env"].update(anatomy.root_environments()) - if is_running_staging(): - data["env"]["OPENPYPE_IS_STAGING"] = "1" - - prepare_app_environments(data, env_group, modules_manager) - prepare_context_environments(data, env_group, modules_manager) - - # Discard avalon connection - dbcon.uninstall() - - return data["env"] + context = app_manager.create_launch_context( + app_name, + project_name=project_name, + asset_name=asset_name, + task_name=task_name, + env_group=env_group, + launch_type=launch_type, + env=env, + modules_manager=modules_manager, + ) + context.run_prelaunch_hooks() + return context.env def _merge_env(env, current_env): @@ -1444,10 +1505,8 @@ def _add_python_version_paths(app, env, logger, modules_manager): return # Add Python 2/3 modules - openpype_root = os.getenv("OPENPYPE_REPOS_ROOT") python_vendor_dir = os.path.join( - openpype_root, - "openpype", + PACKAGE_DIR, "vendor", "python" ) @@ -1649,11 +1708,7 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): project_doc = data["project_doc"] asset_doc = data["asset_doc"] task_name = data["task_name"] - if ( - not project_doc - or not asset_doc - or not task_name - ): + if not project_doc: log.info( "Skipping context environments preparation." " Launch context does not contain required data." 
@@ -1666,18 +1721,16 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): system_settings = get_system_settings() data["project_settings"] = project_settings data["system_settings"] = system_settings - # Apply project specific environments on current env value - apply_project_environments_value( - project_name, data["env"], project_settings, env_group - ) app = data["app"] context_env = { "AVALON_PROJECT": project_doc["name"], - "AVALON_ASSET": asset_doc["name"], - "AVALON_TASK": task_name, "AVALON_APP_NAME": app.full_name } + if asset_doc: + context_env["AVALON_ASSET"] = asset_doc["name"] + if task_name: + context_env["AVALON_TASK"] = task_name log.debug( "Context environments set:\n{}".format( @@ -1685,9 +1738,25 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): ) ) data["env"].update(context_env) + + # Apply project specific environments on current env value + # - apply them once the context environments are set + apply_project_environments_value( + project_name, data["env"], project_settings, env_group + ) + if not app.is_host: return + data["env"]["AVALON_APP"] = app.host_name + + if not asset_doc or not task_name: + # QUESTION replace with log.info and skip workfile discovery? + # - technically it should be possible to launch host without context + raise ApplicationLaunchFailed( + "Host launch require asset and task context." + ) + workdir_data = get_template_data( project_doc, asset_doc, task_name, app.host_name, system_settings ) @@ -1725,7 +1794,6 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): "Couldn't create workdir because: {}".format(str(exc)) ) - data["env"]["AVALON_APP"] = app.host_name data["env"]["AVALON_WORKDIR"] = workdir _prepare_last_workfile(data, workdir, modules_manager) @@ -1959,17 +2027,28 @@ def get_non_python_host_kwargs(kwargs, allow_console=True): allow_console (bool): use False for inner Popen opening app itself or it will open additional console (at least for Harmony) """ + if kwargs is None: kwargs = {} if platform.system().lower() != "windows": return kwargs - executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + if AYON_SERVER_ENABLED: + executable_path = os.environ.get("AYON_EXECUTABLE") + else: + executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + executable_filename = "" if executable_path: executable_filename = os.path.basename(executable_path) - if "openpype_gui" in executable_filename: + + if AYON_SERVER_ENABLED: + is_gui_executable = "ayon_console" not in executable_filename + else: + is_gui_executable = "openpype_gui" in executable_filename + + if is_gui_executable: kwargs.update({ "creationflags": subprocess.CREATE_NO_WINDOW, "stdout": subprocess.DEVNULL, diff --git a/openpype/lib/attribute_definitions.py b/openpype/lib/attribute_definitions.py index 6054d2a92a..a71d6cc72a 100644 --- a/openpype/lib/attribute_definitions.py +++ b/openpype/lib/attribute_definitions.py @@ -424,17 +424,25 @@ class TextDef(AbstractAttrDef): class EnumDef(AbstractAttrDef): - """Enumeration of single item from items. + """Enumeration of items. + + Enumeration of single item from items. Or list of items if multiselection + is enabled. Args: - items: Items definition that can be converted using - 'prepare_enum_items'. - default: Default value. Must be one key(value) from passed items. + items (Union[list[str], list[dict[str, Any]]): Items definition that + can be converted using 'prepare_enum_items'. + default (Optional[Any]): Default value. 
Must be one key(value) from + passed items or list of values for multiselection. + multiselection (Optional[bool]): If True, multiselection is allowed. + Output is list of selected items. """ type = "enum" - def __init__(self, key, items, default=None, **kwargs): + def __init__( + self, key, items, default=None, multiselection=False, **kwargs + ): if not items: raise ValueError(( "Empty 'items' value. {} must have" @@ -443,30 +451,44 @@ class EnumDef(AbstractAttrDef): items = self.prepare_enum_items(items) item_values = [item["value"] for item in items] - if default not in item_values: - for value in item_values: - default = value - break + item_values_set = set(item_values) + if multiselection: + if default is None: + default = [] + default = list(item_values_set.intersection(default)) + + elif default not in item_values: + default = next(iter(item_values), None) super(EnumDef, self).__init__(key, default=default, **kwargs) self.items = items - self._item_values = set(item_values) + self._item_values = item_values_set + self.multiselection = multiselection def __eq__(self, other): if not super(EnumDef, self).__eq__(other): return False - return self.items == other.items + return ( + self.items == other.items + and self.multiselection == other.multiselection + ) def convert_value(self, value): - if value in self._item_values: - return value - return self.default + if not self.multiselection: + if value in self._item_values: + return value + return self.default + + if value is None: + return copy.deepcopy(self.default) + return list(self._item_values.intersection(value)) def serialize(self): data = super(EnumDef, self).serialize() data["items"] = copy.deepcopy(self.items) + data["multiselection"] = self.multiselection return data @staticmethod diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py deleted file mode 100644 index a9ae27cb79..0000000000 --- a/openpype/lib/avalon_context.py +++ /dev/null @@ -1,654 +0,0 @@ -"""Should be used only inside of hosts.""" - -import platform -import logging -import functools -import warnings - -import six - -from openpype.client import ( - get_project, - get_asset_by_name, -) -from openpype.client.operations import ( - CURRENT_ASSET_DOC_SCHEMA, - CURRENT_PROJECT_SCHEMA, - CURRENT_PROJECT_CONFIG_SCHEMA, -) -from .profiles_filtering import filter_profiles -from .path_templates import StringTemplate - -legacy_io = None - -log = logging.getLogger("AvalonContext") - - -# Backwards compatibility - should not be used anymore -# - Will be removed in OP 3.16.* -CURRENT_DOC_SCHEMAS = { - "project": CURRENT_PROJECT_SCHEMA, - "asset": CURRENT_ASSET_DOC_SCHEMA, - "config": CURRENT_PROJECT_CONFIG_SCHEMA -} - - -class AvalonContextDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", AvalonContextDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=AvalonContextDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.client.operations.create_project") -def create_project( - project_name, project_code, library_project=False, dbcon=None -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - dbcon(AvalonMongoDB): Object of connection to MongoDB. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client.operations import create_project - - return create_project(project_name, project_code, library_project) - - -def with_pipeline_io(func): - @functools.wraps(func) - def wrapped(*args, **kwargs): - global legacy_io - if legacy_io is None: - from openpype.pipeline import legacy_io - return func(*args, **kwargs) - return wrapped - - -@deprecated("openpype.client.get_linked_asset_ids") -def get_linked_asset_ids(asset_doc): - """Return linked asset ids for `asset_doc` from DB - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - (list): MongoDB ids of input links. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_asset_ids - from openpype.pipeline import legacy_io - - project_name = legacy_io.active_project() - - return get_linked_asset_ids(project_name, asset_doc=asset_doc) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key_from_context") -def get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name=None, - dbcon=None, project_settings=None -): - """Helper function to get template key for workfile template. - - Do the same as `get_workfile_template_key` but returns value for "session - context". - - It is required to pass one of 'dbcon' with already set project name or - 'project_name' arguments. - - Args: - asset_name(str): Name of asset document. - task_name(str): Task name for which is template key retrieved. - Must be available on asset document under `data.tasks`. - host_name(str): Name of host implementation for which is workfile - used. - project_name(str): Project name where asset and task is. Not required - when 'dbcon' is passed. - dbcon(AvalonMongoDB): Connection to mongo with already set project - under `AVALON_PROJECT`. Not required when 'project_name' is passed. - project_settings(dict): Project settings for passed 'project_name'. - Not required at all but makes function faster. 
- Raises: - ValueError: When both 'dbcon' and 'project_name' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import ( - get_workfile_template_key_from_context - ) - - if not project_name: - if not dbcon: - raise ValueError(( - "`get_workfile_template_key_from_context` requires to pass" - " one of 'dbcon' or 'project_name' arguments." - )) - project_name = dbcon.active_project() - - return get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name, project_settings - ) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key") -def get_workfile_template_key( - task_type, host_name, project_name=None, project_settings=None -): - """Workfile template key which should be used to get workfile template. - - Function is using profiles from project settings to return right template - for passet task type and host name. - - One of 'project_name' or 'project_settings' must be passed it is preferred - to pass settings if are already available. - - Args: - task_type(str): Name of task type. - host_name(str): Name of host implementation (e.g. "maya", "nuke", ...) - project_name(str): Name of project in which context should look for - settings. Not required if `project_settings` are passed. - project_settings(dict): Prepare project settings for project name. - Not needed if `project_name` is passed. - - Raises: - ValueError: When both 'project_name' and 'project_settings' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_workfile_template_key - - return get_workfile_template_key( - task_type, host_name, project_name, project_settings - ) - - -@deprecated("openpype.pipeline.context_tools.compute_session_changes") -def compute_session_changes( - session, task=None, asset=None, app=None, template_key=None -): - """Compute the changes for a Session object on asset, task or app switch - - This does *NOT* update the Session object, but returns the changes - required for a valid update of the Session. - - Args: - session (dict): The initial session to compute changes to. - This is required for computing the full Work Directory, as that - also depends on the values that haven't changed. - task (str, Optional): Name of task to switch to. - asset (str or dict, Optional): Name of asset to switch to. - You can also directly provide the Asset dictionary as returned - from the database to avoid an additional query. (optimization) - app (str, Optional): Name of app to switch to. - - Returns: - dict: The required changes in the Session dictionary. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import compute_session_changes - - if isinstance(asset, six.string_types): - project_name = legacy_io.active_project() - asset = get_asset_by_name(project_name, asset) - - return compute_session_changes( - session, - asset, - task, - template_key - ) - - -@deprecated("openpype.pipeline.context_tools.get_workdir_from_session") -def get_workdir_from_session(session=None, template_key=None): - """Calculate workdir path based on session data. - - Args: - session (Union[None, Dict[str, str]]): Session to use. If not passed - current context session is used (from legacy_io). - template_key (Union[str, None]): Precalculate template key to define - workfile template name in Anatomy. 
- - Returns: - str: Workdir path. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.context_tools import get_workdir_from_session - - return get_workdir_from_session(session, template_key) - - -@deprecated("openpype.pipeline.context_tools.change_current_context") -def update_current_task(task=None, asset=None, app=None, template_key=None): - """Update active Session to a new task work area. - - This updates the live Session to a different `asset`, `task` or `app`. - - Args: - task (str): The task to set. - asset (str): The asset to set. - app (str): The app to set. - - Returns: - dict: The changed key, values in the current Session. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import change_current_context - - project_name = legacy_io.active_project() - if isinstance(asset, six.string_types): - asset = get_asset_by_name(project_name, asset) - - return change_current_context(asset, task, template_key) - - -@deprecated("openpype.pipeline.workfile.BuildWorkfile") -def BuildWorkfile(): - """Build workfile class was moved to workfile pipeline. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.workfile import BuildWorkfile - - return BuildWorkfile() - - -@deprecated("openpype.pipeline.create.get_legacy_creator_by_name") -def get_creator_by_name(creator_name, case_sensitive=False): - """Find creator plugin by name. - - Args: - creator_name (str): Name of creator class that should be returned. - case_sensitive (bool): Match of creator plugin name is case sensitive. - Set to `False` by default. - - Returns: - Creator: Return first matching plugin or `None`. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.create import get_legacy_creator_by_name - - return get_legacy_creator_by_name(creator_name, case_sensitive) - - -def _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy=None -): - """Prepare Task context for anatomy data. - - WARNING: this data structure is currently used only in workfile templates. - Key "task" is currently in rest of pipeline used as string with task - name. - - Args: - project_doc (dict): Project document with available "name" and - "data.code" keys. - asset_doc (dict): Asset document from MongoDB. - task_name (str): Name of context task. - anatomy (Anatomy): Optionally Anatomy for passed project name can be - passed as Anatomy creation may be slow. - - Returns: - dict: With Anatomy context data. - """ - - from openpype.pipeline.template_data import get_general_template_data - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - asset_name = asset_doc["name"] - project_task_types = anatomy["tasks"] - - # get relevant task type from asset doc - assert task_name in asset_doc["data"]["tasks"], ( - "Task name \"{}\" not found on asset \"{}\"".format( - task_name, asset_name - ) - ) - - task_type = asset_doc["data"]["tasks"][task_name].get("type") - - assert task_type, ( - "Task name \"{}\" on asset \"{}\" does not have specified task type." - ).format(asset_name, task_name) - - # get short name for task type defined in default anatomy settings - project_task_type_data = project_task_types.get(task_type) - assert project_task_type_data, ( - "Something went wrong. 
Default anatomy tasks are not holding" - "requested task type: `{}`".format(task_type) - ) - - data = { - "project": { - "name": project_doc["name"], - "code": project_doc["data"].get("code") - }, - "asset": asset_name, - "task": { - "name": task_name, - "type": task_type, - "short": project_task_type_data["short_name"] - } - } - - system_general_data = get_general_template_data() - data.update(system_general_data) - - return data - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_context") -def get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - It is expected that passed argument are already queried documents of - project and asset as parents of processing task name. - - Existence of formatted path is not validated. - - Args: - template_profiles(list): Template profiles from settings. - project_doc(dict): Project document from MongoDB. - asset_doc(dict): Asset document from MongoDB. - task_name(str): Name of task for which templates are filtered. - anatomy(Anatomy): Optionally passed anatomy object for passed project - name. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - # get project, asset, task anatomy context data - anatomy_context_data = _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy - ) - # add root dict - anatomy_context_data["root"] = anatomy.roots - - # get task type for the task in context - current_task_type = anatomy_context_data["task"]["type"] - - # get path from matching profile - matching_item = filter_profiles( - template_profiles, - {"task_types": current_task_type} - ) - # when path is available try to format it in case - # there are some anatomy template strings - if matching_item: - template = matching_item["path"][platform.system().lower()] - return StringTemplate.format_strict_template( - template, anatomy_context_data - ) - - return None - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_string_context" -) -def get_custom_workfile_template_by_string_context( - template_profiles, project_name, asset_name, task_name, - dbcon=None, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - Passed context are string representations of project, asset and task. - Function will query documents of project and asset to be able use - `get_custom_workfile_template_by_context` for rest of logic. - - Args: - template_profiles(list): Loaded workfile template profiles. - project_name(str): Project name. - asset_name(str): Asset name. - task_name(str): Task name. - dbcon(AvalonMongoDB): Optional avalon implementation of mongo - connection with context Session. - anatomy(Anatomy): Optionally prepared anatomy object for passed - project. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - project_name = None - if anatomy is not None: - project_name = anatomy.project_name - - if not project_name and dbcon is not None: - project_name = dbcon.active_project() - - if not project_name: - raise ValueError("Can't determina project") - - project_doc = get_project(project_name, fields=["name", "data.code"]) - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["name", "data.tasks"]) - - return get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy - ) - - -@deprecated("openpype.pipeline.context_tools.get_custom_workfile_template") -def get_custom_workfile_template(template_profiles): - """Filter and fill workfile template profiles by current context. - - Current context is defined by `legacy_io.Session`. That's why this - function should be used only inside host where context is set and stable. - - Args: - template_profiles(list): Template profiles from settings. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - - return get_custom_workfile_template_by_string_context( - template_profiles, - legacy_io.Session["AVALON_PROJECT"], - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"], - legacy_io - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile_with_version") -def get_last_workfile_with_version( - workdir, file_template, fill_data, extensions -): - """Return last workfile version. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - - Returns: - tuple: Last workfile with version if there is any otherwise - returns (None, None). - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile_with_version - - return get_last_workfile_with_version( - workdir, file_template, fill_data, extensions - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile") -def get_last_workfile( - workdir, file_template, fill_data, extensions, full_path=False -): - """Return last workfile filename. - - Returns file with version 1 if there is not workfile yet. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - full_path(bool): Full path to file is returned if set to True. - - Returns: - str: Last or first workfile as filename of full path to filename. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile - - return get_last_workfile( - workdir, file_template, fill_data, extensions, full_path - ) - - -@deprecated("openpype.client.get_linked_representation_id") -def get_linked_ids_for_representations( - project_name, repre_ids, dbcon=None, link_type=None, max_depth=0 -): - """Returns list of linked ids of particular type (if provided). 
- - Goes from representations to version, back to representations - Args: - project_name (str) - repre_ids (list) or (ObjectId) - dbcon (avalon.mongodb.AvalonMongoDB, optional): Avalon Mongo connection - with Session. - link_type (str): ['reference', '..] - max_depth (int): limit how many levels of recursion - - Returns: - (list) of ObjectId - linked representations - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_representation_id - - if not isinstance(repre_ids, list): - repre_ids = [repre_ids] - - output = [] - for repre_id in repre_ids: - output.extend(get_linked_representation_id( - project_name, - repre_id=repre_id, - link_type=link_type, - max_depth=max_depth - )) - return output diff --git a/openpype/lib/delivery.py b/openpype/lib/delivery.py deleted file mode 100644 index efb542de75..0000000000 --- a/openpype/lib/delivery.py +++ /dev/null @@ -1,252 +0,0 @@ -"""Functions useful for delivery action or loader""" -import os -import shutil -import functools -import warnings - - -class DeliveryDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", DeliveryDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=DeliveryDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.lib.path_tools.collect_frames") -def collect_frames(files): - """Returns dict of source path and its frame, if from sequence - - Uses clique as most precise solution, used when anatomy template that - created files is not known. - - Assumption is that frames are separated by '.', negative frames are not - allowed. - - Args: - files(list) or (set with single value): list of source paths - - Returns: - (dict): {'/asset/subset_v001.0001.png': '0001', ....} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import collect_frames - - return collect_frames(files) - - -@deprecated("openpype.lib.path_tools.format_file_size") -def sizeof_fmt(num, suffix=None): - """Returns formatted string with size in appropriate unit - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import format_file_size - return format_file_size(num, suffix) - - -@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy") -def path_from_representation(representation, anatomy): - """Get representation path using representation document and anatomy. - - Args: - representation (Dict[str, Any]): Representation document. - anatomy (Anatomy): Project anatomy. 
- - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.load import get_representation_path_with_anatomy - - return get_representation_path_with_anatomy(representation, anatomy) - - -@deprecated -def copy_file(src_path, dst_path): - """Hardlink file if possible(to save space), copy if not""" - from openpype.lib import create_hard_link # safer importing - - if os.path.exists(dst_path): - return - try: - create_hard_link( - src_path, - dst_path - ) - except OSError: - shutil.copyfile(src_path, dst_path) - - -@deprecated("openpype.pipeline.delivery.get_format_dict") -def get_format_dict(anatomy, location_path): - """Returns replaced root values from user provider value. - - Args: - anatomy (Anatomy) - location_path (str): user provided value - - Returns: - (dict): prepared for formatting of a template - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import get_format_dict - - return get_format_dict(anatomy, location_path) - - -@deprecated("openpype.pipeline.delivery.check_destination_path") -def check_destination_path(repre_id, - anatomy, anatomy_data, - datetime_data, template_name): - """ Try to create destination path based on 'template_name'. - - In the case that path cannot be filled, template contains unmatched - keys, provide error message to filter out repre later. - - Args: - anatomy (Anatomy) - anatomy_data (dict): context to fill anatomy - datetime_data (dict): values with actual date - template_name (str): to pick correct delivery template - - Returns: - (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import check_destination_path - - return check_destination_path( - repre_id, - anatomy, - anatomy_data, - datetime_data, - template_name - ) - - -@deprecated("openpype.pipeline.delivery.deliver_single_file") -def process_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """Copy single file to calculated path based on template - - Args: - src_path(str): path of source representation file - _repre (dict): full repre, used only in process_sequence, here only - as to share same signature - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_single_file - - return deliver_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) - - -@deprecated("openpype.pipeline.delivery.deliver_sequence") -def process_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """ For Pype2(mainly - works in 3 too) where representation might not - contain files. - - Uses listing physical files (not 'files' on repre as a)might not be - present, b)might not be reliable for representation and copying them. 
- - TODO Should be refactored when files are sufficient to drive all - representations. - - Args: - src_path(str): path of source representation file - repre (dict): full representation - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_sequence - - return deliver_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) diff --git a/openpype/lib/events.py b/openpype/lib/events.py index dca58fcf93..496b765a05 100644 --- a/openpype/lib/events.py +++ b/openpype/lib/events.py @@ -3,6 +3,7 @@ import os import re import copy import inspect +import collections import logging import weakref from uuid import uuid4 @@ -340,8 +341,8 @@ class EventSystem(object): event.emit() return event - def emit_event(self, event): - """Emit event object. + def _process_event(self, event): + """Process event topic and trigger callbacks. Args: event (Event): Prepared event with topic and data. @@ -356,6 +357,91 @@ class EventSystem(object): for callback in invalid_callbacks: self._registered_callbacks.remove(callback) + def emit_event(self, event): + """Emit event object. + + Args: + event (Event): Prepared event with topic and data. + """ + + self._process_event(event) + + +class QueuedEventSystem(EventSystem): + """Events are automatically processed in queue. + + If callback triggers another event, the event is not processed until + all callbacks of previous event are processed. + + Allows to implement custom event process loop by changing 'auto_execute'. + + Note: + This probably should be default behavior of 'EventSystem'. Changing it + now could cause problems in existing code. + + Args: + auto_execute (Optional[bool]): If 'True', events are processed + automatically. Custom loop calling 'process_next_event' + must be implemented when set to 'False'. + """ + + def __init__(self, auto_execute=True): + super(QueuedEventSystem, self).__init__() + self._event_queue = collections.deque() + self._current_event = None + self._auto_execute = auto_execute + + def __len__(self): + return self.count() + + def count(self): + """Get number of events in queue. + + Returns: + int: Number of events in queue. + """ + + return len(self._event_queue) + + def process_next_event(self): + """Process next event in queue. + + Should be used only if 'auto_execute' is set to 'False'. Only single + event is processed. + + Returns: + Union[Event, None]: Processed event. + """ + + if self._current_event is not None: + raise ValueError("An event is already in progress.") + + if not self._event_queue: + return None + event = self._event_queue.popleft() + self._current_event = event + self._process_event(event) + self._current_event = None + return event + + def emit_event(self, event): + """Emit event object. + + Args: + event (Event): Prepared event with topic and data. 
+ """ + + if not self._auto_execute or self._current_event is not None: + self._event_queue.append(event) + return + + self._event_queue.append(event) + while self._event_queue: + event = self._event_queue.popleft() + self._current_event = event + self._process_event(event) + self._current_event = None + class GlobalEventSystem: """Event system living in global scope of process. diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index 6f52efdfcc..c54541a116 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -5,6 +5,8 @@ import platform import json import tempfile +from openpype import AYON_SERVER_ENABLED + from .log import Logger from .vendor_bin_utils import find_executable @@ -162,12 +164,19 @@ def run_subprocess(*args, **kwargs): return full_output -def clean_envs_for_openpype_process(env=None): - """Modify environments that may affect OpenPype process. +def clean_envs_for_ayon_process(env=None): + """Modify environments that may affect ayon-launcher process. Main reason to implement this function is to pop PYTHONPATH which may be affected by in-host environments. + + Args: + env (Optional[dict[str, str]]): Environment variables to modify. + + Returns: + dict[str, str]: Environment variables for ayon process. """ + if env is None: env = os.environ @@ -179,6 +188,64 @@ def clean_envs_for_openpype_process(env=None): return env +def clean_envs_for_openpype_process(env=None): + """Modify environments that may affect OpenPype process. + + Main reason to implement this function is to pop PYTHONPATH which may be + affected by in-host environments. + """ + + if AYON_SERVER_ENABLED: + return clean_envs_for_ayon_process(env=env) + + if env is None: + env = os.environ + + # Exclude some environment variables from a copy of the environment + env = env.copy() + for key in ["PYTHONPATH", "PYTHONHOME"]: + env.pop(key, None) + + return env + + +def run_ayon_launcher_process(*args, **kwargs): + """Execute OpenPype process with passed arguments and wait. + + Wrapper for 'run_process' which prepends OpenPype executable arguments + before passed arguments and define environments if are not passed. + + Values from 'os.environ' are used for environments if are not passed. + They are cleaned using 'clean_envs_for_openpype_process' function. + + Example: + ``` + run_ayon_process("run", "") + ``` + + Args: + *args (str): ayon-launcher cli arguments. + **kwargs (Any): Keyword arguments for subprocess.Popen. + + Returns: + str: Full output of subprocess concatenated stdout and stderr. + """ + + args = get_ayon_launcher_args(*args) + env = kwargs.pop("env", None) + # Keep env untouched if are passed and not empty + if not env: + # Skip envs that can affect OpenPype process + # - fill more if you find more + env = clean_envs_for_openpype_process(os.environ) + + # Only keep OpenPype version if we are running from build. + if not is_running_from_build(): + env.pop("OPENPYPE_VERSION", None) + + return run_subprocess(args, env=env, **kwargs) + + def run_openpype_process(*args, **kwargs): """Execute OpenPype process with passed arguments and wait. @@ -189,14 +256,16 @@ def run_openpype_process(*args, **kwargs): They are cleaned using 'clean_envs_for_openpype_process' function. Example: - ``` - run_detached_process("run", "") - ``` + >>> run_openpype_process("version") Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. 
""" + + if AYON_SERVER_ENABLED: + return run_ayon_launcher_process(*args, **kwargs) + args = get_openpype_execute_args(*args) env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty @@ -219,18 +288,18 @@ def run_detached_process(args, **kwargs): They are cleaned using 'clean_envs_for_openpype_process' function. Example: - ``` - run_detached_openpype_process("run", "") - ``` + >>> run_detached_process("run", "./path_to.py") + Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. Returns: subprocess.Popen: Pointer to launched process but it is possible that launched process is already killed (on linux). """ + env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty if not env: @@ -294,16 +363,37 @@ def path_to_subprocess_arg(path): return subprocess.list2cmdline([path]) -def get_pype_execute_args(*args): - """Backwards compatible function for 'get_openpype_execute_args'.""" - import traceback +def get_ayon_launcher_args(*args): + """Arguments to run ayon-launcher process. - log = Logger.get_logger("get_pype_execute_args") - stack = "\n".join(traceback.format_stack()) - log.warning(( - "Using deprecated function 'get_pype_execute_args'. Called from:\n{}" - ).format(stack)) - return get_openpype_execute_args(*args) + Arguments for subprocess when need to spawn new pype process. Which may be + needed when new python process for pype scripts must be executed in build + pype. + + Reasons: + Ayon-launcher started from code has different executable set to + virtual env python and must have path to script as first argument + which is not needed for built application. + + Args: + *args (str): Any arguments that will be added after executables. + + Returns: + list[str]: List of arguments to run ayon-launcher process. + """ + + executable = os.environ["AYON_EXECUTABLE"] + launch_args = [executable] + + executable_filename = os.path.basename(executable) + if "python" in executable_filename.lower(): + filepath = os.path.join(os.environ["AYON_ROOT"], "start.py") + launch_args.append(filepath) + + if args: + launch_args.extend(args) + + return launch_args def get_openpype_execute_args(*args): @@ -321,19 +411,22 @@ def get_openpype_execute_args(*args): It is possible to pass any arguments that will be added after pype executables. """ - pype_executable = os.environ["OPENPYPE_EXECUTABLE"] - pype_args = [pype_executable] - executable_filename = os.path.basename(pype_executable) + if AYON_SERVER_ENABLED: + return get_ayon_launcher_args(*args) + + executable = os.environ["OPENPYPE_EXECUTABLE"] + launch_args = [executable] + + executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - pype_args.append( - os.path.join(os.environ["OPENPYPE_ROOT"], "start.py") - ) + filepath = os.path.join(os.environ["OPENPYPE_ROOT"], "start.py") + launch_args.append(filepath) if args: - pype_args.extend(args) + launch_args.extend(args) - return pype_args + return launch_args def get_linux_launcher_args(*args): @@ -345,6 +438,9 @@ def get_linux_launcher_args(*args): It is possible that this function is used in OpenPype build which does not have yet the new executable. In that case 'None' is returned. + Todos: + Replace by script in scripts for ayon-launcher. + Args: args (iterable): List of additional arguments added after executable argument. 
@@ -353,19 +449,24 @@ def get_linux_launcher_args(*args): list: Executables with possible positional argument to script when called from code. """ - filename = "app_launcher" - openpype_executable = os.environ["OPENPYPE_EXECUTABLE"] - executable_filename = os.path.basename(openpype_executable) + filename = "app_launcher" + if AYON_SERVER_ENABLED: + executable = os.environ["AYON_EXECUTABLE"] + else: + executable = os.environ["OPENPYPE_EXECUTABLE"] + + executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - script_path = os.path.join( - os.environ["OPENPYPE_ROOT"], - "{}.py".format(filename) - ) - launch_args = [openpype_executable, script_path] + if AYON_SERVER_ENABLED: + root = os.environ["AYON_ROOT"] + else: + root = os.environ["OPENPYPE_ROOT"] + script_path = os.path.join(root, "{}.py".format(filename)) + launch_args = [executable, script_path] else: new_executable = os.path.join( - os.path.dirname(openpype_executable), + os.path.dirname(executable), filename ) executable_path = find_executable(new_executable) diff --git a/openpype/lib/local_settings.py b/openpype/lib/local_settings.py index c6c9699240..dae6e074af 100644 --- a/openpype/lib/local_settings.py +++ b/openpype/lib/local_settings.py @@ -29,6 +29,7 @@ except ImportError: import six import appdirs +from openpype import AYON_SERVER_ENABLED from openpype.settings import ( get_local_settings, get_system_settings @@ -493,10 +494,18 @@ class OpenPypeSettingsRegistry(JSONSettingRegistry): """ def __init__(self, name=None): - self.vendor = "pypeclub" - self.product = "openpype" + if AYON_SERVER_ENABLED: + vendor = "Ynput" + product = "AYON" + default_name = "AYON_settings" + else: + vendor = "pypeclub" + product = "openpype" + default_name = "openpype_settings" + self.vendor = vendor + self.product = product if not name: - name = "openpype_settings" + name = default_name path = appdirs.user_data_dir(self.product, self.vendor) super(OpenPypeSettingsRegistry, self).__init__(name, path) @@ -517,11 +526,54 @@ def _create_local_site_id(registry=None): return new_id +def get_ayon_appdirs(*args): + """Local app data directory of AYON client. + + Args: + *args (Iterable[str]): Subdirectories/files in local app data dir. + + Returns: + str: Path to directory/file in local app data dir. + """ + + return os.path.join( + appdirs.user_data_dir("AYON", "Ynput"), + *args + ) + + +def _get_ayon_local_site_id(): + # used for background syncing + site_id = os.environ.get("AYON_SITE_ID") + if site_id: + return site_id + + site_id_path = get_ayon_appdirs("site_id") + if os.path.exists(site_id_path): + with open(site_id_path, "r") as stream: + site_id = stream.read() + + if site_id: + return site_id + + try: + from ayon_common.utils import get_local_site_id as _get_local_site_id + site_id = _get_local_site_id() + except ImportError: + raise ValueError("Couldn't access local site id") + + return site_id + + def get_local_site_id(): """Get local site identifier. Identifier is created if does not exists yet. 
""" + + if AYON_SERVER_ENABLED: + return _get_ayon_local_site_id() + # override local id from environment # used for background syncing if os.environ.get("OPENPYPE_LOCAL_ID"): diff --git a/openpype/lib/log.py b/openpype/lib/log.py index 26dcd86eec..72071063ec 100644 --- a/openpype/lib/log.py +++ b/openpype/lib/log.py @@ -24,6 +24,7 @@ import traceback import threading import copy +from openpype import AYON_SERVER_ENABLED from openpype.client.mongo import ( MongoEnvNotSet, get_default_components, @@ -212,7 +213,7 @@ class Logger: log_mongo_url_components = None # Database name in Mongo - log_database_name = os.environ["OPENPYPE_DATABASE_NAME"] + log_database_name = os.environ.get("OPENPYPE_DATABASE_NAME") # Collection name under database in Mongo log_collection_name = "logs" @@ -326,12 +327,17 @@ class Logger: # Change initialization state to prevent runtime changes # if is executed during runtime cls.initialized = False - cls.log_mongo_url_components = get_default_components() + if not AYON_SERVER_ENABLED: + cls.log_mongo_url_components = get_default_components() # Define if should logging to mongo be used - use_mongo_logging = bool(log4mongo is not None) - if use_mongo_logging: - use_mongo_logging = os.environ.get("OPENPYPE_LOG_TO_SERVER") == "1" + if AYON_SERVER_ENABLED: + use_mongo_logging = False + else: + use_mongo_logging = ( + log4mongo is not None + and os.environ.get("OPENPYPE_LOG_TO_SERVER") == "1" + ) # Set mongo id for process (ONLY ONCE) if use_mongo_logging and cls.mongo_process_id is None: @@ -453,6 +459,9 @@ class Logger: if not cls.use_mongo_logging: return + if not cls.log_database_name: + raise ValueError("Database name for logs is not set") + client = log4mongo.handlers._connection if not client: client = cls.get_log_mongo_connection() @@ -483,21 +492,3 @@ class Logger: cls.initialize() return OpenPypeMongoConnection.get_mongo_client() - - -class PypeLogger(Logger): - """Duplicate of 'Logger'. - - Deprecated: - Class will be removed after release version 3.16.* - """ - - @classmethod - def get_logger(cls, *args, **kwargs): - logger = Logger.get_logger(*args, **kwargs) - # TODO uncomment when replaced most of places - logger.warning(( - "'openpype.lib.PypeLogger' is deprecated class." - " Please use 'openpype.lib.Logger' instead." - )) - return logger diff --git a/openpype/lib/mongo.py b/openpype/lib/mongo.py deleted file mode 100644 index bb2ee6016a..0000000000 --- a/openpype/lib/mongo.py +++ /dev/null @@ -1,61 +0,0 @@ -import warnings -import functools -from openpype.client.mongo import ( - MongoEnvNotSet, - OpenPypeMongoConnection, -) - - -class MongoDeprecatedWarning(DeprecationWarning): - pass - - -def mongo_deprecated(func): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - @functools.wraps(func) - def new_func(*args, **kwargs): - warnings.simplefilter("always", MongoDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'." - " Function was moved to 'openpype.client.mongo'." 
- ).format(func.__name__), - category=MongoDeprecatedWarning, - stacklevel=2 - ) - return func(*args, **kwargs) - return new_func - - -@mongo_deprecated -def get_default_components(): - from openpype.client.mongo import get_default_components - - return get_default_components() - - -@mongo_deprecated -def should_add_certificate_path_to_mongo_url(mongo_url): - from openpype.client.mongo import should_add_certificate_path_to_mongo_url - - return should_add_certificate_path_to_mongo_url(mongo_url) - - -@mongo_deprecated -def validate_mongo_connection(mongo_uri): - from openpype.client.mongo import validate_mongo_connection - - return validate_mongo_connection(mongo_uri) - - -__all__ = ( - "MongoEnvNotSet", - "OpenPypeMongoConnection", - "get_default_components", - "should_add_certificate_path_to_mongo_url", - "validate_mongo_connection", -) diff --git a/openpype/lib/openpype_version.py b/openpype/lib/openpype_version.py index e052002468..1c8356d5fe 100644 --- a/openpype/lib/openpype_version.py +++ b/openpype/lib/openpype_version.py @@ -13,6 +13,7 @@ import os import sys import openpype.version +from openpype import AYON_SERVER_ENABLED from .python_module_tools import import_filepath @@ -25,8 +26,25 @@ def get_openpype_version(): return openpype.version.__version__ +def get_ayon_launcher_version(): + version_filepath = os.path.join( + os.environ["AYON_ROOT"], + "version.py" + ) + if not os.path.exists(version_filepath): + return None + content = {} + with open(version_filepath, "r") as stream: + exec(stream.read(), content) + return content["__version__"] + + def get_build_version(): """OpenPype version of build.""" + + if AYON_SERVER_ENABLED: + return get_ayon_launcher_version() + # Return OpenPype version if is running from code if not is_running_from_build(): return get_openpype_version() @@ -50,7 +68,11 @@ def is_running_from_build(): Returns: bool: True if running from build. """ - executable_path = os.environ["OPENPYPE_EXECUTABLE"] + + if AYON_SERVER_ENABLED: + executable_path = os.environ["AYON_EXECUTABLE"] + else: + executable_path = os.environ["OPENPYPE_EXECUTABLE"] executable_filename = os.path.basename(executable_path) if "python" in executable_filename.lower(): return False @@ -58,6 +80,8 @@ def is_running_from_build(): def is_staging_enabled(): + if AYON_SERVER_ENABLED: + return os.getenv("AYON_USE_STAGING") == "1" return os.environ.get("OPENPYPE_USE_STAGING") == "1" @@ -88,6 +112,9 @@ def is_running_staging(): bool: Using staging version or not. """ + if AYON_SERVER_ENABLED: + return is_staging_enabled() + if os.environ.get("OPENPYPE_IS_STAGING") == "1": return True diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 0b6d0a3391..fec6a0c47d 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -2,59 +2,12 @@ import os import re import logging import platform -import functools -import warnings import clique log = logging.getLogger(__name__) -class PathToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PathToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PathToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - def format_file_size(file_size, suffix=None): """Returns formatted string with size in appropriate unit. @@ -269,99 +222,3 @@ def get_last_version_from_path(path_dir, filter): return filtred_files[-1] return None - - -@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths") -def concatenate_splitted_paths(split_paths, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import concatenate_splitted_paths - - return concatenate_splitted_paths(split_paths, anatomy) - - -@deprecated -def get_format_data(anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.template_data import get_project_template_data - - data = get_project_template_data(project_name=anatomy.project_name) - data["root"] = anatomy.roots - return data - - -@deprecated("openpype.pipeline.project_folders.fill_paths") -def fill_paths(path_list, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import fill_paths - - return fill_paths(path_list, anatomy) - - -@deprecated("openpype.pipeline.project_folders.create_project_folders") -def create_project_folders(basic_paths, project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_project_folders - - return create_project_folders(project_name, basic_paths) - - -@deprecated("openpype.pipeline.project_folders.get_project_basic_paths") -def get_project_basic_paths(project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import get_project_basic_paths - - return get_project_basic_paths(project_name) - - -@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders") -def create_workdir_extra_folders( - workdir, host_name, task_type, task_name, project_name, - project_settings=None -): - """Create extra folders in work directory based on context. - - Args: - workdir (str): Path to workdir where workfiles is stored. - host_name (str): Name of host implementation. - task_type (str): Type of task for which extra folders should be - created. - task_name (str): Name of task for which extra folders should be - created. - project_name (str): Name of project on which task is. - project_settings (dict): Prepared project settings. Are loaded if not - passed. 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_workdir_extra_folders - - return create_workdir_extra_folders( - workdir, - host_name, - task_type, - task_name, - project_name, - project_settings - ) diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 10fd3940b8..d204fc2c8f 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -4,157 +4,9 @@ import os import logging import re -import warnings -import functools - -from openpype.client import get_asset_by_id - log = logging.getLogger(__name__) -class PluginToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PluginToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PluginToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.create.TaskNotSetError") -def TaskNotSetError(*args, **kwargs): - from openpype.pipeline.create import TaskNotSetError - - return TaskNotSetError(*args, **kwargs) - - -@deprecated("openpype.pipeline.create.get_subset_name") -def get_subset_name_with_asset_doc( - family, - variant, - task_name, - asset_doc, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None -): - """Calculate subset name based on passed context and OpenPype settings. - - Subst name templates are defined in `project_settings/global/tools/creator - /subset_name_profiles` where are profiles with host name, family, task name - and task type filters. If context does not match any profile then - `DEFAULT_SUBSET_TEMPLATE` is used as default template. - - That's main reason why so many arguments are required to calculate subset - name. - - Args: - family (str): Instance family. - variant (str): In most of cases it is user input during creation. - task_name (str): Task name on which context is instance created. - asset_doc (dict): Queried asset document with it's tasks in data. - Used to get task type. - project_name (str): Name of project on which is instance created. - Important for project settings that are loaded. - host_name (str): One of filtering criteria for template profile - filters. - default_template (str): Default template if any profile does not match - passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used if - is not passed. - dynamic_data (dict): Dynamic data specific for a creator which creates - instance. 
- """ - - from openpype.pipeline.create import get_subset_name - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - -@deprecated -def get_subset_name( - family, - variant, - task_name, - asset_id, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None, - dbcon=None -): - """Calculate subset name using OpenPype settings. - - This variant of function expects asset id as argument. - - This is legacy function should be replaced with - `get_subset_name_with_asset_doc` where asset document is expected. - """ - - from openpype.pipeline.create import get_subset_name - - if project_name is None: - project_name = dbcon.project_name - - asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"]) - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - def prepare_template_data(fill_pairs): """ Prepares formatted data for filling template. diff --git a/openpype/lib/pype_info.py b/openpype/lib/pype_info.py index 8370ecc88f..2f57d76850 100644 --- a/openpype/lib/pype_info.py +++ b/openpype/lib/pype_info.py @@ -5,6 +5,7 @@ import platform import getpass import socket +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import get_local_settings from .execute import get_openpype_execute_args from .local_settings import get_local_site_id @@ -33,6 +34,21 @@ def get_openpype_info(): } +def get_ayon_info(): + executable_args = get_openpype_execute_args() + if is_running_from_build(): + version_type = "build" + else: + version_type = "code" + return { + "build_verison": get_build_version(), + "version_type": version_type, + "executable": executable_args[-1], + "ayon_root": os.environ["AYON_ROOT"], + "server_url": os.environ["AYON_SERVER_URL"] + } + + def get_workstation_info(): """Basic information about workstation.""" host_name = socket.gethostname() @@ -52,12 +68,17 @@ def get_workstation_info(): def get_all_current_info(): """All information about current process in one dictionary.""" - return { - "pype": get_openpype_info(), + + output = { "workstation": get_workstation_info(), "env": os.environ.copy(), "local_settings": get_local_settings() } + if AYON_SERVER_ENABLED: + output["ayon"] = get_ayon_info() + else: + output["openpype"] = get_openpype_info() + return output def extract_pype_info_to_file(dirpath): diff --git a/openpype/lib/python_module_tools.py b/openpype/lib/python_module_tools.py index a10263f991..bedf19562d 100644 --- a/openpype/lib/python_module_tools.py +++ b/openpype/lib/python_module_tools.py @@ -270,8 +270,8 @@ def is_func_signature_supported(func, *args, **kwargs): Args: func (function): A function where the signature should be tested. - *args (tuple[Any]): Positional arguments for function signature. - **kwargs (dict[str, Any]): Keyword arguments for function signature. + *args (Any): Positional arguments for function signature. + **kwargs (Any): Keyword arguments for function signature. Returns: bool: Function can pass in arguments. 
diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index 771f670f89..1b75b96525 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -11,8 +11,8 @@ import xml.etree.ElementTree from .execute import run_subprocess from .vendor_bin_utils import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, ) @@ -83,11 +83,11 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False): Stdout should contain xml format string. """ - args = [ - get_oiio_tools_path(), + args = get_oiio_tool_args( + "oiiotool", "--info", "-v" - ] + ) if subimages: args.append("-a") @@ -315,6 +315,92 @@ def parse_oiio_xml_output(xml_string, logger=None): return output +def get_review_info_by_layer_name(channel_names): + """Get channels info grouped by layer name. + + Finds all layers in channel names and returns list of dictionaries with + information about channels in layer. + Example output (not real world example): + [ + { + "name": "Main", + "review_channels": { + "R": "Main.red", + "G": "Main.green", + "B": "Main.blue", + "A": None, + } + }, + { + "name": "Composed", + "review_channels": { + "R": "Composed.R", + "G": "Composed.G", + "B": "Composed.B", + "A": "Composed.A", + } + }, + ... + ] + + Args: + channel_names (list[str]): List of channel names. + + Returns: + list[dict]: List of channels information. + """ + + layer_names_order = [] + rgba_by_layer_name = collections.defaultdict(dict) + channels_by_layer_name = collections.defaultdict(dict) + + for channel_name in channel_names: + layer_name = "" + last_part = channel_name + if "." in channel_name: + layer_name, last_part = channel_name.rsplit(".", 1) + + channels_by_layer_name[layer_name][channel_name] = last_part + if last_part.lower() not in { + "r", "red", + "g", "green", + "b", "blue", + "a", "alpha" + }: + continue + + if layer_name not in layer_names_order: + layer_names_order.append(layer_name) + # R, G, B or A + channel = last_part[0].upper() + rgba_by_layer_name[layer_name][channel] = channel_name + + # Put empty layer to the beginning of the list + # - if input has R, G, B, A channels they should be used for review + if "" in layer_names_order: + layer_names_order.remove("") + layer_names_order.insert(0, "") + + output = [] + for layer_name in layer_names_order: + rgba_layer_info = rgba_by_layer_name[layer_name] + red = rgba_layer_info.get("R") + green = rgba_layer_info.get("G") + blue = rgba_layer_info.get("B") + if not red or not green or not blue: + continue + output.append({ + "name": layer_name, + "review_channels": { + "R": red, + "G": green, + "B": blue, + "A": rgba_layer_info.get("A"), + } + }) + return output + + def get_convert_rgb_channels(channel_names): """Get first available RGB(A) group from channels info. @@ -323,7 +409,7 @@ def get_convert_rgb_channels(channel_names): # Ideal situation channels_info: [ "R", "G", "B", "A" - } + ] ``` Result will be `("R", "G", "B", "A")` @@ -331,50 +417,60 @@ def get_convert_rgb_channels(channel_names): # Not ideal situation channels_info: [ "beauty.red", - "beuaty.green", + "beauty.green", "beauty.blue", "depth.Z" ] ``` Result will be `("beauty.red", "beauty.green", "beauty.blue", None)` + Args: + channel_names (list[str]): List of channel names. + Returns: - NoneType: There is not channel combination that matches RGB - combination. - tuple: Tuple of 4 channel names defying channel names for R, G, B, A - where A can be None. 
+        Union[NoneType, tuple[str, str, str, Union[str, None]]]: Tuple of
+            4 channel names defining channel names for R, G, B, A or None
+            if there is no layer with an RGB combination.
     """
 
-    rgb_by_main_name = collections.defaultdict(dict)
-    main_name_order = [""]
-    for channel_name in channel_names:
-        name_parts = channel_name.split(".")
-        rgb_part = name_parts.pop(-1).lower()
-        main_name = ".".join(name_parts)
-        if rgb_part in ("r", "red"):
-            rgb_by_main_name[main_name]["R"] = channel_name
-        elif rgb_part in ("g", "green"):
-            rgb_by_main_name[main_name]["G"] = channel_name
-        elif rgb_part in ("b", "blue"):
-            rgb_by_main_name[main_name]["B"] = channel_name
-        elif rgb_part in ("a", "alpha"):
-            rgb_by_main_name[main_name]["A"] = channel_name
-        else:
-            continue
-        if main_name not in main_name_order:
-            main_name_order.append(main_name)
-    output = None
-    for main_name in main_name_order:
-        colors = rgb_by_main_name.get(main_name) or {}
-        red = colors.get("R")
-        green = colors.get("G")
-        blue = colors.get("B")
-        alpha = colors.get("A")
-        if red is not None and green is not None and blue is not None:
-            output = (red, green, blue, alpha)
-            break
+    channels_info = get_review_info_by_layer_name(channel_names)
+    for item in channels_info:
+        review_channels = item["review_channels"]
+        return (
+            review_channels["R"],
+            review_channels["G"],
+            review_channels["B"],
+            review_channels["A"]
+        )
+    return None
 
-    return output
+
+def get_review_layer_name(src_filepath):
+    """Find layer name that could be used for review.
+
+    Args:
+        src_filepath (str): Path to input file.
+
+    Returns:
+        Union[str, None]: Layer name or None.
+    """
+
+    ext = os.path.splitext(src_filepath)[-1].lower()
+    if ext != ".exr":
+        return None
+
+    # Load info about file from oiio tool
+    input_info = get_oiio_info_for_input(src_filepath)
+    if not input_info:
+        return None
+
+    channel_names = input_info["channelnames"]
+    channels_info = get_review_info_by_layer_name(channel_names)
+    for item in channels_info:
+        # Layer name can be '', when review channels are 'R', 'G', 'B'
+        #   without layer
+        return item["name"] or None
+    return None
 
 
 def should_convert_for_ffmpeg(src_filepath):
@@ -395,7 +491,7 @@
     if not is_oiio_supported():
         return None
 
-    # Load info about info from oiio tool
+    # Load info about file from oiio tool
     input_info = get_oiio_info_for_input(src_filepath)
     if not input_info:
         return None
@@ -486,12 +582,11 @@
         compression = "none"
 
     # Prepare subprocess arguments
-    oiio_cmd = [
-        get_oiio_tools_path(),
-
+    oiio_cmd = get_oiio_tool_args(
+        "oiiotool",
         # Don't add any additional attributes
         "--nosoftwareattrib",
-    ]
+    )
     # Add input compression if available
     if compression:
         oiio_cmd.extend(["--compression", compression])
@@ -656,12 +751,11 @@ def convert_input_paths_for_ffmpeg(
 
     for input_path in input_paths:
         # Prepare subprocess arguments
-        oiio_cmd = [
-            get_oiio_tools_path(),
-
+        oiio_cmd = get_oiio_tool_args(
+            "oiiotool",
            # Don't add any additional attributes
            "--nosoftwareattrib",
-        ]
+        )
         # Add input compression if available
         if compression:
             oiio_cmd.extend(["--compression", compression])
@@ -726,11 +820,11 @@ def get_ffprobe_data(path_to_file, logger=None):
     """
     if not logger:
         logger = logging.getLogger(__name__)
-    logger.info(
+    logger.debug(
         "Getting information about input \"{}\".".format(path_to_file)
     )
-    args = [
-        get_ffmpeg_tool_path("ffprobe"),
+    ffprobe_args = get_ffmpeg_tool_args("ffprobe")
+    args = ffprobe_args + [
         "-hide_banner",
         "-loglevel", "fatal",
"-show_error", @@ -1085,17 +1179,13 @@ def convert_colorspace( if logger is None: logger = logging.getLogger(__name__) - oiio_cmd = [get_oiio_tools_path()] - - if input_args: - oiio_cmd.extend(input_args) - - oiio_cmd.extend([ + oiio_cmd = get_oiio_tool_args( + "oiiotool", input_path, # Don't add any additional attributes "--nosoftwareattrib", "--colorconfig", config_path - ]) + ) if all([target_colorspace, view, display]): raise ValueError("Colorspace and both screen and display" diff --git a/openpype/lib/usdlib.py b/openpype/lib/usdlib.py index 5ef1d38f87..c166feb3a6 100644 --- a/openpype/lib/usdlib.py +++ b/openpype/lib/usdlib.py @@ -9,7 +9,7 @@ except ImportError: from mvpxr import Usd, UsdGeom, Sdf, Kind from openpype.client import get_project, get_asset_by_name -from openpype.pipeline import legacy_io, Anatomy +from openpype.pipeline import Anatomy, get_current_project_name log = logging.getLogger(__name__) @@ -126,7 +126,7 @@ def create_model(filename, asset, variant_subsets): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -177,7 +177,7 @@ def create_shade(filename, asset, variant_subsets): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -213,7 +213,7 @@ def create_shade_variation(filename, asset, model_variant, shade_variants): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -314,7 +314,7 @@ def get_usd_master_path(asset, subset, representation): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() anatomy = Anatomy(project_name) project_doc = get_project( project_name, @@ -334,6 +334,9 @@ def get_usd_master_path(asset, subset, representation): "name": project_name, "code": project_doc.get("data", {}).get("code") }, + "folder": { + "name": asset_doc["name"], + }, "asset": asset_doc["name"], "subset": subset, "representation": representation, diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index f27c78d486..dc8bb7435e 100644 --- a/openpype/lib/vendor_bin_utils.py +++ b/openpype/lib/vendor_bin_utils.py @@ -3,9 +3,15 @@ import logging import platform import subprocess +from openpype import AYON_SERVER_ENABLED + log = logging.getLogger("Vendor utils") +class ToolNotFoundError(Exception): + """Raised when tool arguments are not found.""" + + class CachedToolPaths: """Cache already used and discovered tools and their executables. @@ -252,7 +258,7 @@ def _check_args_returncode(args): return proc.returncode == 0 -def _oiio_executable_validation(filepath): +def _oiio_executable_validation(args): """Validate oiio tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -270,32 +276,63 @@ def _oiio_executable_validation(filepath): should be used. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. 
""" - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "--help"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_oiio_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get oiio arguments + from ayon_third_party import get_oiio_arguments + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_oiio_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get OpenImageIO args. Reason: {}".format(exc)) + return None def get_oiio_tools_path(tool="oiiotool"): - """Path to vendorized OpenImageIO tool executables. + """Path to OpenImageIO tool executables. - On Window it adds .exe extension if missing from tool argument. + On Windows it adds .exe extension if missing from tool argument. Args: - tool (string): Tool name (oiiotool, maketx, ...). + tool (string): Tool name 'oiiotool', 'maketx', etc. Default is "oiiotool". """ if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool) + if args: + if len(args) > 1: + raise ValueError( + "AYON oiio arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_OIIO_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -321,7 +358,33 @@ def get_oiio_tools_path(tool="oiiotool"): return tool_executable_path -def _ffmpeg_executable_validation(filepath): +def get_oiio_tool_args(tool_name, *extra_args): + """Arguments to launch OpenImageIO tool. + + Args: + tool_name (str): Tool name 'oiiotool', 'maketx', etc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool_name) + if args: + return args + extra_args + + path = get_oiio_tools_path(tool_name) + if path: + return [path] + extra_args + raise ToolNotFoundError( + "OIIO '{}' tool not found.".format(tool_name) + ) + + +def _ffmpeg_executable_validation(args): """Validate ffmpeg tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -338,24 +401,45 @@ def _ffmpeg_executable_validation(filepath): It does not validate if the executable is really a ffmpeg tool. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "-version"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_ffmpeg_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get ffmpeg arguments + from ayon_third_party import get_ffmpeg_arguments + + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_ffmpeg_arguments(tool_name) + except Exception as exc: + print("!!! 
Failed to get FFmpeg args. Reason: {}".format(exc))
+        return None
 
 
 def get_ffmpeg_tool_path(tool="ffmpeg"):
     """Path to vendorized FFmpeg executable.
 
     Args:
-        tool (string): Tool name (ffmpeg, ffprobe, ...).
+        tool (str): Tool name 'ffmpeg', 'ffprobe', etc.
             Default is "ffmpeg".
 
     Returns:
@@ -365,6 +449,17 @@ def get_ffmpeg_tool_path(tool="ffmpeg"):
     if CachedToolPaths.is_tool_cached(tool):
         return CachedToolPaths.get_executable_path(tool)
 
+    if AYON_SERVER_ENABLED:
+        args = _get_ayon_ffmpeg_tool_args(tool)
+        if args is not None:
+            if len(args) > 1:
+                raise ValueError(
+                    "AYON ffmpeg arguments consist of multiple arguments."
+                )
+            tool_executable_path = args[0]
+            CachedToolPaths.cache_executable_path(tool, tool_executable_path)
+            return tool_executable_path
+
     custom_paths_str = os.environ.get("OPENPYPE_FFMPEG_PATHS") or ""
     tool_executable_path = find_tool_in_custom_paths(
         custom_paths_str.split(os.pathsep),
@@ -390,19 +485,44 @@
     return tool_executable_path
 
 
+def get_ffmpeg_tool_args(tool_name, *extra_args):
+    """Arguments to launch FFmpeg tool.
+
+    Args:
+        tool_name (str): Tool name 'ffmpeg', 'ffprobe', etc.
+        *extra_args (str): Extra arguments to add to after tool arguments.
+
+    Returns:
+        list[str]: List of arguments.
+    """
+
+    extra_args = list(extra_args)
+
+    if AYON_SERVER_ENABLED:
+        args = _get_ayon_ffmpeg_tool_args(tool_name)
+        if args:
+            return args + extra_args
+
+    executable_path = get_ffmpeg_tool_path(tool_name)
+    if executable_path:
+        return [executable_path] + extra_args
+    raise ToolNotFoundError(
+        "FFmpeg '{}' tool not found.".format(tool_name)
+    )
+
+
 def is_oiio_supported():
     """Checks if oiiotool is configured for this platform.
 
     Returns:
         bool: OIIO tool executable is available.
     """
-    loaded_path = oiio_path = get_oiio_tools_path()
-    if oiio_path:
-        oiio_path = find_executable(oiio_path)
-    if not oiio_path:
-        log.debug("OIIOTool is not configured or not present at {}".format(
-            loaded_path
-        ))
+    try:
+        args = get_oiio_tool_args("oiiotool")
+    except ToolNotFoundError:
+        args = None
+    if not args:
+        log.debug("OIIOTool is not configured or not present.")
         return False
-    return True
+    return _oiio_executable_validation(args)
diff --git a/openpype/modules/avalon_apps/avalon_app.py b/openpype/modules/avalon_apps/avalon_app.py
index a0226ecc5c..57754793c4 100644
--- a/openpype/modules/avalon_apps/avalon_app.py
+++ b/openpype/modules/avalon_apps/avalon_app.py
@@ -1,5 +1,6 @@
 import os
 
+from openpype import AYON_SERVER_ENABLED
 from openpype.modules import OpenPypeModule, ITrayModule
 
 
@@ -75,20 +76,11 @@ class AvalonModule(OpenPypeModule, ITrayModule):
 
     def show_library_loader(self):
         if self._library_loader_window is None:
-            from qtpy import QtCore
-
-            from openpype.tools.libraryloader import LibraryLoaderWindow
             from openpype.pipeline import install_openpype_plugins
-
-            libraryloader = LibraryLoaderWindow(
-                show_projects=True,
-                show_libraries=True
-            )
-            # Remove always on top flag for tray
-            window_flags = libraryloader.windowFlags()
-            if window_flags | QtCore.Qt.WindowStaysOnTopHint:
-                window_flags ^= QtCore.Qt.WindowStaysOnTopHint
-            libraryloader.setWindowFlags(window_flags)
-            self._library_loader_window = libraryloader
+            if AYON_SERVER_ENABLED:
+                self._init_ayon_loader()
+            else:
+                self._init_library_loader()
 
             install_openpype_plugins()
 
@@ -106,3 +98,25 @@ class AvalonModule(OpenPypeModule, ITrayModule):
         if self.tray_initialized:
             from .rest_api import AvalonRestApiResource
             self.rest_api_obj = AvalonRestApiResource(self, server_manager)
+
+    def 
_init_library_loader(self): + from qtpy import QtCore + from openpype.tools.libraryloader import LibraryLoaderWindow + + libraryloader = LibraryLoaderWindow( + show_projects=True, + show_libraries=True + ) + # Remove always on top flag for tray + window_flags = libraryloader.windowFlags() + if window_flags | QtCore.Qt.WindowStaysOnTopHint: + window_flags ^= QtCore.Qt.WindowStaysOnTopHint + libraryloader.setWindowFlags(window_flags) + self._library_loader_window = libraryloader + + def _init_ayon_loader(self): + from openpype.tools.ayon_loader.ui import LoaderWindow + + libraryloader = LoaderWindow() + + self._library_loader_window = libraryloader diff --git a/openpype/modules/base.py b/openpype/modules/base.py index fb9b4e1096..a3c21718b9 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Base class for Pype Modules.""" +import copy import os import sys import json @@ -12,8 +13,12 @@ import collections import traceback from uuid import uuid4 from abc import ABCMeta, abstractmethod -import six +import six +import appdirs +import ayon_api + +from openpype import AYON_SERVER_ENABLED from openpype.settings import ( get_system_settings, SYSTEM_SETTINGS_KEY, @@ -30,8 +35,9 @@ from openpype.settings.lib import ( from openpype.lib import ( Logger, import_filepath, - import_module_from_dirpath + import_module_from_dirpath, ) +from openpype.lib.openpype_version import is_staging_enabled from .interfaces import ( OpenPypeInterface, @@ -53,6 +59,14 @@ IGNORED_DEFAULT_FILENAMES = ( "example_addons", "default_modules", ) +# Modules that won't be loaded in AYON mode from "./openpype/modules" +# - the same modules are ignored in "./server_addon/create_ayon_addons.py" +IGNORED_FILENAMES_IN_AYON = { + "ftrack", + "shotgrid", + "sync_server", + "slack", +} # Inherit from `object` for Python 2 hosts @@ -186,7 +200,11 @@ def get_dynamic_modules_dirs(): Returns: list: Paths loaded from studio overrides. """ + output = [] + if AYON_SERVER_ENABLED: + return output + value = get_studio_system_settings_overrides() for key in ("modules", "addon_paths", platform.system().lower()): if key not in value: @@ -299,6 +317,136 @@ def load_modules(force=False): time.sleep(0.1) +def _get_ayon_addons_information(): + """Receive information about addons to use from server. + + Todos: + Actually ask server for the information. + Allow project name as optional argument to be able to query information + about used addons for specific project. + Returns: + List[Dict[str, Any]]: List of addon information to use. + """ + + output = [] + bundle_name = os.getenv("AYON_BUNDLE_NAME") + bundles = ayon_api.get_bundles()["bundles"] + final_bundle = next( + ( + bundle + for bundle in bundles + if bundle["name"] == bundle_name + ), + None + ) + if final_bundle is None: + return output + + bundle_addons = final_bundle["addons"] + addons = ayon_api.get_addons_info()["addons"] + for addon in addons: + name = addon["name"] + versions = addon.get("versions") + addon_version = bundle_addons.get(name) + if addon_version is None or not versions: + continue + version = versions.get(addon_version) + if version: + version = copy.deepcopy(version) + version["name"] = name + version["version"] = addon_version + output.append(version) + return output + + +def _load_ayon_addons(openpype_modules, modules_key, log): + """Load AYON addons based on information from server. 
+
+    This function should not trigger downloading of any addons but only use
+    what is already available on the machine (at least in first stages of
+    development).
+
+    Args:
+        openpype_modules (_ModuleClass): Module object where modules are
+            stored.
+        modules_key (str): Key under which the addon modules are stored
+            in 'sys.modules'.
+        log (logging.Logger): Logger object.
+
+    Returns:
+        List[str]: List of v3 addons to skip loading because a v4
+            alternative is imported.
+    """
+
+    v3_addons_to_skip = []
+
+    addons_info = _get_ayon_addons_information()
+    if not addons_info:
+        return v3_addons_to_skip
+    addons_dir = os.environ.get("AYON_ADDONS_DIR")
+    if not addons_dir:
+        addons_dir = os.path.join(
+            appdirs.user_data_dir("AYON", "Ynput"),
+            "addons"
+        )
+    if not os.path.exists(addons_dir):
+        log.warning("Addons directory does not exist. Path \"{}\"".format(
+            addons_dir
+        ))
+        return v3_addons_to_skip
+
+    for addon_info in addons_info:
+        addon_name = addon_info["name"]
+        addon_version = addon_info["version"]
+
+        folder_name = "{}_{}".format(addon_name, addon_version)
+        addon_dir = os.path.join(addons_dir, folder_name)
+        if not os.path.exists(addon_dir):
+            log.debug((
+                "No localized client code found for addon {} {}."
+            ).format(addon_name, addon_version))
+            continue
+
+        sys.path.insert(0, addon_dir)
+        imported_modules = []
+        for name in os.listdir(addon_dir):
+            path = os.path.join(addon_dir, name)
+            basename, ext = os.path.splitext(name)
+            is_dir = os.path.isdir(path)
+            is_py_file = ext.lower() == ".py"
+            if not is_py_file and not is_dir:
+                continue
+
+            try:
+                mod = __import__(basename, fromlist=("",))
+                imported_modules.append(mod)
+            except BaseException:
+                log.warning(
+                    "Failed to import \"{}\"".format(basename),
+                    exc_info=True
+                )
+
+        if not imported_modules:
+            log.warning("Addon {} {} has no content to import".format(
+                addon_name, addon_version
+            ))
+            continue
+
+        if len(imported_modules) == 1:
+            mod = imported_modules[0]
+            addon_alias = getattr(mod, "V3_ALIAS", None)
+            if not addon_alias:
+                addon_alias = addon_name
+            v3_addons_to_skip.append(addon_alias)
+            new_import_str = "{}.{}".format(modules_key, addon_alias)
+
+            sys.modules[new_import_str] = mod
+            setattr(openpype_modules, addon_alias, mod)
+
+        else:
+            log.info("More than one module was imported")
+
+    return v3_addons_to_skip
+
+
 def _load_modules():
     # Key under which will be modules imported in `sys.modules`
     modules_key = "openpype_modules"
@@ -308,6 +456,12 @@
     log = Logger.get_logger("ModulesLoader")
 
+    ignore_addon_names = []
+    if AYON_SERVER_ENABLED:
+        ignore_addon_names = _load_ayon_addons(
+            openpype_modules, modules_key, log
+        )
+
     # Look for OpenPype modules in paths defined with `get_module_dirs`
     # - dynamically imported OpenPype modules and addons
     module_dirs = get_module_dirs()
@@ -337,6 +491,10 @@
         is_in_current_dir = dirpath == current_dir
         is_in_host_dir = dirpath == hosts_dir
 
+        ignored_current_dir_filenames = set(IGNORED_DEFAULT_FILENAMES)
+        if AYON_SERVER_ENABLED:
+            ignored_current_dir_filenames |= IGNORED_FILENAMES_IN_AYON
+
         for filename in os.listdir(dirpath):
             # Ignore filenames
             if filename in IGNORED_FILENAMES:
                 continue
 
             if (
                 is_in_current_dir
-                and filename in IGNORED_DEFAULT_FILENAMES
+                and filename in ignored_current_dir_filenames
             ):
                 continue
 
             fullpath = os.path.join(dirpath, filename)
             basename, ext = os.path.splitext(filename)
 
+            if basename in ignore_addon_names:
+                continue
+
             # Validations
             if os.path.isdir(fullpath):
                 # Check existence of init file
diff --git a/openpype/modules/deadline/abstract_submit_deadline.py 
b/openpype/modules/deadline/abstract_submit_deadline.py index e3e94d50cd..23e959d84c 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -19,8 +19,13 @@ import requests import pyblish.api from openpype.pipeline.publish import ( AbstractMetaInstancePlugin, - KnownPublishError + KnownPublishError, + OpenPypePyblishPluginMixin ) +from openpype.pipeline.publish.lib import ( + replace_with_published_scene_path +) +from openpype import AYON_SERVER_ENABLED JSONDecodeError = getattr(json.decoder, "JSONDecodeError", ValueError) @@ -35,7 +40,7 @@ def requests_post(*args, **kwargs): Warning: Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. + of defense SSL is providing, and it is not recommended. """ if 'verify' not in kwargs: @@ -56,7 +61,7 @@ def requests_get(*args, **kwargs): Warning: Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. + of defense SSL is providing, and it is not recommended. """ if 'verify' not in kwargs: @@ -393,16 +398,28 @@ class DeadlineJobInfo(object): for key, value in data.items(): setattr(self, key, value) + def add_render_job_env_var(self): + """Check if in OP or AYON mode and use appropriate env var.""" + if AYON_SERVER_ENABLED: + self.EnvironmentKeyValue["AYON_RENDER_JOB"] = "1" + self.EnvironmentKeyValue["AYON_BUNDLE_NAME"] = ( + os.environ["AYON_BUNDLE_NAME"]) + else: + self.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + @six.add_metaclass(AbstractMetaInstancePlugin) -class AbstractSubmitDeadline(pyblish.api.InstancePlugin): +class AbstractSubmitDeadline(pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin): """Class abstracting access to Deadline.""" label = "Submit to Deadline" order = pyblish.api.IntegratorOrder + 0.1 + import_reference = False use_published = True asset_dependencies = False + default_priority = 50 def __init__(self, *args, **kwargs): super(AbstractSubmitDeadline, self).__init__(*args, **kwargs) @@ -521,72 +538,8 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): published. """ - instance = self._instance - workfile_instance = self._get_workfile_instance(instance.context) - if workfile_instance is None: - return - - # determine published path from Anatomy. 
- template_data = workfile_instance.data.get("anatomyData") - rep = workfile_instance.data["representations"][0] - template_data["representation"] = rep.get("name") - template_data["ext"] = rep.get("ext") - template_data["comment"] = None - - anatomy = instance.context.data['anatomy'] - template_obj = anatomy.templates_obj["publish"]["path"] - template_filled = template_obj.format_strict(template_data) - file_path = os.path.normpath(template_filled) - - self.log.info("Using published scene for render {}".format(file_path)) - - if not os.path.exists(file_path): - self.log.error("published scene does not exist!") - raise - - if not replace_in_path: - return file_path - - # now we need to switch scene in expected files - # because token will now point to published - # scene file and that might differ from current one - def _clean_name(path): - return os.path.splitext(os.path.basename(path))[0] - - new_scene = _clean_name(file_path) - orig_scene = _clean_name(instance.context.data["currentFile"]) - expected_files = instance.data.get("expectedFiles") - - if isinstance(expected_files[0], dict): - # we have aovs and we need to iterate over them - new_exp = {} - for aov, files in expected_files[0].items(): - replaced_files = [] - for f in files: - replaced_files.append( - str(f).replace(orig_scene, new_scene) - ) - new_exp[aov] = replaced_files - # [] might be too much here, TODO - instance.data["expectedFiles"] = [new_exp] - else: - new_exp = [] - for f in expected_files: - new_exp.append( - str(f).replace(orig_scene, new_scene) - ) - instance.data["expectedFiles"] = new_exp - - metadata_folder = instance.data.get("publishRenderMetadataFolder") - if metadata_folder: - metadata_folder = metadata_folder.replace(orig_scene, - new_scene) - instance.data["publishRenderMetadataFolder"] = metadata_folder - self.log.info("Scene name was switched {} -> {}".format( - orig_scene, new_scene - )) - - return file_path + return replace_with_published_scene_path( + self._instance, replace_in_path=replace_in_path) def assemble_payload( self, job_info=None, plugin_info=None, aux_files=None): @@ -647,22 +600,3 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): self._instance.data["deadlineSubmissionJob"] = result return result["_id"] - - @staticmethod - def _get_workfile_instance(context): - """Find workfile instance in context""" - for i in context: - - is_workfile = ( - "workfile" in i.data.get("families", []) or - i.data["family"] == "workfile" - ) - if not is_workfile: - continue - - # test if there is instance of workfile waiting - # to be published. - assert i.data.get("publish", True) is True, ( - "Workfile (scene) must be published along") - - return i diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py index 2de6073e29..9b4f89c129 100644 --- a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py +++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py @@ -8,6 +8,7 @@ attribute or using default server if that attribute doesn't exists. 
from maya import cmds import pyblish.api +from openpype.pipeline.publish import KnownPublishError class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): @@ -21,7 +22,9 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): def process(self, instance): instance.data["deadlineUrl"] = self._collect_deadline_url(instance) - self.log.info( + instance.data["deadlineUrl"] = \ + instance.data["deadlineUrl"].strip().rstrip("/") + self.log.debug( "Using {} for submission.".format(instance.data["deadlineUrl"])) def _collect_deadline_url(self, render_instance): @@ -79,13 +82,14 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): if k in default_servers } - msg = ( - "\"{}\" server on instance is not enabled in project settings." - " Enabled project servers:\n{}".format( - instance_server, project_enabled_servers + if instance_server not in project_enabled_servers: + msg = ( + "\"{}\" server on instance is not enabled in project settings." + " Enabled project servers:\n{}".format( + instance_server, project_enabled_servers + ) ) - ) - assert instance_server in project_enabled_servers, msg + raise KnownPublishError(msg) self.log.debug("Using project approved server.") return project_enabled_servers[instance_server] diff --git a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py index 1a0d615dc3..58721efad3 100644 --- a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py +++ b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py @@ -48,3 +48,6 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin): context.data["defaultDeadline"] = deadline_webservice self.log.debug("Overriding from project settings with {}".format( # noqa: E501 deadline_webservice)) + + context.data["defaultDeadline"] = \ + context.data["defaultDeadline"].strip().rstrip("/") diff --git a/openpype/modules/deadline/plugins/publish/collect_pools.py b/openpype/modules/deadline/plugins/publish/collect_pools.py index e221eb00ea..a25b149f11 100644 --- a/openpype/modules/deadline/plugins/publish/collect_pools.py +++ b/openpype/modules/deadline/plugins/publish/collect_pools.py @@ -36,12 +36,17 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin, instance.data["primaryPool"] = ( attr_values.get("primaryPool") or self.primary_pool or "none" ) + if instance.data["primaryPool"] == "-": + instance.data["primaryPool"] = None if not instance.data.get("secondaryPool"): instance.data["secondaryPool"] = ( attr_values.get("secondaryPool") or self.secondary_pool or "none" # noqa ) + if instance.data["secondaryPool"] == "-": + instance.data["secondaryPool"] = None + @classmethod def get_attribute_defs(cls): # TODO: Preferably this would be an enum for the user diff --git a/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml b/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml index 0e7d72910e..aa21df3734 100644 --- a/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml +++ b/openpype/modules/deadline/plugins/publish/help/validate_deadline_pools.xml @@ -1,31 +1,31 @@ - Scene setting + Deadline Pools - ## Invalid Deadline pools found +## Invalid Deadline pools found - Configured pools don't match what is set in Deadline. +Configured pools don't match available pools in Deadline. - {invalid_value_str} +### How to repair? - ### How to repair? 
+If your instance had deadline pools set on creation, remove or
+change them.
 
-    If your instance had deadline pools set on creation, remove or
-    change them.
+In other cases, inform an admin to change them in Settings.
 
-    In other cases inform admin to change them in Settings.
+Available deadline pools:
+
+{pools_str}
 
-    Available deadline pools {pools_str}.
-    ### __Detailed Info__
+### __Detailed Info__
 
-    This error is shown when deadline pool is not on Deadline anymore. It
-    could happen in case of republish old workfile which was created with
-    previous deadline pools,
-    or someone changed pools on Deadline side, but didn't modify Openpype
-    Settings.
+This error is shown when a configured pool is not available on Deadline. It
+can happen when publishing old workfiles which were created with previous
+deadline pools, or someone changed the available pools in Deadline,
+but didn't modify OpenPype Settings to match the changes.
\ No newline at end of file
diff --git a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
index 83dd5b49e2..009375e87e 100644
--- a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
@@ -106,8 +106,8 @@ class AfterEffectsSubmitDeadline(
             if value:
                 dln_job_info.EnvironmentKeyValue[key] = value
 
-        # to recognize job from PYPE for turning Event On/Off
-        dln_job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1"
+        # to recognize render jobs
+        dln_job_info.add_render_job_env_var()
 
         return dln_job_info
 
diff --git a/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py b/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py
new file mode 100644
index 0000000000..4a7497b075
--- /dev/null
+++ b/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py
@@ -0,0 +1,181 @@
+# -*- coding: utf-8 -*-
+"""Submitting render job to Deadline."""
+
+import os
+import getpass
+import attr
+from datetime import datetime
+
+import bpy
+
+from openpype.lib import is_running_from_build
+from openpype.pipeline import legacy_io
+from openpype.pipeline.farm.tools import iter_expected_files
+from openpype.tests.lib import is_in_tests
+
+from openpype_modules.deadline import abstract_submit_deadline
+from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
+
+
+@attr.s
+class BlenderPluginInfo():
+    SceneFile = attr.ib(default=None)  # Input
+    Version = attr.ib(default=None)  # Mandatory for Deadline
+    SaveFile = attr.ib(default=True)
+
+
+class BlenderSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
+    label = "Submit Render to Deadline"
+    hosts = ["blender"]
+    families = ["render.farm"]
+
+    use_published = True
+    priority = 50
+    chunk_size = 1
+    jobInfo = {}
+    pluginInfo = {}
+    group = None
+
+    def get_job_info(self):
+        job_info = DeadlineJobInfo(Plugin="Blender")
+
+        job_info.update(self.jobInfo)
+
+        instance = self._instance
+        context = instance.context
+
+        # Always use the original work file name for the Job name even when
+        # rendering is done from the published Work File. The original work
+        # file name is clearer because it can also have subversion strings,
+        # etc. which are stripped for the published file.
+        src_filepath = context.data["currentFile"]
+        src_filename = os.path.basename(src_filepath)
+
+        if is_in_tests():
+            src_filename += datetime.now().strftime("%d%m%Y%H%M%S")
+
+        job_info.Name = f"{src_filename} - {instance.name}"
+        job_info.BatchName = src_filename
+        job_info.Plugin = instance.data.get("blenderRenderPlugin", "Blender")
+        job_info.UserName = context.data.get("deadlineUser", getpass.getuser())
+
+        # Deadline requires integers in frame range
+        frames = "{start}-{end}x{step}".format(
+            start=int(instance.data["frameStartHandle"]),
+            end=int(instance.data["frameEndHandle"]),
+            step=int(instance.data["byFrameStep"]),
+        )
+        job_info.Frames = frames
+
+        job_info.Pool = instance.data.get("primaryPool")
+        job_info.SecondaryPool = instance.data.get("secondaryPool")
+        job_info.Comment = context.data.get("comment")
+        job_info.Priority = instance.data.get("priority", self.priority)
+
+        if self.group != "none" and self.group:
+            job_info.Group = self.group
+
+        attr_values = self.get_attr_values_from_data(instance.data)
+        render_globals = instance.data.setdefault("renderGlobals", {})
+        machine_list = attr_values.get("machineList", "")
+        if machine_list:
+            if attr_values.get("whitelist", True):
+                machine_list_key = "Whitelist"
+            else:
+                machine_list_key = "Blacklist"
+            render_globals[machine_list_key] = machine_list
+
+        job_info.Priority = attr_values.get("priority")
+        job_info.ChunkSize = attr_values.get("chunkSize")
+
+        # Add options from RenderGlobals
+        render_globals = instance.data.get("renderGlobals", {})
+        job_info.update(render_globals)
+
+        keys = [
+            "FTRACK_API_KEY",
+            "FTRACK_API_USER",
+            "FTRACK_SERVER",
+            "OPENPYPE_SG_USER",
+            "AVALON_PROJECT",
+            "AVALON_ASSET",
+            "AVALON_TASK",
+            "AVALON_APP_NAME",
+            "OPENPYPE_DEV",
+            "IS_TEST"
+        ]
+
+        # Add OpenPype version if we are running from build.
+        if is_running_from_build():
+            keys.append("OPENPYPE_VERSION")
+
+        # Add mongo url if it's enabled
+        if self._instance.context.data.get("deadlinePassMongoUrl"):
+            keys.append("OPENPYPE_MONGO")
+
+        environment = dict({key: os.environ[key] for key in keys
+                            if key in os.environ}, **legacy_io.Session)
+
+        for key in keys:
+            value = environment.get(key)
+            if not value:
+                continue
+            job_info.EnvironmentKeyValue[key] = value
+
+        # to recognize render jobs
+        job_info.add_render_job_env_var()
+        job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1"
+
+        # Adding file dependencies.
+ if self.asset_dependencies: + dependencies = instance.context.data["fileDependencies"] + for dependency in dependencies: + job_info.AssetDependency += dependency + + # Add list of expected files to job + # --------------------------------- + exp = instance.data.get("expectedFiles") + for filepath in iter_expected_files(exp): + job_info.OutputDirectory += os.path.dirname(filepath) + job_info.OutputFilename += os.path.basename(filepath) + + return job_info + + def get_plugin_info(self): + plugin_info = BlenderPluginInfo( + SceneFile=self.scene_path, + Version=bpy.app.version_string, + SaveFile=True, + ) + + plugin_payload = attr.asdict(plugin_info) + + # Patching with pluginInfo from settings + for key, value in self.pluginInfo.items(): + plugin_payload[key] = value + + return plugin_payload + + def process_submission(self): + instance = self._instance + + expected_files = instance.data["expectedFiles"] + if not expected_files: + raise RuntimeError("No Render Elements found!") + + first_file = next(iter_expected_files(expected_files)) + output_dir = os.path.dirname(first_file) + instance.data["outputDir"] = output_dir + instance.data["toBeRenderedOn"] = "deadline" + + payload = self.assemble_payload() + return self.submit(payload) + + def from_published_scene(self): + """ + This is needed to set the correct path for the json metadata. Because + the rendering path is set in the blend file during the collection, + and the path is adjusted to use the published scene, this ensures that + the metadata and the rendered files are in the same location. + """ + return super().from_published_scene(False) diff --git a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py index ee28612b44..47a0a25755 100644 --- a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py @@ -27,7 +27,7 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin): deadline_job_delay = "00:00:08:00" def process(self, instance): - instance.data["toBeRenderedOn"] = "deadline" + context = instance.context # get default deadline webservice url from deadline module @@ -183,10 +183,10 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin): } plugin = payload["JobInfo"]["Plugin"] - self.log.info("using render plugin : {}".format(plugin)) + self.log.debug("using render plugin : {}".format(plugin)) - self.log.info("Submitting..") - self.log.info(json.dumps(payload, indent=4, sort_keys=True)) + self.log.debug("Submitting..") + self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) # adding expectied files to instance.data self.expected_files(instance, render_path) diff --git a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py index a48596c6bf..9a718aa089 100644 --- a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py @@ -6,13 +6,15 @@ import requests import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io from openpype.pipeline.publish import ( OpenPypePyblishPluginMixin ) from openpype.lib import ( BoolDef, - NumberDef + NumberDef, + is_running_from_build ) @@ -34,6 +36,8 @@ class FusionSubmitDeadline( targets = ["local"] # presets + plugin = None + priority = 50 chunk_size = 1 concurrent_tasks = 1 @@ -173,7 +177,7 @@ 
class FusionSubmitDeadline( "SecondaryPool": instance.data.get("secondaryPool"), "Group": self.group, - "Plugin": "Fusion", + "Plugin": self.plugin, "Frames": "{start}-{end}".format( start=int(instance.data["frameStartHandle"]), end=int(instance.data["frameEndHandle"]) @@ -216,16 +220,34 @@ class FusionSubmitDeadline( # Include critical variables with submission keys = [ - # TODO: This won't work if the slaves don't have access to - # these paths, such as if slaves are running Linux and the - # submitter is on Windows. - "PYTHONPATH", - "OFX_PLUGIN_PATH", - "FUSION9_MasterPrefs" + "FTRACK_API_KEY", + "FTRACK_API_USER", + "FTRACK_SERVER", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_TASK", + "AVALON_APP_NAME", + "OPENPYPE_DEV", + "OPENPYPE_LOG_NO_COLORS", + "IS_TEST" ] + + # Add OpenPype version if we are running from build. + if is_running_from_build(): + keys.append("OPENPYPE_VERSION") + environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) + # to recognize render jobs + if AYON_SERVER_ENABLED: + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + render_job_label = "AYON_RENDER_JOB" + else: + render_job_label = "OPENPYPE_RENDER_JOB" + + environment[render_job_label] = "1" + payload["JobInfo"].update({ "EnvironmentKeyValue%d" % index: "{key}={value}".format( key=key, @@ -233,8 +255,8 @@ class FusionSubmitDeadline( ) for index, key in enumerate(environment) }) - self.log.info("Submitting..") - self.log.info(json.dumps(payload, indent=4, sort_keys=True)) + self.log.debug("Submitting..") + self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) # E.g. http://192.168.0.1:8082/api/jobs url = "{}/api/jobs".format(deadline_url) diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py index 84fca11d9d..17e672334c 100644 --- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py @@ -265,7 +265,7 @@ class HarmonySubmitDeadline( job_info.SecondaryPool = self._instance.data.get("secondaryPool") job_info.ChunkSize = self.chunk_size batch_name = os.path.basename(self._instance.data["source"]) - if is_in_tests: + if is_in_tests(): batch_name += datetime.now().strftime("%d%m%Y%H%M%S") job_info.BatchName = batch_name job_info.Department = self.department @@ -299,8 +299,8 @@ class HarmonySubmitDeadline( if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() return job_info @@ -369,7 +369,7 @@ class HarmonySubmitDeadline( # rendering, we need to unzip it. 
published_scene = Path( self.from_published_scene(False)) - self.log.info(f"Processing {published_scene.as_posix()}") + self.log.debug(f"Processing {published_scene.as_posix()}") xstage_path = self._unzip_scene_file(published_scene) render_path = xstage_path.parent / "renders" diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py b/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py index 68aa653804..39c0c3afe4 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py @@ -162,7 +162,7 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin): ) # Submit - self.log.info("Submitting..") + self.log.debug("Submitting..") self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) # E.g. http://192.168.0.1:8082/api/jobs diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index 254914a850..8f21a920be 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -88,7 +88,6 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): "AVALON_APP_NAME", "OPENPYPE_DEV", "OPENPYPE_LOG_NO_COLORS", - "OPENPYPE_VERSION" ] # Add OpenPype version if we are running from build. @@ -106,8 +105,8 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() for i, filepath in enumerate(instance.data["files"]): dirname = os.path.dirname(filepath) @@ -142,4 +141,3 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): # Store output dir for unified publisher (filesequence) output_dir = os.path.dirname(instance.data["files"][0]) instance.data["outputDir"] = output_dir - instance.data["toBeRenderedOn"] = "deadline" diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py index b6a30e36b7..073da3019a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py @@ -12,7 +12,9 @@ from openpype.pipeline import ( legacy_io, OpenPypePyblishPluginMixin ) -from openpype.settings import get_project_settings +from openpype.pipeline.publish.lib import ( + replace_with_published_scene_path +) from openpype.hosts.max.api.lib import ( get_current_renderer, get_multipass_setting @@ -20,6 +22,7 @@ from openpype.hosts.max.api.lib import ( from openpype.hosts.max.api.lib_rendersettings import RenderSettings from openpype_modules.deadline import abstract_submit_deadline from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo +from openpype.lib import is_running_from_build @attr.s @@ -110,9 +113,13 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, "AVALON_TASK", "AVALON_APP_NAME", "OPENPYPE_DEV", - "OPENPYPE_VERSION", "IS_TEST" ] + + # Add OpenPype version if we are running from build. 
+ if is_running_from_build(): + keys.append("OPENPYPE_VERSION") + # Add mongo url if it's enabled if self._instance.context.data.get("deadlinePassMongoUrl"): keys.append("OPENPYPE_MONGO") @@ -126,8 +133,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Add list of expected files to job @@ -169,7 +176,6 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, first_file = next(self._iter_expected_files(files)) output_dir = os.path.dirname(first_file) instance.data["outputDir"] = output_dir - instance.data["toBeRenderedOn"] = "deadline" filename = os.path.basename(filepath) @@ -179,20 +185,18 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, } self.log.debug("Submitting 3dsMax render..") - payload = self._use_published_name(payload_data) + project_settings = instance.context.data["project_settings"] + payload = self._use_published_name(payload_data, project_settings) job_info, plugin_info = payload self.submit(self.assemble_payload(job_info, plugin_info)) - def _use_published_name(self, data): + def _use_published_name(self, data, project_settings): instance = self._instance job_info = copy.deepcopy(self.job_info) plugin_info = copy.deepcopy(self.plugin_info) plugin_data = {} - project_setting = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] - ) - multipass = get_multipass_setting(project_setting) + multipass = get_multipass_setting(project_settings) if multipass: plugin_data["DisableMultipass"] = 0 else: @@ -233,7 +237,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, if renderer == "Redshift_Renderer": plugin_data["redshift_SeparateAovFiles"] = instance.data.get( "separateAovFiles") - + if instance.data["cameras"]: + camera = instance.data["cameras"][0] + plugin_info["Camera0"] = camera + plugin_info["Camera"] = camera + plugin_info["Camera1"] = camera self.log.debug("plugin data:{}".format(plugin_data)) plugin_info.update(plugin_data) @@ -244,7 +252,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, if instance.data["renderer"] == "Redshift_Renderer": self.log.debug("Using Redshift...published scene wont be used..") replace_in_path = False - return replace_in_path + return replace_with_published_scene_path( + instance, replace_in_path) @staticmethod def _iter_expected_files(exp): diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index a6cdcb7e71..7775191b12 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -30,8 +30,16 @@ import attr from maya import cmds -from openpype.pipeline import legacy_io - +from openpype.pipeline import ( + legacy_io, + OpenPypePyblishPluginMixin +) +from openpype.lib import ( + BoolDef, + NumberDef, + TextDef, + EnumDef +) from openpype.hosts.maya.api.lib_rendersettings import RenderSettings from openpype.hosts.maya.api.lib import get_attr_in_layer @@ -39,6 +47,7 @@ from openpype_modules.deadline import abstract_submit_deadline from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo from openpype.tests.lib import 
is_in_tests from openpype.lib import is_running_from_build +from openpype.pipeline.farm.tools import iter_expected_files def _validate_deadline_bool_value(instance, attribute, value): @@ -92,7 +101,8 @@ class ArnoldPluginInfo(object): ArnoldFile = attr.ib(default=None) -class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): +class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, + OpenPypePyblishPluginMixin): label = "Submit Render to Deadline" hosts = ["maya"] @@ -106,6 +116,24 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): jobInfo = {} pluginInfo = {} group = "none" + strict_error_checking = True + + @classmethod + def apply_settings(cls, project_settings, system_settings): + settings = project_settings["deadline"]["publish"]["MayaSubmitDeadline"] # noqa + + # Take some defaults from settings + cls.asset_dependencies = settings.get("asset_dependencies", + cls.asset_dependencies) + cls.import_reference = settings.get("import_reference", + cls.import_reference) + cls.use_published = settings.get("use_published", cls.use_published) + cls.priority = settings.get("priority", cls.priority) + cls.tile_priority = settings.get("tile_priority", cls.tile_priority) + cls.limit = settings.get("limit", cls.limit) + cls.group = settings.get("group", cls.group) + cls.strict_error_checking = settings.get("strict_error_checking", + cls.strict_error_checking) def get_job_info(self): job_info = DeadlineJobInfo(Plugin="MayaBatch") @@ -151,6 +179,19 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): if self.limit: job_info.LimitGroups = ",".join(self.limit) + attr_values = self.get_attr_values_from_data(instance.data) + render_globals = instance.data.setdefault("renderGlobals", dict()) + machine_list = attr_values.get("machineList", "") + if machine_list: + if attr_values.get("whitelist", True): + machine_list_key = "Whitelist" + else: + machine_list_key = "Blacklist" + render_globals[machine_list_key] = machine_list + + job_info.Priority = attr_values.get("priority") + job_info.ChunkSize = attr_values.get("chunkSize") + # Add options from RenderGlobals render_globals = instance.data.get("renderGlobals", {}) job_info.update(render_globals) @@ -185,8 +226,8 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Adding file dependencies. 
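The environment handling above repeats across the submitters in this patch: a list of allowed keys is filtered against `os.environ`, then `add_render_job_env_var()` (defined in `abstract_submit_deadline.py` earlier in this patch) marks the job for the render event plugins. A hedged sketch of the resulting job environment entries follows; the `ayon_mode` flag and the sample values are illustrative only, the patch itself branches on the `AYON_SERVER_ENABLED` constant:

```python
import os

ayon_mode = False  # illustrative stand-in for AYON_SERVER_ENABLED

env = {"AVALON_PROJECT": "demo_project", "OPENPYPE_LOG_NO_COLORS": "1"}
if ayon_mode:
    env["AYON_RENDER_JOB"] = "1"
    env["AYON_BUNDLE_NAME"] = os.environ.get("AYON_BUNDLE_NAME", "demo")
else:
    env["OPENPYPE_RENDER_JOB"] = "1"

# Deadline job info payloads carry the environment as indexed entries,
# which is how the Fusion submitter in this patch serializes them:
for index, (key, value) in enumerate(env.items()):
    print("EnvironmentKeyValue{}={}={}".format(index, key, value))
# EnvironmentKeyValue0=AVALON_PROJECT=demo_project
# EnvironmentKeyValue1=OPENPYPE_LOG_NO_COLORS=1
# EnvironmentKeyValue2=OPENPYPE_RENDER_JOB=1
```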
@@ -198,7 +239,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         # Add list of expected files to job
         # ---------------------------------
         exp = instance.data.get("expectedFiles")
-        for filepath in self._iter_expected_files(exp):
+        for filepath in iter_expected_files(exp):
             job_info.OutputDirectory += os.path.dirname(filepath)
             job_info.OutputFilename += os.path.basename(filepath)
 
@@ -223,8 +264,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
             "renderSetupIncludeLights", default_rs_include_lights)
         if rs_include_lights not in {"1", "0", True, False}:
             rs_include_lights = default_rs_include_lights
-        strict_error_checking = instance.data.get("strict_error_checking",
-                                                  True)
+
+        attr_values = self.get_attr_values_from_data(instance.data)
+        strict_error_checking = attr_values.get("strict_error_checking",
+                                                self.strict_error_checking)
         plugin_info = MayaPluginInfo(
             SceneFile=self.scene_path,
             Version=cmds.about(version=True),
@@ -247,30 +290,26 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
 
     def process_submission(self):
         instance = self._instance
-        context = instance.context
 
         filepath = self.scene_path  # publish if `use_publish` else workfile
 
         # TODO: Avoid the need for this logic here, needed for submit publish
         # Store output dir for unified publisher (filesequence)
         expected_files = instance.data["expectedFiles"]
-        first_file = next(self._iter_expected_files(expected_files))
+        first_file = next(iter_expected_files(expected_files))
         output_dir = os.path.dirname(first_file)
         instance.data["outputDir"] = output_dir
-        instance.data["toBeRenderedOn"] = "deadline"
 
         # Patch workfile (only when use_published is enabled)
         if self.use_published:
             self._patch_workfile()
 
         # Gather needed data ------------------------------------------------
-        workspace = context.data["workspaceDir"]
-        default_render_file = instance.context.data.get('project_settings')\
-            .get('maya')\
-            .get('RenderSettings')\
-            .get('default_render_image_folder')
         filename = os.path.basename(filepath)
-        dirname = os.path.join(workspace, default_render_file)
+        dirname = os.path.join(
+            cmds.workspace(query=True, rootDirectory=True),
+            cmds.workspace(fileRuleEntry="images")
+        )
 
         # Fill in common data to payload ------------------------------------
         # TODO: Replace these with collected data from CollectRender
@@ -292,12 +331,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
 
             payload = self._get_vray_render_payload(payload_data)
 
-        elif "assscene" in instance.data["families"]:
-            self.log.debug("Submitting Arnold .ass standalone render..")
-            ass_export_payload = self._get_arnold_export_payload(payload_data)
-            export_job = self.submit(ass_export_payload)
-
-            payload = self._get_arnold_render_payload(payload_data)
         else:
             self.log.debug("Submitting MayaBatch render..")
             payload = self._get_maya_payload(payload_data)
@@ -391,7 +424,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
             new_job_info.update(tiles_data["JobInfo"])
             new_plugin_info.update(tiles_data["PluginInfo"])
 
-            self.log.info("hashing {} - {}".format(file_index, file))
+            self.log.debug("hashing {} - {}".format(file_index, file))
             job_hash = hashlib.sha256(
                 ("{}_{}".format(file_index, file)).encode("utf-8"))
 
@@ -407,7 +440,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
             )
             file_index += 1
 
-        self.log.info(
+        self.log.debug(
             "Submitting tile job(s) [{}] ...".format(len(frame_payloads)))
 
         # Submit frame tile jobs
@@ -422,11 +455,13 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         assembly_job_info.Name += " - Tile Assembly Job"
         assembly_job_info.Frames = 1
         assembly_job_info.MachineLimit = 1
-        assembly_job_info.Priority = instance.data.get(
-            "tile_priority", self.tile_priority
-        )
+
+        attr_values = self.get_attr_values_from_data(instance.data)
+        assembly_job_info.Priority = attr_values.get("tile_priority",
+                                                     self.tile_priority)
         assembly_job_info.TileJob = False
+
+        # TODO: This should be a new publisher attribute definition
         pool = instance.context.data["project_settings"]["deadline"]
         pool = pool["publish"]["ProcessSubmittedJobOnFarm"]["deadline_pool"]
         assembly_job_info.Pool = pool or instance.data.get("primaryPool", "")
@@ -515,11 +550,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         assembly_job_ids = []
         num_assemblies = len(assembly_payloads)
         for i, payload in enumerate(assembly_payloads):
-            self.log.info(
+            self.log.debug(
                 "submitting assembly job {} of {}".format(i + 1,
                                                           num_assemblies)
             )
-            self.log.info(payload)
             assembly_job_id = self.submit(payload)
             assembly_job_ids.append(assembly_job_id)
 
@@ -592,53 +626,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
 
         return job_info, attr.asdict(plugin_info)
 
-    def _get_arnold_export_payload(self, data):
-
-        try:
-            from openpype.scripts import export_maya_ass_job
-        except Exception:
-            raise AssertionError(
-                "Expected module 'export_maya_ass_job' to be available")
-
-        module_path = export_maya_ass_job.__file__
-        if module_path.endswith(".pyc"):
-            module_path = module_path[: -len(".pyc")] + ".py"
-
-        script = os.path.normpath(module_path)
-
-        job_info = copy.deepcopy(self.job_info)
-        job_info.Name = self._job_info_label("Export")
-
-        # Force a single frame Python job
-        job_info.Plugin = "Python"
-        job_info.Frames = 1
-
-        renderlayer = self._instance.data["setMembers"]
-
-        # add required env vars for the export script
-        envs = {
-            "AVALON_APP_NAME": os.environ.get("AVALON_APP_NAME"),
-            "OPENPYPE_ASS_EXPORT_RENDER_LAYER": renderlayer,
-            "OPENPYPE_ASS_EXPORT_SCENE_FILE": self.scene_path,
-            "OPENPYPE_ASS_EXPORT_OUTPUT": job_info.OutputFilename[0],
-            "OPENPYPE_ASS_EXPORT_START": int(self._instance.data["frameStartHandle"]),  # noqa
-            "OPENPYPE_ASS_EXPORT_END": int(self._instance.data["frameEndHandle"]),  # noqa
-            "OPENPYPE_ASS_EXPORT_STEP": 1
-        }
-        for key, value in envs.items():
-            if not value:
-                continue
-            job_info.EnvironmentKeyValue[key] = value
-
-        plugin_info = PythonPluginInfo(
-            ScriptFile=script,
-            Version="3.6",
-            Arguments="",
-            SingleFrameOnly="True"
-        )
-
-        return job_info, attr.asdict(plugin_info)
-
     def _get_vray_render_payload(self, data):
 
         # Job Info
@@ -772,16 +759,43 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
             end=int(self._instance.data["frameEndHandle"]),
         )
 
-    @staticmethod
-    def _iter_expected_files(exp):
-        if isinstance(exp[0], dict):
-            for _aov, files in exp[0].items():
-                for file in files:
-                    yield file
-        else:
-            for file in exp:
-                yield file
+    @classmethod
+    def get_attribute_defs(cls):
+        defs = super(MayaSubmitDeadline, cls).get_attribute_defs()
+        defs.extend([
+            NumberDef("priority",
+                      label="Priority",
+                      default=cls.default_priority,
+                      decimals=0),
+            NumberDef("chunkSize",
+                      label="Frames Per Task",
+                      default=1,
+                      decimals=0,
+                      minimum=1,
+                      maximum=1000),
+            TextDef("machineList",
+                    label="Machine List",
+                    default="",
+                    placeholder="machine1,machine2"),
+            EnumDef("whitelist",
+                    label="Machine List (Allow/Deny)",
+                    items={
+                        True: "Allow List",
+                        False: "Deny List",
+                    },
+                    default=False),
+            NumberDef("tile_priority",
+                      label="Tile Assembler Priority",
+                      decimals=0,
+                      default=cls.tile_priority),
+            BoolDef("strict_error_checking",
                    label="Strict Error Checking",
+                    default=cls.strict_error_checking),
+
+        ])
+
+        return defs
 
 
 def _format_tiles(
         filename,
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
index 25f859554f..0d23f44333 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
@@ -1,18 +1,34 @@
 import os
-import requests
+import attr
 from datetime import datetime
 
 from maya import cmds
 
+from openpype import AYON_SERVER_ENABLED
 from openpype.pipeline import legacy_io, PublishXmlValidationError
-from openpype.settings import get_project_settings
 from openpype.tests.lib import is_in_tests
 from openpype.lib import is_running_from_build
+from openpype_modules.deadline import abstract_submit_deadline
+from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
 
 import pyblish.api
 
 
-class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
+@attr.s
+class MayaPluginInfo(object):
+    Build = attr.ib(default=None)  # Don't force build
+    StrictErrorChecking = attr.ib(default=True)
+
+    SceneFile = attr.ib(default=None)  # Input scene
+    Version = attr.ib(default=None)  # Mandatory for Deadline
+    ProjectPath = attr.ib(default=None)
+
+    ScriptJob = attr.ib(default=True)
+    ScriptFilename = attr.ib(default=None)
+
+
+class MayaSubmitRemotePublishDeadline(
+        abstract_submit_deadline.AbstractSubmitDeadline):
     """Submit Maya scene to perform a local publish in Deadline.
 
     Publishing in Deadline can be helpful for scenes that publish very slow.
@@ -36,13 +52,6 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
     targets = ["local"]
 
     def process(self, instance):
-        project_name = instance.context.data["projectName"]
-        # TODO settings can be received from 'context.data["project_settings"]'
-        settings = get_project_settings(project_name)
-        # use setting for publish job on farm, no reason to have it separately
-        deadline_publish_job_sett = (settings["deadline"]
-                                     ["publish"]
-                                     ["ProcessSubmittedJobOnFarm"])
 
         # Ensure no errors so far
         if not (all(result["success"]
@@ -54,54 +63,39 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
                 "Skipping submission..")
             return
 
+        super(MayaSubmitRemotePublishDeadline, self).process(instance)
+
+    def get_job_info(self):
+        instance = self._instance
+        context = instance.context
+
+        project_name = instance.context.data["projectName"]
         scene = instance.context.data["currentFile"]
         scenename = os.path.basename(scene)
 
         job_name = "{scene} [PUBLISH]".format(scene=scenename)
         batch_name = "{code} - {scene}".format(code=project_name,
                                                scene=scenename)
+
         if is_in_tests():
             batch_name += datetime.now().strftime("%d%m%Y%H%M%S")
 
-        # Generate the payload for Deadline submission
-        payload = {
-            "JobInfo": {
-                "Plugin": "MayaBatch",
-                "BatchName": batch_name,
-                "Name": job_name,
-                "UserName": instance.context.data["user"],
-                "Comment": instance.context.data.get("comment", ""),
-                # "InitialStatus": state
-                "Department": deadline_publish_job_sett["deadline_department"],
-                "ChunkSize": deadline_publish_job_sett["deadline_chunk_size"],
-                "Priority": deadline_publish_job_sett["deadline_priority"],
-                "Group": deadline_publish_job_sett["deadline_group"],
-                "Pool": deadline_publish_job_sett["deadline_pool"],
-            },
-            "PluginInfo": {
+        job_info = DeadlineJobInfo(Plugin="MayaBatch")
+        job_info.BatchName = batch_name
+        job_info.Name = job_name
+        job_info.UserName = context.data.get("user")
+        job_info.Comment = context.data.get("comment", "")
 
-                "Build": None,  # Don't force build
-                "StrictErrorChecking": True,
-                "ScriptJob": True,
+        # use setting for publish job on farm, no reason to have it separately
+        project_settings = context.data["project_settings"]
+        deadline_publish_job_sett = project_settings["deadline"]["publish"]["ProcessSubmittedJobOnFarm"]  # noqa
+        job_info.Department = deadline_publish_job_sett["deadline_department"]
+        job_info.ChunkSize = deadline_publish_job_sett["deadline_chunk_size"]
+        job_info.Priority = deadline_publish_job_sett["deadline_priority"]
+        job_info.Group = deadline_publish_job_sett["deadline_group"]
+        job_info.Pool = deadline_publish_job_sett["deadline_pool"]
 
-                # Inputs
-                "SceneFile": scene,
-                "ScriptFilename": "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py",  # noqa
-
-                # Mandatory for Deadline
-                "Version": cmds.about(version=True),
-
-                # Resolve relative references
-                "ProjectPath": cmds.workspace(query=True,
-                                              rootDirectory=True),
-
-            },
-
-            # Mandatory for Deadline, may be empty
-            "AuxFiles": []
-        }
-
-        # Include critical environment variables with submission + api.Session
+        # Include critical environment variables with submission + Session
         keys = [
             "FTRACK_API_USER",
             "FTRACK_API_KEY",
@@ -117,29 +111,30 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
 
         # TODO replace legacy_io with context.data
         environment["AVALON_PROJECT"] = project_name
-        environment["AVALON_ASSET"] = legacy_io.Session["AVALON_ASSET"]
-        environment["AVALON_TASK"] = legacy_io.Session["AVALON_TASK"]
+        environment["AVALON_ASSET"] = instance.context.data["asset"]
+        environment["AVALON_TASK"] = instance.context.data["task"]
         environment["AVALON_APP_NAME"] = os.environ.get("AVALON_APP_NAME")
         environment["OPENPYPE_LOG_NO_COLORS"] = "1"
-        environment["OPENPYPE_REMOTE_JOB"] = "1"
         environment["OPENPYPE_USERNAME"] = instance.context.data["user"]
         environment["OPENPYPE_PUBLISH_SUBSET"] = instance.data["subset"]
         environment["OPENPYPE_REMOTE_PUBLISH"] = "1"
 
-        payload["JobInfo"].update({
-            "EnvironmentKeyValue%d" % index: "{key}={value}".format(
-                key=key,
-                value=environment[key]
-            ) for index, key in enumerate(environment)
-        })
+        if AYON_SERVER_ENABLED:
+            environment["AYON_REMOTE_PUBLISH"] = "1"
+        else:
+            environment["OPENPYPE_REMOTE_PUBLISH"] = "1"
 
+        for key, value in environment.items():
+            job_info.EnvironmentKeyValue[key] = value
 
-        self.log.info("Submitting Deadline job ...")
-        deadline_url = instance.context.data["defaultDeadline"]
-        # if custom one is set in instance, use that
-        if instance.data.get("deadlineUrl"):
-            deadline_url = instance.data.get("deadlineUrl")
-        assert deadline_url, "Requires Deadline Webservice URL"
-        url = "{}/api/jobs".format(deadline_url)
-        response = requests.post(url, json=payload, timeout=10)
-        if not response.ok:
-            raise Exception(response.text)
+    def get_plugin_info(self):
+
+        scene = self._instance.context.data["currentFile"]
+
+        plugin_info = MayaPluginInfo()
+        plugin_info.SceneFile = scene
+        plugin_info.ScriptFilename = "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py"  # noqa
+        plugin_info.Version = cmds.about(version=True)
+        plugin_info.ProjectPath = cmds.workspace(query=True,
+                                                 rootDirectory=True)
+
+        return attr.asdict(plugin_info)
diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
index 4900231783..0295c2b760 100644
--- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
@@ -8,6 +8,8 @@ import requests
 import pyblish.api
 
 import nuke
+
+from openpype import AYON_SERVER_ENABLED
 from openpype.pipeline import legacy_io
 from openpype.pipeline.publish import (
     OpenPypePyblishPluginMixin
@@ -88,7 +90,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
         if not instance.data.get("farm"):
             self.log.debug("Skipping local instance.")
             return
-
         instance.data["attributeValues"] = self.get_attr_values_from_data(
             instance.data)
 
@@ -96,7 +97,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
         instance.data["suspend_publish"] = instance.data["attributeValues"][
             "suspend_publish"]
 
-        instance.data["toBeRenderedOn"] = "deadline"
         families = instance.data["families"]
 
         node = instance.data["transientData"]["node"]
@@ -121,13 +121,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
         render_path = instance.data['path']
         script_path = context.data["currentFile"]
 
-        for item in context:
-            if "workfile" in item.data["families"]:
-                msg = "Workfile (scene) must be published along"
-                assert item.data["publish"] is True, msg
-
-                template_data = item.data.get("anatomyData")
-                rep = item.data.get("representations")[0].get("name")
+        for item_ in context:
+            if "workfile" in item_.data["family"]:
+                template_data = item_.data.get("anatomyData")
+                rep = item_.data.get("representations")[0].get("name")
                 template_data["representation"] = rep
                 template_data["ext"] = rep
                 template_data["comment"] = None
@@ -139,19 +136,24 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
             "Using published scene for render {}".format(script_path)
         )
 
-        response = self.payload_submit(
-            instance,
-            script_path,
-            render_path,
-            node.name(),
-            submit_frame_start,
-            submit_frame_end
-        )
-        # Store output dir for unified publisher (filesequence)
-        instance.data["deadlineSubmissionJob"] = response.json()
-        instance.data["outputDir"] = os.path.dirname(
-            render_path).replace("\\", "/")
-        instance.data["publishJobState"] = "Suspended"
+        # only add main rendering job if target is not frames_farm
+        r_job_response_json = None
+        if instance.data["render_target"] != "frames_farm":
+            r_job_response = self.payload_submit(
+                instance,
+                script_path,
+                render_path,
+                node.name(),
+                submit_frame_start,
+                submit_frame_end
+            )
+            r_job_response_json = r_job_response.json()
+            instance.data["deadlineSubmissionJob"] = r_job_response_json
+
+            # Store output dir for unified publisher (filesequence)
+            instance.data["outputDir"] = os.path.dirname(
+                render_path).replace("\\", "/")
+            instance.data["publishJobState"] = "Suspended"
 
         if instance.data.get("bakingNukeScripts"):
             for baking_script in instance.data["bakingNukeScripts"]:
@@ -159,18 +161,20 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
                 script_path = baking_script["bakeScriptPath"]
                 exe_node_name = baking_script["bakeWriteNodeName"]
 
-                resp = self.payload_submit(
+                b_job_response = self.payload_submit(
                     instance,
                     script_path,
                     render_path,
                    exe_node_name,
                     submit_frame_start,
                     submit_frame_end,
-                    response.json()
+                    r_job_response_json,
+                    baking_submission=True
                 )
                 # Store output dir for unified publisher (filesequence)
-                instance.data["deadlineSubmissionJob"] = resp.json()
+                instance.data["deadlineSubmissionJob"] = b_job_response.json()
+
                 instance.data["publishJobState"] = "Suspended"
 
                 # add to list of job Id
@@ -178,7 +182,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
                     instance.data["bakingSubmissionJobs"] = []
 
                 instance.data["bakingSubmissionJobs"].append(
-                    resp.json()["_id"])
+                    b_job_response.json()["_id"])
 
         # redefinition of families
         if "render" in instance.data["family"]:
@@ -197,15 +201,35 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
         exe_node_name,
         start_frame,
         end_frame,
-        response_data=None
+        response_data=None,
+        baking_submission=False,
     ):
+        """Submit payload to Deadline
+
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+            script_path (str): path to nuke script
+            render_path (str): path to rendered images
+            exe_node_name (str): name of the node to render
+            start_frame (int): start frame
+            end_frame (int): end frame
+            response_data (Optional[dict]): response data from
+                previous submission
+            baking_submission (Optional[bool]): if it's a baking submission
+
+        Returns:
+            requests.Response
+        """
         render_dir = os.path.normpath(os.path.dirname(render_path))
-        batch_name = os.path.basename(script_path)
-        jobname = "%s - %s" % (batch_name, instance.name)
+
+        # batch name
+        src_filepath = instance.context.data["currentFile"]
+        batch_name = os.path.basename(src_filepath)
+        job_name = os.path.basename(render_path)
+
         if is_in_tests():
             batch_name += datetime.now().strftime("%d%m%Y%H%M%S")
-        output_filename_0 = self.preview_fname(render_path)
 
         if not response_data:
@@ -219,18 +243,15 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
 
         # resolve any limit groups
         limit_groups = self.get_limit_groups()
-        self.log.info("Limit groups: `{}`".format(limit_groups))
+        self.log.debug("Limit groups: `{}`".format(limit_groups))
 
         payload = {
             "JobInfo": {
                 # Top-level group name
                 "BatchName": batch_name,
 
-                # Asset dependency to wait for at least the scene file to sync.
-                # "AssetDependency0": script_path,
-
-                # Job name, as seen in Monitor
-                "Name": jobname,
+                "Name": job_name,
 
                 # Arbitrary username, for visualisation in Monitor
                 "UserName": self._deadline_user,
@@ -292,12 +313,17 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
             "AuxFiles": []
         }
 
-        if response_data.get("_id"):
+        # TODO: rewrite for baking with sequences
+        if baking_submission:
             payload["JobInfo"].update({
                 "JobType": "Normal",
+                "ChunkSize": 99999999
+            })
+
+        if response_data.get("_id"):
+            payload["JobInfo"].update({
                 "BatchName": response_data["Props"]["Batch"],
                 "JobDependency0": response_data["_id"],
-                "ChunkSize": 99999999
             })
 
         # Include critical environment variables with submission
@@ -337,8 +363,14 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
             if _path.lower().startswith('openpype_'):
                 environment[_path] = os.environ[_path]
 
-        # to recognize job from PYPE for turning Event On/Off
-        environment["OPENPYPE_RENDER_JOB"] = "1"
+        # to recognize render jobs
+        if AYON_SERVER_ENABLED:
+            environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"]
+            render_job_label = "AYON_RENDER_JOB"
+        else:
+            render_job_label = "OPENPYPE_RENDER_JOB"
+
+        environment[render_job_label] = "1"
 
         # finally search replace in values of any key
         if self.env_search_replace_values:
@@ -354,10 +386,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
             })
 
         plugin = payload["JobInfo"]["Plugin"]
-        self.log.info("using render plugin : {}".format(plugin))
+        self.log.debug("using render plugin : {}".format(plugin))
 
-        self.log.info("Submitting..")
-        self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+        self.log.debug("Submitting..")
+        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
         # adding expected files to instance.data
         self.expected_files(
diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
index 292fe58cca..6ed5819f2b 100644
--- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py
+++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
@@ -1,56 +1,30 @@
 # -*- coding: utf-8 -*-
 """Submit publishing job to farm."""
-
 import os
 import json
 import re
-from copy import copy, deepcopy
+from copy import deepcopy
 
 import requests
 import clique
 
 import pyblish.api
 
+from openpype import AYON_SERVER_ENABLED
 from openpype.client import (
     get_last_version_by_subset_name,
-    get_representations,
-)
-from openpype.pipeline import (
-    get_representation_path,
-    legacy_io,
 )
+from openpype.pipeline import publish, legacy_io
+from openpype.lib import EnumDef, is_running_from_build
 from openpype.tests.lib import is_in_tests
-from openpype.pipeline.farm.patterning import match_aov_pattern
-from openpype.lib import is_running_from_build
-from openpype.pipeline import publish
+from openpype.pipeline.version_start import get_versioning_start
 
-
-def get_resources(project_name, version, extension=None):
-    """Get the files from the specific version."""
-
-    # TODO this functions seems to be weird
-    # - it's looking for representation with one extension or first (any)
-    #   representation from a version?
-    #   - not sure how this should work, maybe it does for specific use cases
-    #     but probably can't be used for all resources from 2D workflows
-    extensions = None
-    if extension:
-        extensions = [extension]
-    repre_docs = list(get_representations(
-        project_name, version_ids=[version["_id"]], extensions=extensions
-    ))
-    assert repre_docs, "This is a bug"
-
-    representation = repre_docs[0]
-    directory = get_representation_path(representation)
-    print("Source: ", directory)
-    resources = sorted(
-        [
-            os.path.normpath(os.path.join(directory, fname))
-            for fname in os.listdir(directory)
-        ]
-    )
-
-    return resources
+from openpype.pipeline.farm.pyblish_functions import (
+    create_skeleton_instance,
+    create_instances_for_aov,
+    attach_instances_to_subset,
+    prepare_representations,
+    create_metadata_path
+)
 
 
 def get_resource_files(resources, frame_range=None):
@@ -81,6 +55,7 @@ def get_resource_files(resources, frame_range=None):
 
 
 class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
+                                publish.OpenPypePyblishPluginMixin,
                                 publish.ColormanagedPyblishPluginMixin):
     """Process Job submitted on farm.
 
@@ -117,13 +92,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
     label = "Submit image sequence jobs to Deadline or Muster"
     order = pyblish.api.IntegratorOrder + 0.2
     icon = "tractor"
-    deadline_plugin = "OpenPype"
+
     targets = ["local"]
 
     hosts = ["fusion", "max", "maya", "nuke", "houdini",
-             "celaction", "aftereffects", "harmony"]
+             "celaction", "aftereffects", "harmony", "blender"]
 
-    families = ["render.farm", "prerender.farm",
+    families = ["render.farm", "render.frames_farm",
+                "prerender.farm", "prerender.frames_farm",
                 "renderlayer", "imagesequence",
                 "vrayscene", "maxrender",
                 "arnold_rop", "mantra_rop",
@@ -131,6 +107,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                 "redshift_rop"]
 
     aov_filter = {"maya": [r".*([Bb]eauty).*"],
+                  "blender": [r".*([Bb]eauty).*"],
                   "aftereffects": [r".*"],  # for everything from AE
                   "harmony": [r".*"],  # for everything from AE
                   "celaction": [r".*"],
@@ -146,13 +123,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         "FTRACK_SERVER",
         "AVALON_APP_NAME",
         "OPENPYPE_USERNAME",
-        "OPENPYPE_SG_USER"
+        "OPENPYPE_SG_USER",
+        "KITSU_LOGIN",
+        "KITSU_PWD"
     ]
 
-    # Add OpenPype version if we are running from build.
-    if is_running_from_build():
-        environ_keys.append("OPENPYPE_VERSION")
-
     # custom deadline attributes
     deadline_department = ""
     deadline_pool = ""
@@ -183,36 +158,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
     # poor man exclusion
     skip_integration_repre_list = []
 
-    def _create_metadata_path(self, instance):
-        ins_data = instance.data
-        # Ensure output dir exists
-        output_dir = ins_data.get(
-            "publishRenderMetadataFolder", ins_data["outputDir"])
-
-        try:
-            if not os.path.isdir(output_dir):
-                os.makedirs(output_dir)
-        except OSError:
-            # directory is not available
-            self.log.warning("Path is unreachable: `{}`".format(output_dir))
-
-        metadata_filename = "{}_metadata.json".format(ins_data["subset"])
-
-        metadata_path = os.path.join(output_dir, metadata_filename)
-
-        # Convert output dir to `{root}/rest/of/path/...` with Anatomy
-        success, rootless_mtdt_p = self.anatomy.find_root_template_from_path(
-            metadata_path)
-        if not success:
-            # `rootless_path` is not set to `output_dir` if none of roots match
-            self.log.warning((
-                "Could not find root path for remapping \"{}\"."
-                " This may cause issues on farm."
-            ).format(output_dir))
-            rootless_mtdt_p = metadata_path
-
-        return metadata_path, rootless_mtdt_p
-
     def _submit_deadline_post_job(self, instance, job, instances):
         """Submit publish job to Deadline.
 
@@ -227,38 +172,54 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         subset = data["subset"]
         job_name = "Publish - {subset}".format(subset=subset)
 
+        anatomy = instance.context.data['anatomy']
+
         # instance.data.get("subset") != instances[0]["subset"]
         # 'Main' vs 'renderMain'
         override_version = None
         instance_version = instance.data.get("version")  # take this if exists
         if instance_version != 1:
             override_version = instance_version
+
         output_dir = self._get_publish_folder(
-            instance.context.data['anatomy'],
+            anatomy,
             deepcopy(instance.data["anatomyData"]),
             instance.data.get("asset"),
             instances[0]["subset"],
-            'render',
+            instance.context,
+            instances[0]["family"],
             override_version
         )
 
         # Transfer the environment from the original job to this dependent
         # job so they use the same environment
         metadata_path, rootless_metadata_path = \
-            self._create_metadata_path(instance)
+            create_metadata_path(instance, anatomy)
 
         environment = {
-            "AVALON_PROJECT": legacy_io.Session["AVALON_PROJECT"],
-            "AVALON_ASSET": legacy_io.Session["AVALON_ASSET"],
-            "AVALON_TASK": legacy_io.Session["AVALON_TASK"],
+            "AVALON_PROJECT": instance.context.data["projectName"],
+            "AVALON_ASSET": instance.context.data["asset"],
+            "AVALON_TASK": instance.context.data["task"],
             "OPENPYPE_USERNAME": instance.context.data["user"],
-            "OPENPYPE_PUBLISH_JOB": "1",
-            "OPENPYPE_RENDER_JOB": "0",
-            "OPENPYPE_REMOTE_JOB": "0",
             "OPENPYPE_LOG_NO_COLORS": "1",
             "IS_TEST": str(int(is_in_tests()))
         }
 
+        if AYON_SERVER_ENABLED:
+            environment["AYON_PUBLISH_JOB"] = "1"
+            environment["AYON_RENDER_JOB"] = "0"
+            environment["AYON_REMOTE_PUBLISH"] = "0"
+            environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"]
+            deadline_plugin = "Ayon"
+        else:
+            environment["OPENPYPE_PUBLISH_JOB"] = "1"
+            environment["OPENPYPE_RENDER_JOB"] = "0"
+            environment["OPENPYPE_REMOTE_PUBLISH"] = "0"
+            deadline_plugin = "OpenPype"
+            # Add OpenPype version if we are running from build.
+            if is_running_from_build():
+                self.environ_keys.append("OPENPYPE_VERSION")
+
         # add environments from self.environ_keys
         for env_key in self.environ_keys:
             if os.getenv(env_key):
@@ -278,6 +239,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
 
         priority = self.deadline_priority or instance.data.get("priority", 50)
 
+        instance_settings = self.get_attr_values_from_data(instance.data)
+        initial_status = instance_settings.get("publishJobState", "Active")
+        # TODO: Remove this backwards compatibility of `suspend_publish`
+        if instance.data.get("suspend_publish"):
+            initial_status = "Suspended"
+
         args = [
             "--headless",
             'publish',
@@ -295,7 +262,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         )
         payload = {
             "JobInfo": {
-                "Plugin": self.deadline_plugin,
+                "Plugin": deadline_plugin,
                 "BatchName": job["Props"]["Batch"],
                 "Name": job_name,
                 "UserName": job["Props"]["User"],
@@ -304,6 +271,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                 "Department": self.deadline_department,
                 "ChunkSize": self.deadline_chunk_size,
                 "Priority": priority,
+                "InitialStatus": initial_status,
 
                 "Group": self.deadline_group,
                 "Pool": self.deadline_pool or instance.data.get("primaryPool"),
@@ -325,20 +293,19 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             self.log.info("Adding tile assembly jobs as dependencies...")
             job_index = 0
             for assembly_id in instance.data.get("assemblySubmissionJobs"):
-                payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id  # noqa: E501
+                payload["JobInfo"]["JobDependency{}".format(
+                    job_index)] = assembly_id  # noqa: E501
                 job_index += 1
         elif instance.data.get("bakingSubmissionJobs"):
             self.log.info("Adding baking submission jobs as dependencies...")
             job_index = 0
             for assembly_id in instance.data["bakingSubmissionJobs"]:
-                payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id  # noqa: E501
+                payload["JobInfo"]["JobDependency{}".format(
+                    job_index)] = assembly_id  # noqa: E501
                 job_index += 1
-        else:
+        elif job.get("_id"):
             payload["JobInfo"]["JobDependency0"] = job["_id"]
 
-        if instance.data.get("suspend_publish"):
-            payload["JobInfo"]["InitialStatus"] = "Suspended"
-
         for index, (key_, value_) in enumerate(environment.items()):
             payload["JobInfo"].update(
                 {
@@ -351,7 +318,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             # remove secondary pool
             payload["JobInfo"].pop("SecondaryPool", None)
 
-        self.log.info("Submitting Deadline job ...")
+        self.log.debug("Submitting Deadline publish job ...")
 
         url = "{}/api/jobs".format(self.deadline_url)
         response = requests.post(url, json=payload, timeout=10)
@@ -362,413 +329,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
 
         return deadline_publish_job_id
 
-    def _copy_extend_frames(self, instance, representation):
-        """Copy existing frames from latest version.
-
-        This will copy all existing frames from subset's latest version back
-        to render directory and rename them to what renderer is expecting.
-
-        Arguments:
-            instance (pyblish.plugin.Instance): instance to get required
-                data from
-            representation (dict): presentation to operate on
-
-        """
-        import speedcopy
-
-        self.log.info("Preparing to copy ...")
-        start = instance.data.get("frameStart")
-        end = instance.data.get("frameEnd")
-        project_name = legacy_io.active_project()
-
-        # get latest version of subset
-        # this will stop if subset wasn't published yet
-        project_name = legacy_io.active_project()
-        version = get_last_version_by_subset_name(
-            project_name,
-            instance.data.get("subset"),
-            asset_name=instance.data.get("asset")
-        )
-
-        # get its files based on extension
-        subset_resources = get_resources(
-            project_name, version, representation.get("ext")
-        )
-        r_col, _ = clique.assemble(subset_resources)
-
-        # if override remove all frames we are expecting to be rendered
-        # so we'll copy only those missing from current render
-        if instance.data.get("overrideExistingFrame"):
-            for frame in range(start, end + 1):
-                if frame not in r_col.indexes:
-                    continue
-                r_col.indexes.remove(frame)
-
-        # now we need to translate published names from representation
-        # back. This is tricky, right now we'll just use same naming
-        # and only switch frame numbers
-        resource_files = []
-        r_filename = os.path.basename(
-            representation.get("files")[0])  # first file
-        op = re.search(self.R_FRAME_NUMBER, r_filename)
-        pre = r_filename[:op.start("frame")]
-        post = r_filename[op.end("frame"):]
-        assert op is not None, "padding string wasn't found"
-        for frame in list(r_col):
-            fn = re.search(self.R_FRAME_NUMBER, frame)
-            # silencing linter as we need to compare to True, not to
-            # type
-            assert fn is not None, "padding string wasn't found"
-            # list of tuples (source, destination)
-            staging = representation.get("stagingDir")
-            staging = self.anatomy.fill_root(staging)
-            resource_files.append(
-                (frame,
-                 os.path.join(staging,
-                              "{}{}{}".format(pre,
-                                              fn.group("frame"),
-                                              post)))
-            )
-
-        # test if destination dir exists and create it if not
-        output_dir = os.path.dirname(representation.get("files")[0])
-        if not os.path.isdir(output_dir):
-            os.makedirs(output_dir)
-
-        # copy files
-        for source in resource_files:
-            speedcopy.copy(source[0], source[1])
-            self.log.info("  > {}".format(source[1]))
-
-        self.log.info(
-            "Finished copying %i files" % len(resource_files))
-
-    def _create_instances_for_aov(
-        self, instance_data, exp_files, additional_data, do_not_add_review
-    ):
-        """Create instance for each AOV found.
-
-        This will create new instance for every aov it can detect in expected
-        files list.
-
-        Arguments:
-            instance_data (pyblish.plugin.Instance): skeleton data for instance
-                (those needed) later by collector
-            exp_files (list): list of expected files divided by aovs
-            additional_data (dict):
-            do_not_add_review (bool): explicitly skip review
-
-        Returns:
-            list of instances
-
-        """
-        task = os.environ["AVALON_TASK"]
-        subset = instance_data["subset"]
-        cameras = instance_data.get("cameras", [])
-        instances = []
-        # go through aovs in expected files
-        for aov, files in exp_files[0].items():
-            cols, rem = clique.assemble(files)
-            # we shouldn't have any reminders. And if we do, it should
-            # be just one item for single frame renders.
-            if not cols and rem:
-                assert len(rem) == 1, ("Found multiple non related files "
-                                       "to render, don't know what to do "
-                                       "with them.")
-                col = rem[0]
-                ext = os.path.splitext(col)[1].lstrip(".")
-            else:
-                # but we really expect only one collection.
-                # Nothing else make sense.
-                assert len(cols) == 1, "only one image sequence type is expected"  # noqa: E501
-                ext = cols[0].tail.lstrip(".")
-                col = list(cols[0])
-
-            self.log.debug(col)
-            # create subset name `familyTaskSubset_AOV`
-            group_name = 'render{}{}{}{}'.format(
-                task[0].upper(), task[1:],
-                subset[0].upper(), subset[1:])
-
-            cam = [c for c in cameras if c in col.head]
-            if cam:
-                if aov:
-                    subset_name = '{}_{}_{}'.format(group_name, cam, aov)
-                else:
-                    subset_name = '{}_{}'.format(group_name, cam)
-            else:
-                if aov:
-                    subset_name = '{}_{}'.format(group_name, aov)
-                else:
-                    subset_name = '{}'.format(group_name)
-
-            if isinstance(col, (list, tuple)):
-                staging = os.path.dirname(col[0])
-            else:
-                staging = os.path.dirname(col)
-
-            success, rootless_staging_dir = (
-                self.anatomy.find_root_template_from_path(staging)
-            )
-            if success:
-                staging = rootless_staging_dir
-            else:
-                self.log.warning((
-                    "Could not find root path for remapping \"{}\"."
-                    " This may cause issues on farm."
-                ).format(staging))
-
-            self.log.info("Creating data for: {}".format(subset_name))
-
-            app = os.environ.get("AVALON_APP", "")
-
-            if isinstance(col, list):
-                render_file_name = os.path.basename(col[0])
-            else:
-                render_file_name = os.path.basename(col)
-            aov_patterns = self.aov_filter
-
-            preview = match_aov_pattern(app, aov_patterns, render_file_name)
-            # toggle preview on if multipart is on
-
-            if instance_data.get("multipartExr"):
-                self.log.debug("Adding preview tag because its multipartExr")
-                preview = True
-            self.log.debug("preview:{}".format(preview))
-            new_instance = deepcopy(instance_data)
-            new_instance["subset"] = subset_name
-            new_instance["subsetGroup"] = group_name
-
-            preview = preview and not do_not_add_review
-            if preview:
-                new_instance["review"] = True
-
-            # create representation
-            if isinstance(col, (list, tuple)):
-                files = [os.path.basename(f) for f in col]
-            else:
-                files = os.path.basename(col)
-
-            # Copy render product "colorspace" data to representation.
-            colorspace = ""
-            products = additional_data["renderProducts"].layer_data.products
-            for product in products:
-                if product.productName == aov:
-                    colorspace = product.colorspace
-                    break
-
-            rep = {
-                "name": ext,
-                "ext": ext,
-                "files": files,
-                "frameStart": int(instance_data.get("frameStartHandle")),
-                "frameEnd": int(instance_data.get("frameEndHandle")),
-                # If expectedFile are absolute, we need only filenames
-                "stagingDir": staging,
-                "fps": new_instance.get("fps"),
-                "tags": ["review"] if preview else [],
-                "colorspaceData": {
-                    "colorspace": colorspace,
-                    "config": {
-                        "path": additional_data["colorspaceConfig"],
-                        "template": additional_data["colorspaceTemplate"]
-                    },
-                    "display": additional_data["display"],
-                    "view": additional_data["view"]
-                }
-            }
-
-            # support conversion from tiled to scanline
-            if instance_data.get("convertToScanline"):
-                self.log.info("Adding scanline conversion.")
-                rep["tags"].append("toScanline")
-
-            # poor man exclusion
-            if ext in self.skip_integration_repre_list:
-                rep["tags"].append("delete")
-
-            self._solve_families(new_instance, preview)
-
-            new_instance["representations"] = [rep]
-
-            # if extending frames from existing version, copy files from there
-            # into our destination directory
-            if new_instance.get("extendFrames", False):
-                self._copy_extend_frames(new_instance, rep)
-            instances.append(new_instance)
-            self.log.debug("instances:{}".format(instances))
-        return instances
-
-    def _get_representations(self, instance_data, exp_files,
-                             do_not_add_review):
-        """Create representations for file sequences.
-
-        This will return representations of expected files if they are not
-        in hierarchy of aovs. There should be only one sequence of files for
-        most cases, but if not - we create representation from each of them.
-
-        Arguments:
-            instance_data (dict): instance.data for which we are
-                                  setting representations
-            exp_files (list): list of expected files
-            do_not_add_review (bool): explicitly skip review
-
-        Returns:
-            list of representations
-
-        """
-        representations = []
-        host_name = os.environ.get("AVALON_APP", "")
-        collections, remainders = clique.assemble(exp_files)
-
-        # create representation for every collected sequence
-        for collection in collections:
-            ext = collection.tail.lstrip(".")
-            preview = False
-            # TODO 'useSequenceForReview' is temporary solution which does
-            #   not work for 100% of cases. We must be able to tell what
-            #   expected files contains more explicitly and from what
-            #   should be review made.
-            # - "review" tag is never added when is set to 'False'
-            if instance_data["useSequenceForReview"]:
-                # toggle preview on if multipart is on
-                if instance_data.get("multipartExr", False):
-                    self.log.debug(
-                        "Adding preview tag because its multipartExr"
-                    )
-                    preview = True
-                else:
-                    render_file_name = list(collection)[0]
-                    # if filtered aov name is found in filename, toggle it for
-                    # preview video rendering
-                    preview = match_aov_pattern(
-                        host_name, self.aov_filter, render_file_name
-                    )
-
-            staging = os.path.dirname(list(collection)[0])
-            success, rootless_staging_dir = (
-                self.anatomy.find_root_template_from_path(staging)
-            )
-            if success:
-                staging = rootless_staging_dir
-            else:
-                self.log.warning((
-                    "Could not find root path for remapping \"{}\"."
-                    " This may cause issues on farm."
-                ).format(staging))
-
-            frame_start = int(instance_data.get("frameStartHandle"))
-            if instance_data.get("slate"):
-                frame_start -= 1
-
-            preview = preview and not do_not_add_review
-            rep = {
-                "name": ext,
-                "ext": ext,
-                "files": [os.path.basename(f) for f in list(collection)],
-                "frameStart": frame_start,
-                "frameEnd": int(instance_data.get("frameEndHandle")),
-                # If expectedFile are absolute, we need only filenames
-                "stagingDir": staging,
-                "fps": instance_data.get("fps"),
-                "tags": ["review"] if preview else [],
-            }
-
-            # poor man exclusion
-            if ext in self.skip_integration_repre_list:
-                rep["tags"].append("delete")
-
-            if instance_data.get("multipartExr", False):
-                rep["tags"].append("multipartExr")
-
-            # support conversion from tiled to scanline
-            if instance_data.get("convertToScanline"):
-                self.log.info("Adding scanline conversion.")
-                rep["tags"].append("toScanline")
-
-            representations.append(rep)
-
-            self._solve_families(instance_data, preview)
-
-        # add remainders as representations
-        for remainder in remainders:
-            ext = remainder.split(".")[-1]
-
-            staging = os.path.dirname(remainder)
-            success, rootless_staging_dir = (
-                self.anatomy.find_root_template_from_path(staging)
-            )
-            if success:
-                staging = rootless_staging_dir
-            else:
-                self.log.warning((
-                    "Could not find root path for remapping \"{}\"."
-                    " This may cause issues on farm."
-                ).format(staging))
-
-            rep = {
-                "name": ext,
-                "ext": ext,
-                "files": os.path.basename(remainder),
-                "stagingDir": staging,
-            }
-
-            preview = match_aov_pattern(
-                host_name, self.aov_filter, remainder
-            )
-            preview = preview and not do_not_add_review
-            if preview:
-                rep.update({
-                    "fps": instance_data.get("fps"),
-                    "tags": ["review"]
-                })
-            self._solve_families(instance_data, preview)
-
-            already_there = False
-            for repre in instance_data.get("representations", []):
-                # might be added explicitly before by publish_on_farm
-                already_there = repre.get("files") == rep["files"]
-                if already_there:
-                    self.log.debug("repre {} already_there".format(repre))
-                    break
-
-            if not already_there:
-                representations.append(rep)
-
-        for rep in representations:
-            # inject colorspace data
-            self.set_representation_colorspace(
-                rep, self.context,
-                colorspace=instance_data["colorspace"]
-            )
-
-        return representations
-
-    def _solve_families(self, instance, preview=False):
-        families = instance.get("families")
-
-        # if we have one representation with preview tag
-        # flag whole instance for review and for ftrack
-        if preview:
-            if "ftrack" not in families:
-                if os.environ.get("FTRACK_SERVER"):
-                    self.log.debug(
-                        "Adding \"ftrack\" to families because of preview tag."
-                    )
-                    families.append("ftrack")
-            if "review" not in families:
-                self.log.debug(
-                    "Adding \"review\" to families because of preview tag."
-                )
-                families.append("review")
-        instance["families"] = families
-
     def process(self, instance):
         # type: (pyblish.api.Instance) -> None
         """Process plugin.
 
-        Detect type of renderfarm submission and create and post dependent job
-        in case of Deadline. It creates json file with metadata needed for
+        Detect type of render farm submission and create and post dependent
+        job in case of Deadline. It creates json file with metadata needed for
         publishing in directory of render.
 
         Args:
@@ -784,7 +351,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         self.context = context
         self.anatomy = instance.context.data["anatomy"]
 
-        asset = data.get("asset") or legacy_io.Session["AVALON_ASSET"]
+        asset = data.get("asset") or context.data["asset"]
         subset = data.get("subset")
 
         start = instance.data.get("frameStart")
@@ -843,7 +410,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         do_not_add_review = False
         if data.get("review"):
             families.append("review")
-        elif data.get("review") == False:
+        elif data.get("review") is False:
             self.log.debug("Instance has review explicitly disabled.")
             do_not_add_review = True
 
@@ -871,7 +438,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             "useSequenceForReview": data.get("useSequenceForReview", True),
             # map inputVersions `ObjectId` -> `str` so json supports it
             "inputVersions": list(map(str, data.get("inputVersions", []))),
-            "colorspace": instance.data.get("colorspace")
+            "colorspace": instance.data.get("colorspace"),
+            "stagingDir_persistent": instance.data.get(
+                "stagingDir_persistent", False
+            )
         }
 
         # skip locking version if we are creating v01
@@ -921,9 +491,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         assert data.get("expectedFiles"), ("Submission from old Pype version"
                                            " - missing expectedFiles")
 
+        anatomy = instance.context.data["anatomy"]
+
+        instance_skeleton_data = create_skeleton_instance(
+            instance, families_transfer=self.families_transfer,
+            instance_transfer=self.instance_transfer)
        """
-        if content of `expectedFiles` are dictionaries, we will handle
-        it as list of AOVs, creating instance from every one of them.
+        if the content of the `expectedFiles` list are dictionaries, we will
+        handle it as a list of AOVs, creating an instance for every one of them.
 
         Example:
         --------
@@ -945,7 +520,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         This will create instances for `beauty` and `Z` subset
         adding those files to their respective representations.
 
-        If we've got only list of files, we collect all filesequences.
+        If we have only a list of files, we collect all file sequences.
         More then one doesn't probably make sense, but we'll handle
         it like creating one instance with multiple representations.
 
@@ -962,58 +537,26 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         This will result in one instance with two representations:
         `foo` and `xxx`
        """
+        do_not_add_review = False
+        if instance.data.get("review") is False:
+            self.log.debug("Instance has review explicitly disabled.")
+            do_not_add_review = True
 
-        self.log.info(data.get("expectedFiles"))
-
-        if isinstance(data.get("expectedFiles")[0], dict):
-            # we cannot attach AOVs to other subsets as we consider every
-            # AOV subset of its own.
-
-            additional_data = {
-                "renderProducts": instance.data["renderProducts"],
-                "colorspaceConfig": instance.data["colorspaceConfig"],
-                "display": instance.data["colorspaceDisplay"],
-                "view": instance.data["colorspaceView"]
-            }
-
-            # Get templated path from absolute config path.
-            anatomy = instance.context.data["anatomy"]
-            colorspaceTemplate = instance.data["colorspaceConfig"]
-            success, rootless_staging_dir = (
-                anatomy.find_root_template_from_path(colorspaceTemplate)
-            )
-            if success:
-                colorspaceTemplate = rootless_staging_dir
-            else:
-                self.log.warning((
-                    "Could not find root path for remapping \"{}\"."
-                    " This may cause issues on farm."
-                ).format(colorspaceTemplate))
-            additional_data["colorspaceTemplate"] = colorspaceTemplate
-
-            if len(data.get("attachTo")) > 0:
-                assert len(data.get("expectedFiles")[0].keys()) == 1, (
-                    "attaching multiple AOVs or renderable cameras to "
-                    "subset is not supported")
-
-            # create instances for every AOV we found in expected files.
-            # note: this is done for every AOV and every render camere (if
-            #       there are multiple renderable cameras in scene)
-            instances = self._create_instances_for_aov(
-                instance_skeleton_data,
-                data.get("expectedFiles"),
-                additional_data,
-                do_not_add_review
-            )
-            self.log.info("got {} instance{}".format(
-                len(instances),
-                "s" if len(instances) > 1 else ""))
-
+        if isinstance(instance.data.get("expectedFiles")[0], dict):
+            instances = create_instances_for_aov(
+                instance, instance_skeleton_data,
+                self.aov_filter, self.skip_integration_repre_list,
+                do_not_add_review)
         else:
-            representations = self._get_representations(
+            representations = prepare_representations(
                 instance_skeleton_data,
-                data.get("expectedFiles"),
-                do_not_add_review
+                instance.data.get("expectedFiles"),
+                anatomy,
+                self.aov_filter,
+                self.skip_integration_repre_list,
+                do_not_add_review,
+                instance.context,
+                self
             )
 
             if "representations" not in instance_skeleton_data.keys():
@@ -1023,25 +566,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             instance_skeleton_data["representations"] += representations
             instances = [instance_skeleton_data]
 
-        # if we are attaching to other subsets, create copy of existing
-        # instances, change data to match its subset and replace
-        # existing instances with modified data
+        # attach instances to subset
         if instance.data.get("attachTo"):
-            self.log.info("Attaching render to subset:")
-            new_instances = []
-            for at in instance.data.get("attachTo"):
-                for i in instances:
-                    new_i = copy(i)
-                    new_i["version"] = at.get("version")
-                    new_i["subset"] = at.get("subset")
-                    new_i["family"] = at.get("family")
-                    new_i["append"] = True
-                    # don't set subsetGroup if we are attaching
-                    new_i.pop("subsetGroup")
-                    new_instances.append(new_i)
-                    self.log.info("  - {} / v{}".format(
-                        at.get("subset"), at.get("version")))
-            instances = new_instances
+            instances = attach_instances_to_subset(
+                instance.data.get("attachTo"), instances
+            )
 
         r'''  SUBMiT PUBLiSH JOB 2 D34DLiN3
           ____
@@ -1056,11 +585,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         render_job = None
         submission_type = ""
         if instance.data.get("toBeRenderedOn") == "deadline":
-            render_job = data.pop("deadlineSubmissionJob", None)
+            render_job = instance.data.pop("deadlineSubmissionJob", None)
             submission_type = "deadline"
 
         if instance.data.get("toBeRenderedOn") == "muster":
-            render_job = data.pop("musterSubmissionJob", None)
+            render_job = instance.data.pop("musterSubmissionJob", None)
            submission_type = "muster"
 
         if not render_job and instance.data.get("tileRendering") is False:
@@ -1071,7 +600,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             import getpass
 
             render_job = {}
-            self.log.info("Faking job data ...")
+            self.log.debug("Faking job data ...")
             render_job["Props"] = {}
             # Render job doesn't exist because we do not have prior submission.
             # We still use data from it so lets fake it.
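The branch above keys purely off the shape of `expectedFiles`: a list containing one dict means per-AOV instances via `create_instances_for_aov`, a flat list means a single instance whose representations come from `prepare_representations`. A small standalone illustration of the two shapes and how `clique` (the same library this plugin imports) groups the flat form into sequences; the paths are made up:

```python
import clique

# Shape A: one dict keyed by AOV -> one new instance per AOV
expected_files_aov = [{
    "beauty": ["render/beauty.0001.exr", "render/beauty.0002.exr"],
    "Z": ["render/Z.0001.exr", "render/Z.0002.exr"],
}]

# Shape B: flat list -> one instance, representations per file sequence
expected_files_flat = ["render/main.0001.exr", "render/main.0002.exr"]

for exp in (expected_files_aov, expected_files_flat):
    # mirrors the isinstance(..., dict) dispatch in process()
    if isinstance(exp[0], dict):
        print("AOV mapping:", sorted(exp[0]))
    else:
        # clique groups numbered files into sequences plus leftovers
        collections, remainders = clique.assemble(exp)
        print("sequences:", [str(c) for c in collections])
```

Running this prints the two AOV keys for shape A and a single `render/main.%04d.exr [1-2]` collection for shape B, which is exactly the distinction the plugin relies on.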
@@ -1082,10 +611,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                 render_job["Props"]["Batch"] = instance.data.get(
                     "jobBatchName")
             else:
-                render_job["Props"]["Batch"] = os.path.splitext(
-                    os.path.basename(context.data.get("currentFile")))[0]
+                batch = os.path.splitext(os.path.basename(
+                    instance.context.data.get("currentFile")))[0]
+                render_job["Props"]["Batch"] = batch
             # User is deadline user
-            render_job["Props"]["User"] = context.data.get(
+            render_job["Props"]["User"] = instance.context.data.get(
                 "deadlineUser", getpass.getuser())
 
             render_job["Props"]["Env"] = {
@@ -1094,6 +624,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                 "FTRACK_SERVER": os.environ.get("FTRACK_SERVER"),
             }
 
+        deadline_publish_job_id = None
         if submission_type == "deadline":
             # get default deadline webservice url from deadline module
             self.deadline_url = instance.context.data["defaultDeadline"]
@@ -1111,15 +642,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
 
         # publish job file
         publish_job = {
-            "asset": asset,
-            "frameStart": start,
-            "frameEnd": end,
-            "fps": context.data.get("fps", None),
-            "source": source,
-            "user": context.data["user"],
-            "version": context.data["version"],  # this is workfile version
-            "intent": context.data.get("intent"),
-            "comment": context.data.get("comment"),
+            "asset": instance_skeleton_data["asset"],
+            "frameStart": instance_skeleton_data["frameStart"],
+            "frameEnd": instance_skeleton_data["frameEnd"],
+            "fps": instance_skeleton_data["fps"],
+            "source": instance_skeleton_data["source"],
+            "user": instance.context.data["user"],
+            "version": instance.context.data["version"],  # workfile version
+            "intent": instance.context.data.get("intent"),
+            "comment": instance.context.data.get("comment"),
             "job": render_job or None,
             "session": legacy_io.Session.copy(),
             "instances": instances
@@ -1129,7 +660,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             publish_job["deadline_publish_job_id"] = deadline_publish_job_id
 
         # add audio to metadata file if available
-        audio_file = context.data.get("audioFile")
+        audio_file = instance.context.data.get("audioFile")
         if audio_file and os.path.isfile(audio_file):
             publish_job.update({"audio": audio_file})
 
@@ -1142,57 +673,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             }
             publish_job.update({"ftrack": ftrack})
 
-        metadata_path, rootless_metadata_path = self._create_metadata_path(
-            instance)
+        metadata_path, rootless_metadata_path = \
+            create_metadata_path(instance, anatomy)
 
-        self.log.info("Writing json file: {}".format(metadata_path))
         with open(metadata_path, "w") as f:
             json.dump(publish_job, f, indent=4, sort_keys=True)
 
-    def _extend_frames(self, asset, subset, start, end):
-        """Get latest version of asset nad update frame range.
-
-        Based on minimum and maximuma values.
-
-        Arguments:
-            asset (str): asset name
-            subset (str): subset name
-            start (int): start frame
-            end (int): end frame
-
-        Returns:
-            (int, int): upddate frame start/end
-
-        """
-        # Frame comparison
-        prev_start = None
-        prev_end = None
-
-        project_name = legacy_io.active_project()
-        version = get_last_version_by_subset_name(
-            project_name,
-            subset,
-            asset_name=asset
-        )
-
-        # Set prev start / end frames for comparison
-        if not prev_start and not prev_end:
-            prev_start = version["data"]["frameStart"]
-            prev_end = version["data"]["frameEnd"]
-
-        updated_start = min(start, prev_start)
-        updated_end = max(end, prev_end)
-
-        self.log.info(
-            "Updating start / end frame : "
-            "{} - {}".format(updated_start, updated_end)
-        )
-
-        return updated_start, updated_end
-
     def _get_publish_folder(self, anatomy, template_data,
-                            asset, subset,
-                            family='render', version=None):
+                            asset, subset, context,
+                            family, version=None):
        """
             Extracted logic to pre-calculate real publish folder, which is
             calculated in IntegrateNew inside of Deadline process.
@@ -1217,8 +706,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                 be stored based on 'publish' template
        """
 
+
+        project_name = context.data["projectName"]
         if not version:
-            project_name = legacy_io.active_project()
             version = get_last_version_by_subset_name(
                 project_name,
                 subset,
@@ -1227,13 +717,32 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         if version:
             version = int(version["name"]) + 1
         else:
-            version = 1
+            version = get_versioning_start(
+                project_name,
+                template_data["app"],
+                task_name=template_data["task"]["name"],
+                task_type=template_data["task"]["type"],
+                family="render",
+                subset=subset,
+                project_settings=context.data["project_settings"]
+            )
+
+        host_name = context.data["hostName"]
+        task_info = template_data.get("task") or {}
+
+        template_name = publish.get_publish_template_name(
+            project_name,
+            host_name,
+            family,
+            task_info.get("name"),
+            task_info.get("type"),
+        )
 
         template_data["subset"] = subset
-        template_data["family"] = "render"
+        template_data["family"] = family
         template_data["version"] = version
 
-        render_templates = anatomy.templates_obj["render"]
+        render_templates = anatomy.templates_obj[template_name]
         if "folder" in render_templates:
             publish_folder = render_templates["folder"].format_strict(
                 template_data
@@ -1241,7 +750,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
         else:
             # solve deprecated situation when `folder` key is not underneath
             # `publish` anatomy
-            project_name = legacy_io.Session["AVALON_PROJECT"]
             self.log.warning((
                 "Deprecation warning: Anatomy does not have set `folder`"
                 " key underneath `publish` (in global of for project `{}`)."
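Condensed from the `_get_publish_folder` hunk above: the next version is the last published version plus one, and only when nothing was published yet does the per-project "version start" configuration apply. A sketch of that resolution order using the same imports the plugin now uses; the helper name `resolve_next_version` is illustrative:

```python
from openpype.client import get_last_version_by_subset_name
from openpype.pipeline.version_start import get_versioning_start


def resolve_next_version(project_name, subset, asset, template_data,
                         project_settings):
    """Next version: last published + 1, else the configured start value."""
    version = get_last_version_by_subset_name(
        project_name, subset, asset_name=asset)
    if version:
        return int(version["name"]) + 1
    # No prior version: honor the project's versioning-start settings
    return get_versioning_start(
        project_name,
        template_data["app"],
        task_name=template_data["task"]["name"],
        task_type=template_data["task"]["type"],
        family="render",
        subset=subset,
        project_settings=project_settings,
    )
```

The other change in the same hunk is analogous: instead of hard-coding the `render` anatomy template, `publish.get_publish_template_name` now picks the template from project, host, family, and task, so farm publishes land in the same folder a local publish would use.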
@@ -1251,3 +759,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, publish_folder = os.path.dirname(file_path) return publish_folder + + @classmethod + def get_attribute_defs(cls): + return [ + EnumDef("publishJobState", + label="Publish Job State", + items=["Active", "Suspended"], + default="Active") + ] diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py index d5016a4d82..a7b300beff 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py @@ -1,8 +1,7 @@ -import os -import requests - import pyblish.api +from openpype_modules.deadline.abstract_submit_deadline import requests_get + class ValidateDeadlineConnection(pyblish.api.InstancePlugin): """Validate Deadline Web Service is running""" @@ -10,7 +9,10 @@ class ValidateDeadlineConnection(pyblish.api.InstancePlugin): label = "Validate Deadline Web Service" order = pyblish.api.ValidatorOrder hosts = ["maya", "nuke"] - families = ["renderlayer"] + families = ["renderlayer", "render"] + + # cache + responses = {} def process(self, instance): # get default deadline webservice url from deadline module @@ -18,28 +20,16 @@ class ValidateDeadlineConnection(pyblish.api.InstancePlugin): # if custom one is set in instance, use that if instance.data.get("deadlineUrl"): deadline_url = instance.data.get("deadlineUrl") - self.log.info( - "We have deadline URL on instance {}".format( - deadline_url)) + self.log.debug( + "We have deadline URL on instance {}".format(deadline_url) + ) assert deadline_url, "Requires Deadline Webservice URL" - # Check response - response = self._requests_get(deadline_url) + if deadline_url not in self.responses: + self.responses[deadline_url] = requests_get(deadline_url) + + response = self.responses[deadline_url] assert response.ok, "Response must be ok" assert response.text.startswith("Deadline Web Service "), ( "Web service did not respond with 'Deadline Web Service'" ) - - def _requests_get(self, *args, **kwargs): - """ Wrapper for requests, disabling SSL certificate validation if - DONT_VERIFY_SSL environment variable is found. This is useful when - Deadline or Muster server are running with self-signed certificates - and their certificate is not added to trusted certificates on - client machines. - - WARNING: disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. 
- """ - if 'verify' not in kwargs: - kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa - return requests.get(*args, **kwargs) diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py index e1c0595830..949caff7d8 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py @@ -19,38 +19,64 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin, order = pyblish.api.ValidatorOrder families = ["rendering", "render.farm", + "render.frames_farm", "renderFarm", "renderlayer", "maxrender"] optional = True + # cache + pools_per_url = {} + def process(self, instance): + if not self.is_active(instance.data): + return + if not instance.data.get("farm"): self.log.debug("Skipping local instance.") return - # get default deadline webservice url from deadline module - deadline_url = instance.context.data["defaultDeadline"] - self.log.info("deadline_url::{}".format(deadline_url)) - pools = DeadlineModule.get_deadline_pools(deadline_url, log=self.log) - self.log.info("pools::{}".format(pools)) - - formatting_data = { - "pools_str": ",".join(pools) - } + deadline_url = self.get_deadline_url(instance) + pools = self.get_pools(deadline_url) + invalid_pools = {} primary_pool = instance.data.get("primaryPool") if primary_pool and primary_pool not in pools: - msg = "Configured primary '{}' not present on Deadline".format( - instance.data["primaryPool"]) - formatting_data["invalid_value_str"] = msg - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data) + invalid_pools["primary"] = primary_pool secondary_pool = instance.data.get("secondaryPool") if secondary_pool and secondary_pool not in pools: - msg = "Configured secondary '{}' not present on Deadline".format( - instance.data["secondaryPool"]) - formatting_data["invalid_value_str"] = msg - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data) + invalid_pools["secondary"] = secondary_pool + + if invalid_pools: + message = "\n".join( + "{} pool '{}' not available on Deadline".format(key.title(), + pool) + for key, pool in invalid_pools.items() + ) + raise PublishXmlValidationError( + plugin=self, + message=message, + formatting_data={"pools_str": ", ".join(pools)} + ) + + def get_deadline_url(self, instance): + # get default deadline webservice url from deadline module + deadline_url = instance.context.data["defaultDeadline"] + if instance.data.get("deadlineUrl"): + # if custom one is set in instance, use that + deadline_url = instance.data.get("deadlineUrl") + return deadline_url + + def get_pools(self, deadline_url): + if deadline_url not in self.pools_per_url: + self.log.debug( + "Querying available pools for Deadline url: {}".format( + deadline_url) + ) + pools = DeadlineModule.get_deadline_pools(deadline_url, + log=self.log) + self.log.info("Available pools: {}".format(pools)) + self.pools_per_url[deadline_url] = pools + + return self.pools_per_url[deadline_url] diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py index ff4be677e7..5d37e7357e 100644 --- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py +++ b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py @@ -20,8 +20,19 @@ class 
ValidateExpectedFiles(pyblish.api.InstancePlugin): allow_user_override = True def process(self, instance): - self.instance = instance - frame_list = self._get_frame_list(instance.data["render_job_id"]) + """Process all the nodes in the instance""" + + # get dependency jobs ids for retrieving frame list + dependent_job_ids = self._get_dependent_job_ids(instance) + + if not dependent_job_ids: + self.log.warning("No dependent jobs found for instance: {}" + "".format(instance)) + return + + # get list of frames from dependent jobs + frame_list = self._get_dependent_jobs_frames( + instance, dependent_job_ids) for repre in instance.data["representations"]: expected_files = self._get_expected_files(repre) @@ -59,7 +70,10 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): # Update the representation expected files self.log.info("Update range from actual job range " "to frame list: {}".format(frame_list)) - repre["files"] = sorted(job_expected_files) + # single item files must be string not list + repre["files"] = (sorted(job_expected_files) + if len(job_expected_files) > 1 else + list(job_expected_files)[0]) # Update the expected files expected_files = job_expected_files @@ -78,26 +92,45 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): ) ) - def _get_frame_list(self, original_job_id): + def _get_dependent_job_ids(self, instance): + """Returns list of dependent job ids from instance metadata.json + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + (list): list of dependent job ids + + """ + dependent_job_ids = [] + + # job_id collected from metadata.json + original_job_id = instance.data["render_job_id"] + + dependent_job_ids_env = os.environ.get("RENDER_JOB_IDS") + if dependent_job_ids_env: + dependent_job_ids = dependent_job_ids_env.split(',') + elif original_job_id: + dependent_job_ids = [original_job_id] + + return dependent_job_ids + + def _get_dependent_jobs_frames(self, instance, dependent_job_ids): """Returns list of frame ranges from all render job. Render job might be re-submitted so job_id in metadata.json could be invalid. GlobalJobPreload injects current job id to RENDER_JOB_IDS. Args: - original_job_id (str) + instance (pyblish.api.Instance): pyblish instance + dependent_job_ids (list): list of dependent job ids Returns: (list) """ all_frame_lists = [] - render_job_ids = os.environ.get("RENDER_JOB_IDS") - if render_job_ids: - render_job_ids = render_job_ids.split(',') - else: # fallback - render_job_ids = [original_job_id] - for job_id in render_job_ids: - job_info = self._get_job_info(job_id) + for job_id in dependent_job_ids: + job_info = self._get_job_info(instance, job_id) frame_list = job_info["Props"].get("Frames") if frame_list: all_frame_lists.extend(frame_list.split(',')) @@ -152,18 +185,25 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): return file_name_template, frame_placeholder - def _get_job_info(self, job_id): + def _get_job_info(self, instance, job_id): """Calls DL for actual job info for 'job_id' Might be different than job info saved in metadata.json if user manually changes job pre/during rendering. 
+        Args:
+            instance (pyblish.api.Instance): pyblish instance
+            job_id (str): Deadline job id
+
+        Returns:
+            (dict): Job info from Deadline
+
+        """
         # get default deadline webservice url from deadline module
-        deadline_url = self.instance.context.data["defaultDeadline"]
+        deadline_url = instance.context.data["defaultDeadline"]
         # if custom one is set in instance, use that
-        if self.instance.data.get("deadlineUrl"):
-            deadline_url = self.instance.data.get("deadlineUrl")
+        if instance.data.get("deadlineUrl"):
+            deadline_url = instance.data.get("deadlineUrl")
         assert deadline_url, "Requires Deadline Webservice URL"
 
         url = "{}/api/jobs?JobID={}".format(deadline_url, job_id)
diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico
new file mode 100644
index 0000000000..aea977a125
Binary files /dev/null and b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico differ
diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options
new file mode 100644
index 0000000000..1fbe1ef299
--- /dev/null
+++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options
@@ -0,0 +1,9 @@
+[Arguments]
+Type=string
+Label=Arguments
+Category=Python Options
+CategoryOrder=0
+Index=1
+Description=The arguments to pass to the script. If no arguments are required, leave this blank.
+Required=false
+DisableIfBlank=true
diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param
new file mode 100644
index 0000000000..8ba044ff81
--- /dev/null
+++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param
@@ -0,0 +1,35 @@
+[About]
+Type=label
+Label=About
+Category=About Plugin
+CategoryOrder=-1
+Index=0
+Default=Ayon Plugin for Deadline
+Description=Not configurable
+
+[AyonExecutable]
+Type=multilinemultifilename
+Label=Ayon Executable
+Category=Ayon Executables
+CategoryOrder=1
+Index=0
+Default=
+Description=The path to the Ayon executable. Enter alternative paths on separate lines.
+
+[AyonServerUrl]
+Type=string
+Label=Ayon Server URL
+Category=Ayon Credentials
+CategoryOrder=2
+Index=0
+Default=
+Description=URL to Ayon server
+
+[AyonApiKey]
+Type=password
+Label=Ayon API key
+Category=Ayon Credentials
+CategoryOrder=2
+Index=1
+Default=
+Description=API key for service account on Ayon Server
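The Arguments plugin-info option declared in Ayon.options above uses the token syntax that AyonDeadlinePlugin.RenderArgument in the plugin script below expands. A small, runnable illustration of that expansion, with hypothetical argument and frame values (this mirrors, but is not, the plugin code):

    import re

    args = "--headless publish <QUOTE>/tmp/metadata.json<QUOTE> --frame <STARTFRAME%4>"
    start_frame = 7
    args = re.sub(r"(?i)<QUOTE>", '"', args)
    args = re.sub(r"(?i)<STARTFRAME%(\d+)>",
                  lambda m: str(start_frame).zfill(int(m.group(1))),
                  args)
    print(args)  # --headless publish "/tmp/metadata.json" --frame 0007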
diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py
new file mode 100644
index 0000000000..2c55e7c951
--- /dev/null
+++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py
@@ -0,0 +1,156 @@
+#!/usr/bin/env python3
+
+from System.IO import Path
+from System.Text.RegularExpressions import Regex
+
+from Deadline.Plugins import PluginType, DeadlinePlugin
+from Deadline.Scripting import (
+    StringUtils,
+    FileUtils,
+    DirectoryUtils,
+    RepositoryUtils
+)
+
+import re
+import os
+import platform
+
+
+######################################################################
+# This is the function that Deadline calls to get an instance of the
+# main DeadlinePlugin class.
+######################################################################
+def GetDeadlinePlugin():
+    return AyonDeadlinePlugin()
+
+
+def CleanupDeadlinePlugin(deadlinePlugin):
+    deadlinePlugin.Cleanup()
+
+
+class AyonDeadlinePlugin(DeadlinePlugin):
+    """
+    Standalone plugin for publishing from Ayon
+
+    Calls Ayon executable 'ayon_console' using the first correctly found
+    path based on plugin configuration. Uses 'publish' command and passes
+    path to metadata json file, which contains all needed information
+    for publish process.
+    """
+    def __init__(self):
+        super().__init__()
+        self.InitializeProcessCallback += self.InitializeProcess
+        self.RenderExecutableCallback += self.RenderExecutable
+        self.RenderArgumentCallback += self.RenderArgument
+
+    def Cleanup(self):
+        for stdoutHandler in self.StdoutHandlers:
+            del stdoutHandler.HandleCallback
+
+        del self.InitializeProcessCallback
+        del self.RenderExecutableCallback
+        del self.RenderArgumentCallback
+
+    def InitializeProcess(self):
+        self.PluginType = PluginType.Simple
+        self.StdoutHandling = True
+
+        self.SingleFramesOnly = self.GetBooleanPluginInfoEntryWithDefault(
+            "SingleFramesOnly", False)
+        self.LogInfo("Single Frames Only: %s" % self.SingleFramesOnly)
+
+        self.AddStdoutHandlerCallback(
+            ".*Progress: (\d+)%.*").HandleCallback += self.HandleProgress
+
+    def RenderExecutable(self):
+        job = self.GetJob()
+
+        # set required env vars for Ayon
+        # cannot be in InitializeProcess as it is too soon
+        config = RepositoryUtils.GetPluginConfig("Ayon")
+        ayon_server_url = (
+            job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or
+            config.GetConfigEntryWithDefault("AyonServerUrl", "")
+        )
+        ayon_api_key = (
+            job.GetJobEnvironmentKeyValue("AYON_API_KEY") or
+            config.GetConfigEntryWithDefault("AyonApiKey", "")
+        )
+        ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME")
+
+        environment = {
+            "AYON_SERVER_URL": ayon_server_url,
+            "AYON_API_KEY": ayon_api_key,
+            "AYON_BUNDLE_NAME": ayon_bundle_name,
+        }
+
+        for env, val in environment.items():
+            self.SetProcessEnvironmentVariable(env, val)
+
+        exe_list = self.GetConfigEntry("AyonExecutable")
+        # clean '\ ' for MacOS pasting
+        if platform.system().lower() == "darwin":
+            exe_list = exe_list.replace("\\ ", " ")
+
+        expanded_paths = []
+        for path in exe_list.split(";"):
+            if path.startswith("~"):
+                path = os.path.expanduser(path)
+            expanded_paths.append(path)
+        exe = FileUtils.SearchFileList(";".join(expanded_paths))
+
+        if exe == "":
+            self.FailRender(
+                "Ayon executable was not found " +
+                "in the semicolon separated list " +
+                "\"" + ";".join(expanded_paths) + "\". " +
" + + "The path to the render executable can be configured " + + "from the Plugin Configuration in the Deadline Monitor.") + return exe + + def RenderArgument(self): + arguments = str(self.GetPluginInfoEntryWithDefault("Arguments", "")) + arguments = RepositoryUtils.CheckPathMapping(arguments) + + arguments = re.sub(r"<(?i)STARTFRAME>", str(self.GetStartFrame()), + arguments) + arguments = re.sub(r"<(?i)ENDFRAME>", str(self.GetEndFrame()), + arguments) + arguments = re.sub(r"<(?i)QUOTE>", "\"", arguments) + + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)STARTFRAME%([0-9]+)>", + self.GetStartFrame()) + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)ENDFRAME%([0-9]+)>", + self.GetEndFrame()) + + count = 0 + for filename in self.GetAuxiliaryFilenames(): + localAuxFile = Path.Combine(self.GetJobsDataDirectory(), filename) + arguments = re.sub(r"<(?i)AUXFILE" + str(count) + r">", + localAuxFile.replace("\\", "/"), arguments) + count += 1 + + return arguments + + def ReplacePaddedFrame(self, arguments, pattern, frame): + frameRegex = Regex(pattern) + while True: + frameMatch = frameRegex.Match(arguments) + if not frameMatch.Success: + break + paddingSize = int(frameMatch.Groups[1].Value) + if paddingSize > 0: + padding = StringUtils.ToZeroPaddedString( + frame, paddingSize, False) + else: + padding = str(frame) + arguments = arguments.replace( + frameMatch.Groups[0].Value, padding) + + return arguments + + def HandleProgress(self): + progress = float(self.GetRegexMatch(1)) + self.SetProgress(progress) diff --git a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py index 15226bb773..e9b81369ca 100644 --- a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py +++ b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py @@ -355,6 +355,13 @@ def inject_openpype_environment(deadlinePlugin): " AVALON_TASK, AVALON_APP_NAME" )) + openpype_mongo = job.GetJobEnvironmentKeyValue("OPENPYPE_MONGO") + if openpype_mongo: + # inject env var for OP extractenvironments + # SetEnvironmentVariable is important, not SetProcessEnv... + deadlinePlugin.SetEnvironmentVariable("OPENPYPE_MONGO", + openpype_mongo) + if not os.environ.get("OPENPYPE_MONGO"): print(">>> Missing OPENPYPE_MONGO env var, process won't work") @@ -378,6 +385,12 @@ def inject_openpype_environment(deadlinePlugin): for key, value in contents.items(): deadlinePlugin.SetProcessEnvironmentVariable(key, value) + if "PATH" in contents: + # Set os.environ[PATH] so studio settings' path entries + # can be used to define search path for executables. + print(f">>> Setting 'PATH' Environment to: {contents['PATH']}") + os.environ["PATH"] = contents["PATH"] + script_url = job.GetJobPluginInfoKeyValue("ScriptFilename") if script_url: script_url = script_url.format(**contents).replace("\\", "/") @@ -398,6 +411,164 @@ def inject_openpype_environment(deadlinePlugin): raise +def inject_ayon_environment(deadlinePlugin): + """ Pull env vars from Ayon and push them to rendering process. + + Used for correct paths, configuration from OpenPype etc. + """ + job = deadlinePlugin.GetJob() + + print(">>> Injecting Ayon environments ...") + try: + exe_list = get_ayon_executable() + exe = FileUtils.SearchFileList(exe_list) + + if not exe: + raise RuntimeError(( + "Ayon executable was not found in the semicolon " + "separated list \"{}\"." 
+                "The path to the render executable can be configured"
+                " from the Plugin Configuration in the Deadline Monitor."
+            ).format(exe_list))
+
+        print("--- Ayon executable: {}".format(exe))
+
+        ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME")
+        if not ayon_bundle_name:
+            raise RuntimeError("Missing env var in job properties "
+                               "AYON_BUNDLE_NAME")
+
+        config = RepositoryUtils.GetPluginConfig("Ayon")
+        ayon_server_url = (
+            job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or
+            config.GetConfigEntryWithDefault("AyonServerUrl", "")
+        )
+        ayon_api_key = (
+            job.GetJobEnvironmentKeyValue("AYON_API_KEY") or
+            config.GetConfigEntryWithDefault("AyonApiKey", "")
+        )
+
+        if not all([ayon_server_url, ayon_api_key]):
+            raise RuntimeError((
+                "Missing required values for server url and api key. "
+                "Please fill them in the Ayon Deadline plugin settings or "
+                "provide them via AYON_SERVER_URL and AYON_API_KEY."
+            ))
+
+        # tempfile.TemporaryFile cannot be used because of locking
+        temp_file_name = "{}_{}.json".format(
+            datetime.utcnow().strftime('%Y%m%d%H%M%S%f'),
+            str(uuid.uuid1())
+        )
+        export_url = os.path.join(tempfile.gettempdir(), temp_file_name)
+        print(">>> Temporary path: {}".format(export_url))
+
+        args = [
+            "--headless",
+            "extractenvironments",
+            export_url
+        ]
+
+        add_kwargs = {
+            "project": job.GetJobEnvironmentKeyValue("AVALON_PROJECT"),
+            "asset": job.GetJobEnvironmentKeyValue("AVALON_ASSET"),
+            "task": job.GetJobEnvironmentKeyValue("AVALON_TASK"),
+            "app": job.GetJobEnvironmentKeyValue("AVALON_APP_NAME"),
+            "envgroup": "farm",
+        }
+
+        if job.GetJobEnvironmentKeyValue('IS_TEST'):
+            args.append("--automatic-tests")
+
+        if all(add_kwargs.values()):
+            for key, value in add_kwargs.items():
+                args.extend(["--{}".format(key), value])
+        else:
+            raise RuntimeError((
+                "Missing required env vars: AVALON_PROJECT, AVALON_ASSET,"
+                " AVALON_TASK, AVALON_APP_NAME"
+            ))
+
+        environment = {
+            "AYON_SERVER_URL": ayon_server_url,
+            "AYON_API_KEY": ayon_api_key,
+            "AYON_BUNDLE_NAME": ayon_bundle_name,
+        }
+        for env, val in environment.items():
+            deadlinePlugin.SetEnvironmentVariable(env, val)
+
+        args_str = subprocess.list2cmdline(args)
+        print(">>> Executing: {} {}".format(exe, args_str))
+        process_exitcode = deadlinePlugin.RunProcess(
+            exe, args_str, os.path.dirname(exe), -1
+        )
+
+        if process_exitcode != 0:
+            raise RuntimeError(
+                "Failed to run Ayon process to extract environments."
+            )
+
+        print(">>> Loading file ...")
+        with open(export_url) as fp:
+            contents = json.load(fp)
+
+        for key, value in contents.items():
+            deadlinePlugin.SetProcessEnvironmentVariable(key, value)
+
+        if "PATH" in contents:
+            # Set os.environ[PATH] so studio settings' path entries
+            # can be used to define search path for executables.
+            print(f">>> Setting 'PATH' Environment to: {contents['PATH']}")
+            os.environ["PATH"] = contents["PATH"]
+
+        script_url = job.GetJobPluginInfoKeyValue("ScriptFilename")
+        if script_url:
+            script_url = script_url.format(**contents).replace("\\", "/")
+            print(">>> Setting script path {}".format(script_url))
+            job.SetJobPluginInfoKeyValue("ScriptFilename", script_url)
+
+        print(">>> Removing temporary file")
+        os.remove(export_url)
+
+        print(">> Injection end.")
+    except Exception as e:
+        if hasattr(e, "output"):
+            print(">>> Exception {}".format(e.output))
+        import traceback
+        print(traceback.format_exc())
+        print("!!! Injection failed.")
+        RepositoryUtils.FailJob(job)
+        raise
+
+
+def get_ayon_executable():
+    """Return Ayon executable search paths from Ayon Deadline plugin settings.
+
+    Returns:
+        (str): semicolon separated list of paths
+    Raises:
+        (RuntimeError) if no path configured at all
+    """
+    config = RepositoryUtils.GetPluginConfig("Ayon")
+    exe_list = config.GetConfigEntryWithDefault("AyonExecutable", "")
+
+    if not exe_list:
+        raise RuntimeError("Path to Ayon executable not configured. "
+                           "Please set it in the Ayon Deadline Plugin.")
+
+    # clean '\ ' for MacOS pasting
+    if platform.system().lower() == "darwin":
+        exe_list = exe_list.replace("\\ ", " ")
+
+    # Expand user paths
+    expanded_paths = []
+    for path in exe_list.split(";"):
+        if path.startswith("~"):
+            path = os.path.expanduser(path)
+        expanded_paths.append(path)
+    return ";".join(expanded_paths)
+
+
 def inject_render_job_id(deadlinePlugin):
     """Inject dependency ids to publish process as env var for validation."""
     print(">>> Injecting render job id ...")
@@ -422,16 +593,29 @@ def __main__(deadlinePlugin):
     openpype_publish_job = \
         job.GetJobEnvironmentKeyValue('OPENPYPE_PUBLISH_JOB') or '0'
     openpype_remote_job = \
-        job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_JOB') or '0'
+        job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_PUBLISH') or '0'
 
-    print("--- Job type - render {}".format(openpype_render_job))
-    print("--- Job type - publish {}".format(openpype_publish_job))
-    print("--- Job type - remote {}".format(openpype_remote_job))
     if openpype_publish_job == '1' and openpype_render_job == '1':
         raise RuntimeError("Misconfiguration. Job couldn't be both " +
                            "render and publish.")
 
     if openpype_publish_job == '1':
         inject_render_job_id(deadlinePlugin)
-    elif openpype_render_job == '1' or openpype_remote_job == '1':
+    if openpype_render_job == '1' or openpype_remote_job == '1':
         inject_openpype_environment(deadlinePlugin)
+
+    ayon_render_job = \
+        job.GetJobEnvironmentKeyValue('AYON_RENDER_JOB') or '0'
+    ayon_publish_job = \
+        job.GetJobEnvironmentKeyValue('AYON_PUBLISH_JOB') or '0'
+    ayon_remote_job = \
+        job.GetJobEnvironmentKeyValue('AYON_REMOTE_PUBLISH') or '0'
+
+    if ayon_publish_job == '1' and ayon_render_job == '1':
+        raise RuntimeError("Misconfiguration. Job couldn't be both " +
+                           "render and publish.")
+
+    if ayon_publish_job == '1':
+        inject_render_job_id(deadlinePlugin)
+    if ayon_render_job == '1' or ayon_remote_job == '1':
+        inject_ayon_environment(deadlinePlugin)
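For context on the __main__ changes above: the AYON_* and OPENPYPE_* flags are ordinary job environment variables, typically written into the job's submission files by the submitter. A hedged sketch of generating a JobInfo file that carries them, using Deadline's EnvironmentKeyValueN syntax (the plugin name, bundle and context values are all hypothetical):

    job_env = {
        "AYON_RENDER_JOB": "1",             # routes the job through inject_ayon_environment()
        "AYON_BUNDLE_NAME": "prod-bundle",  # required by the injection step
        "AVALON_PROJECT": "demo_project",
        "AVALON_ASSET": "sh010",
        "AVALON_TASK": "compositing",
        "AVALON_APP_NAME": "nuke/14-0",
    }
    with open("job_info.job", "w") as job_file:
        job_file.write("Plugin=Nuke\n")  # hypothetical render plugin
        for index, (key, value) in enumerate(job_env.items()):
            job_file.write("EnvironmentKeyValue{}={}={}\n".format(index, key, value))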
diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
index ff2949766c..43a54a464e 100644
--- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
+++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
@@ -77,4 +77,22 @@ CategoryOrder=0
 Index=4
 Label=Harmony 20 Render Executable
 Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
-Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium
\ No newline at end of file
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium
+
+[Harmony_RenderExecutable_21]
+Type=multilinemultifilename
+Category=Render Executables
+CategoryOrder=0
+Index=5
+Label=Harmony 21 Render Executable
+Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 21 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_21/lnx86_64/bin/HarmonyPremium
+
+[Harmony_RenderExecutable_22]
+Type=multilinemultifilename
+Category=Render Executables
+CategoryOrder=0
+Index=6
+Label=Harmony 22 Render Executable
+Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 22 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 22 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_22/lnx86_64/bin/HarmonyPremium
diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
index 0615af95dd..32ed76b58d 100644
--- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
+++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 from System import *
 from System.Diagnostics import *
 from System.IO import *
@@ -8,13 +9,14 @@ from Deadline.Scripting import *
 
 def GetDeadlinePlugin():
     return HarmonyOpenPypePlugin()
-    
+
 def CleanupDeadlinePlugin( deadlinePlugin ):
     deadlinePlugin.Cleanup()
-    
+
 class HarmonyOpenPypePlugin( DeadlinePlugin ):
 
     def __init__( self ):
+        super().__init__()
         self.InitializeProcessCallback += self.InitializeProcess
         self.RenderExecutableCallback += self.RenderExecutable
         self.RenderArgumentCallback += self.RenderArgument
@@ -24,11 +26,11 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ):
         print("Cleanup")
         for stdoutHandler in self.StdoutHandlers:
             del stdoutHandler.HandleCallback
-    
+
         del self.InitializeProcessCallback
         del self.RenderExecutableCallback
         del self.RenderArgumentCallback
-    
+
     def CheckExitCode( self, exitCode ):
         print("check code")
         if exitCode != 0:
             if exitCode == 100:
                 self.LogInfo( "Renderer reported an error with error code 100. This will be ignored, since the option to ignore it is specified in the Job Properties." )
             else:
                 self.FailRender( "Renderer returned non-zero error code %d. Check the renderer's output."
% exitCode ) - + def InitializeProcess( self ): self.PluginType = PluginType.Simple self.StdoutHandling = True self.PopupHandling = True - + self.AddStdoutHandlerCallback( "Rendered frame ([0-9]+)" ).HandleCallback += self.HandleStdoutProgress - + def HandleStdoutProgress( self ): startFrame = self.GetStartFrame() endFrame = self.GetEndFrame() if( endFrame - startFrame + 1 != 0 ): self.SetProgress( 100 * ( int(self.GetRegexMatch(1)) - startFrame + 1 ) / ( endFrame - startFrame + 1 ) ) - + def RenderExecutable( self ): version = int( self.GetPluginInfoEntry( "Version" ) ) exe = "" @@ -58,7 +60,7 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): if( exe == "" ): self.FailRender( "Harmony render executable was not found in the configured separated list \"" + exeList + "\". The path to the render executable can be configured from the Plugin Configuration in the Deadline Monitor." ) return exe - + def RenderArgument( self ): renderArguments = "-batch" @@ -72,20 +74,20 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): resolutionX = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionX", -1 ) resolutionY = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionY", -1 ) fov = self.GetFloatPluginInfoEntryWithDefault( "FieldOfView", -1 ) - + if resolutionX > 0 and resolutionY > 0 and fov > 0: renderArguments += " -res " + str( resolutionX ) + " " + str( resolutionY ) + " " + str( fov ) - + camera = self.GetPluginInfoEntryWithDefault( "Camera", "" ) - + if not camera == "": renderArguments += " -camera " + camera - + startFrame = str( self.GetStartFrame() ) endFrame = str( self.GetEndFrame() ) - + renderArguments += " -frames " + startFrame + " " + endFrame - + if not self.GetBooleanPluginInfoEntryWithDefault( "IsDatabase", False ): sceneFilename = self.GetPluginInfoEntryWithDefault( "SceneFile", self.GetDataFilename() ) sceneFilename = RepositoryUtils.CheckPathMapping( sceneFilename ) @@ -99,12 +101,12 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): renderArguments += " -scene " + scene version = self.GetPluginInfoEntryWithDefault( "SceneVersion", "" ) renderArguments += " -version " + version - + #tempSceneDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) - #preRenderScript = + #preRenderScript = rendernodeNum = 0 scriptBuilder = StringBuilder() - + while True: nodeName = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Node", "" ) if nodeName == "": @@ -115,35 +117,35 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): nodeLeadingZero = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "LeadingZero", "" ) nodeFormat = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Format", "" ) nodeStartFrame = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "StartFrame", "" ) - + if not nodePath == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingName\", 1, \"" + nodePath + "\" );") - + if not nodeLeadingZero == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"leadingZeros\", 1, \"" + nodeLeadingZero + "\" );") - + if not nodeFormat == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingType\", 1, \"" + nodeFormat + "\" );") - + if not nodeStartFrame == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"start\", 1, \"" + nodeStartFrame + "\" );") - + if nodeType == "Movie": nodePath = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Path", "" ) if not nodePath == "": 
scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"moviePath\", 1, \"" + nodePath + "\" );") - + rendernodeNum += 1 - + tempDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) preRenderScriptName = Path.Combine( tempDirectory, "preRenderScript.txt" ) - + File.WriteAllText( preRenderScriptName, scriptBuilder.ToString() ) - + preRenderInlineScript = self.GetPluginInfoEntryWithDefault( "PreRenderInlineScript", "" ) if preRenderInlineScript: renderArguments += " -preRenderInlineScript \"" + preRenderInlineScript +"\"" - + renderArguments += " -preRenderScript \"" + preRenderScriptName +"\"" - + return renderArguments diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py index 6e1b973fb9..004c58d346 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py @@ -38,6 +38,7 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin): for publish process. """ def __init__(self): + super().__init__() self.InitializeProcessCallback += self.InitializeProcess self.RenderExecutableCallback += self.RenderExecutable self.RenderArgumentCallback += self.RenderArgument @@ -107,7 +108,7 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin): "Scanning for compatible requested " f"version {requested_version}")) dir_list = self.GetConfigEntry("OpenPypeInstallationDirs") - + # clean '\ ' for MacOS pasting if platform.system().lower() == "darwin": dir_list = dir_list.replace("\\ ", " ") diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py index b51daffbc8..9641c16d20 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py @@ -249,6 +249,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): def __init__(self): """Init.""" + super().__init__() self.InitializeProcessCallback += self.initialize_process self.RenderExecutableCallback += self.render_executable self.RenderArgumentCallback += self.render_argument diff --git a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py index df9147bdf7..442206feba 100644 --- a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py @@ -40,6 +40,7 @@ class SyncToAvalonServer(ServerAction): #: Action description. 
description = "Send data from Ftrack to Avalon" role_list = {"Pypeclub", "Administrator", "Project Manager"} + settings_key = "sync_to_avalon" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -48,11 +49,16 @@ class SyncToAvalonServer(ServerAction): def discover(self, session, entities, event): """ Validation """ # Check if selection is valid + is_valid = False for ent in event["data"]["selection"]: # Ignore entities that are not tasks or projects if ent["entityType"].lower() in ["show", "task"]: - return True - return False + is_valid = True + break + + if is_valid: + is_valid = self.valid_roles(session, entities, event) + return is_valid def launch(self, session, in_entities, event): self.log.debug("{}: Creating job".format(self.label)) diff --git a/openpype/modules/ftrack/ftrack_module.py b/openpype/modules/ftrack/ftrack_module.py index d61b5f0b26..b5152ff9c4 100644 --- a/openpype/modules/ftrack/ftrack_module.py +++ b/openpype/modules/ftrack/ftrack_module.py @@ -123,18 +123,7 @@ class FtrackModule( # Add Python 2 modules python_paths = [ # `python-ftrack-api` - os.path.join(python_2_vendor, "ftrack-python-api", "source"), - # `arrow` - os.path.join(python_2_vendor, "arrow"), - # `builtins` from `python-future` - # - `python-future` is strict Python 2 module that cause crashes - # of Python 3 scripts executed through OpenPype - # (burnin script etc.) - os.path.join(python_2_vendor, "builtins"), - # `backports.functools_lru_cache` - os.path.join( - python_2_vendor, "backports.functools_lru_cache" - ) + os.path.join(python_2_vendor, "ftrack-python-api", "source") ] # Load PYTHONPATH from current launch context diff --git a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py index 86ecffd5b8..ac4e499e41 100644 --- a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py +++ b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py @@ -2,11 +2,12 @@ import os import ftrack_api from openpype.settings import get_project_settings -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostFtrackHook(PostLaunchHook): order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py index e13b7e65cd..bea76718ca 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py @@ -1,8 +1,6 @@ import logging import pyblish.api -from openpype.pipeline import legacy_io - class CollectFtrackApi(pyblish.api.ContextPlugin): """ Collects an ftrack session and the current task id. 
""" @@ -24,9 +22,9 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): self.log.debug("Ftrack user: \"{0}\"".format(session.api_user)) # Collect task - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] + project_name = context.data["projectName"] + asset_name = context.data["asset"] + task_name = context.data["task"] # Find project entity project_query = 'Project where full_name is "{0}"'.format(project_name) @@ -46,19 +44,25 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): self.log.debug("Project found: {0}".format(project_entity)) + task_object_type = session.query( + "ObjectType where name is 'Task'").one() + task_object_type_id = task_object_type["id"] asset_entity = None if asset_name: # Find asset entity entity_query = ( - 'TypedContext where project_id is "{0}"' - ' and name is "{1}"' - ).format(project_entity["id"], asset_name) + "TypedContext where project_id is '{}'" + " and name is '{}'" + " and object_type_id != '{}'" + ).format( + project_entity["id"], + asset_name, + task_object_type_id + ) self.log.debug("Asset entity query: < {0} >".format(entity_query)) asset_entities = [] for entity in session.query(entity_query).all(): - # Skip tasks - if entity.entity_type.lower() != "task": - asset_entities.append(entity) + asset_entities.append(entity) if len(asset_entities) == 0: raise AssertionError(( @@ -105,10 +109,19 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): context.data["ftrackEntity"] = asset_entity context.data["ftrackTask"] = task_entity - self.per_instance_process(context, asset_entity, task_entity) + self.per_instance_process( + context, + asset_entity, + task_entity, + task_object_type_id + ) def per_instance_process( - self, context, context_asset_entity, context_task_entity + self, + context, + context_asset_entity, + context_task_entity, + task_object_type_id ): context_task_name = None context_asset_name = None @@ -184,23 +197,27 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): session = context.data["ftrackSession"] project_entity = context.data["ftrackProject"] - asset_names = set() - for asset_name in instance_by_asset_and_task.keys(): - asset_names.add(asset_name) + asset_names = set(instance_by_asset_and_task.keys()) joined_asset_names = ",".join([ "\"{}\"".format(name) for name in asset_names ]) - entities = session.query(( - "TypedContext where project_id is \"{}\" and name in ({})" - ).format(project_entity["id"], joined_asset_names)).all() + entities = session.query( + ( + "TypedContext where project_id is \"{}\" and name in ({})" + " and object_type_id != '{}'" + ).format( + project_entity["id"], + joined_asset_names, + task_object_type_id + ) + ).all() entities_by_name = { entity["name"]: entity for entity in entities } - for asset_name, by_task_data in instance_by_asset_and_task.items(): entity = entities_by_name.get(asset_name) task_entity_by_name = {} diff --git a/openpype/modules/ftrack/plugins/publish/collect_username.py b/openpype/modules/ftrack/plugins/publish/collect_username.py index 798f3960a8..0c7c0a57be 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_username.py +++ b/openpype/modules/ftrack/plugins/publish/collect_username.py @@ -33,7 +33,7 @@ class CollectUsernameForWebpublish(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.0015 label = "Collect ftrack username" hosts = ["webpublisher", "photoshop"] - targets = ["remotepublish", "filespublish", "tvpaint_worker"] + targets = ["webpublish"] def 
diff --git a/openpype/modules/ftrack/plugins/publish/collect_username.py b/openpype/modules/ftrack/plugins/publish/collect_username.py
index 798f3960a8..0c7c0a57be 100644
--- a/openpype/modules/ftrack/plugins/publish/collect_username.py
+++ b/openpype/modules/ftrack/plugins/publish/collect_username.py
@@ -33,7 +33,7 @@ class CollectUsernameForWebpublish(pyblish.api.ContextPlugin):
     order = pyblish.api.CollectorOrder + 0.0015
     label = "Collect ftrack username"
     hosts = ["webpublisher", "photoshop"]
-    targets = ["remotepublish", "filespublish", "tvpaint_worker"]
+    targets = ["webpublish"]
 
     def process(self, context):
         self.log.info("{}".format(self.__class__.__name__))
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
index deb8b414f0..858c0bb2d6 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
@@ -11,10 +11,8 @@ Provides:
 """
 
 import os
-import sys
 import collections
 
-import six
 import pyblish.api
 import clique
 
@@ -29,8 +27,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
     def process(self, instance):
         component_list = instance.data.get("ftrackComponentsList")
         if not component_list:
-            self.log.info(
-                "Instance don't have components to integrate to Ftrack."
+            self.log.debug(
+                "Instance doesn't have components to integrate to Ftrack."
                 " Skipping."
             )
             return
@@ -39,7 +37,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
         task_entity, parent_entity = self.get_instance_entities(
             instance, context)
         if parent_entity is None:
-            self.log.info((
+            self.log.debug((
                 "Skipping ftrack integration. Instance \"{}\" does not"
                 " have specified ftrack entities."
             ).format(str(instance)))
             return
@@ -325,7 +323,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
             "type_id": asset_type_id,
             "context_id": parent_id
         }
-        self.log.info("Created new Asset with data: {}.".format(asset_data))
+        self.log.debug("Created new Asset with data: {}.".format(asset_data))
         session.create("Asset", asset_data)
         session.commit()
         return self._query_asset(session, asset_name, asset_type_id, parent_id)
@@ -355,7 +353,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
         status_name = asset_version_data.pop("status_name", None)
 
         # Try query asset version by criteria (asset id and version)
-        version = asset_version_data.get("version") or 0
+        version = asset_version_data.get("version") or "0"
         asset_version_entity = self._query_asset_version(
             session, version, asset_id
         )
@@ -386,7 +384,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
         if comment:
             new_asset_version_data["comment"] = comment
 
-        self.log.info("Created new AssetVersion with data {}".format(
+        self.log.debug("Created new AssetVersion with data {}".format(
             new_asset_version_data
         ))
         session.create("AssetVersion", new_asset_version_data)
@@ -557,7 +555,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
             location=location
         )
         data["component"] = component_entity
-        self.log.info(
+        self.log.debug(
             (
                 "Created new Component with path: {0}, data: {1},"
                 " metadata: {2}, location: {3}"
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
index 6ed02bc8b6..ceaff8ff54 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
@@ -40,7 +40,7 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
 
         comment = instance.data["comment"]
         if not comment:
-            self.log.info("Comment is not set.")
+            self.log.debug("Comment is not set.")
         else:
             self.log.debug("Comment is set to `{}`".format(comment))
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
index 6e82897d89..10b7932cdf 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
@@ -47,7 +47,7 @@ class 
IntegrateFtrackNote(pyblish.api.InstancePlugin): app_label = context.data["appLabel"] comment = instance.data["comment"] if not comment: - self.log.info("Comment is not set.") + self.log.debug("Comment is not set.") else: self.log.debug("Comment is set to `{}`".format(comment)) @@ -127,14 +127,14 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin): note_text = StringTemplate.format_template(template, format_data) if not note_text.solved: - self.log.warning(( + self.log.debug(( "Note template require more keys then can be provided." "\nTemplate: {}\nMissing values for keys:{}\nData: {}" ).format(template, note_text.missing_keys, format_data)) continue if not note_text: - self.log.info(( + self.log.debug(( "Note for AssetVersion {} would be empty. Skipping." "\nTemplate: {}\nData: {}" ).format(asset_version["id"], template, format_data)) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/.gitignore b/openpype/modules/ftrack/python2_vendor/arrow/.gitignore deleted file mode 100644 index 0448d0cf0c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/.gitignore +++ /dev/null @@ -1,211 +0,0 @@ -README.rst.new - -# Small entry point file for debugging tasks -test.py - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -pip-wheel-metadata/ -share/python-wheels/ -*.egg-info/ -.installed.cfg -*.egg - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.nox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 -db.sqlite3-journal - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# IPython -profile_default/ -ipython_config.py - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -local/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ -.dmypy.json -dmypy.json - -# Pyre type checker -.pyre/ - -# Swap -[._]*.s[a-v][a-z] -[._]*.sw[a-p] -[._]s[a-rt-v][a-z] -[._]ss[a-gi-z] -[._]sw[a-p] - -# Session -Session.vim -Sessionx.vim - -# Temporary -.netrwhist -*~ -# Auto-generated tag files -tags -# Persistent undo -[._]*.un~ - -.idea/ -.vscode/ - -# General -.DS_Store -.AppleDouble -.LSOverride - -# Icon must end with two \r -Icon - - -# Thumbnails -._* - -# Files that might appear in the root of a volume -.DocumentRevisions-V100 -.fseventsd -.Spotlight-V100 -.TemporaryItems -.Trashes -.VolumeIcon.icns -.com.apple.timemachine.donotpresent - -# Directories potentially created on remote AFP share -.AppleDB -.AppleDesktop -Network Trash Folder -Temporary Items -.apdisk - -*~ - -# temporary files which can be created if a process still has a handle open of a deleted file -.fuse_hidden* - -# KDE directory preferences -.directory - -# Linux trash folder which 
might appear on any partition or disk -.Trash-* - -# .nfs files are created when an open file is removed but is still being accessed -.nfs* - -# Windows thumbnail cache files -Thumbs.db -Thumbs.db:encryptable -ehthumbs.db -ehthumbs_vista.db - -# Dump file -*.stackdump - -# Folder config file -[Dd]esktop.ini - -# Recycle Bin used on file shares -$RECYCLE.BIN/ - -# Windows Installer files -*.cab -*.msi -*.msix -*.msm -*.msp - -# Windows shortcuts -*.lnk diff --git a/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml b/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml deleted file mode 100644 index 1f5128595b..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml +++ /dev/null @@ -1,41 +0,0 @@ -default_language_version: - python: python3 -repos: - - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v3.2.0 - hooks: - - id: trailing-whitespace - - id: end-of-file-fixer - - id: fix-encoding-pragma - exclude: ^arrow/_version.py - - id: requirements-txt-fixer - - id: check-ast - - id: check-yaml - - id: check-case-conflict - - id: check-docstring-first - - id: check-merge-conflict - - id: debug-statements - - repo: https://github.com/timothycrosley/isort - rev: 5.4.2 - hooks: - - id: isort - - repo: https://github.com/asottile/pyupgrade - rev: v2.7.2 - hooks: - - id: pyupgrade - - repo: https://github.com/pre-commit/pygrep-hooks - rev: v1.6.0 - hooks: - - id: python-no-eval - - id: python-check-blanket-noqa - - id: rst-backticks - - repo: https://github.com/psf/black - rev: 20.8b1 - hooks: - - id: black - args: [--safe, --quiet] - - repo: https://gitlab.com/pycqa/flake8 - rev: 3.8.3 - hooks: - - id: flake8 - additional_dependencies: [flake8-bugbear] diff --git a/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst b/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst deleted file mode 100644 index 0b55a4522c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst +++ /dev/null @@ -1,598 +0,0 @@ -Changelog -========= - -0.17.0 (2020-10-2) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. This is the last major release to support Python 2.7 and Python 3.5. -- [NEW] Arrow now properly handles imaginary datetimes during DST shifts. For example: - -..code-block:: python - >>> just_before = arrow.get(2013, 3, 31, 1, 55, tzinfo="Europe/Paris") - >>> just_before.shift(minutes=+10) - - -..code-block:: python - >>> before = arrow.get("2018-03-10 23:00:00", "YYYY-MM-DD HH:mm:ss", tzinfo="US/Pacific") - >>> after = arrow.get("2018-03-11 04:00:00", "YYYY-MM-DD HH:mm:ss", tzinfo="US/Pacific") - >>> result=[(t, t.to("utc")) for t in arrow.Arrow.range("hour", before, after)] - >>> for r in result: - ... print(r) - ... - (, ) - (, ) - (, ) - (, ) - (, ) - -- [NEW] Added ``humanize`` week granularity translation for Tagalog. -- [CHANGE] Calls to the ``timestamp`` property now emit a ``DeprecationWarning``. In a future release, ``timestamp`` will be changed to a method to align with Python's datetime module. If you would like to continue using the property, please change your code to use the ``int_timestamp`` or ``float_timestamp`` properties instead. -- [CHANGE] Expanded and improved Catalan locale. -- [FIX] Fixed a bug that caused ``Arrow.range()`` to incorrectly cut off ranges in certain scenarios when using month, quarter, or year endings. -- [FIX] Fixed a bug that caused day of week token parsing to be case sensitive. 
-- [INTERNAL] A number of functions were reordered in arrow.py for better organization and grouping of related methods. This change will have no impact on usage. -- [INTERNAL] A minimum tox version is now enforced for compatibility reasons. Contributors must use tox >3.18.0 going forward. - -0.16.0 (2020-08-23) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. The 0.16.x and 0.17.x releases are the last to support Python 2.7 and 3.5. -- [NEW] Implemented `PEP 495 `_ to handle ambiguous datetimes. This is achieved by the addition of the ``fold`` attribute for Arrow objects. For example: - -.. code-block:: python - - >>> before = Arrow(2017, 10, 29, 2, 0, tzinfo='Europe/Stockholm') - - >>> before.fold - 0 - >>> before.ambiguous - True - >>> after = Arrow(2017, 10, 29, 2, 0, tzinfo='Europe/Stockholm', fold=1) - - >>> after = before.replace(fold=1) - - -- [NEW] Added ``normalize_whitespace`` flag to ``arrow.get``. This is useful for parsing log files and/or any files that may contain inconsistent spacing. For example: - -.. code-block:: python - - >>> arrow.get("Jun 1 2005 1:33PM", "MMM D YYYY H:mmA", normalize_whitespace=True) - - >>> arrow.get("2013-036 \t 04:05:06Z", normalize_whitespace=True) - - -0.15.8 (2020-07-23) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. The 0.15.x, 0.16.x, and 0.17.x releases are the last to support Python 2.7 and 3.5. -- [NEW] Added ``humanize`` week granularity translation for Czech. -- [FIX] ``arrow.get`` will now pick sane defaults when weekdays are passed with particular token combinations, see `#446 `_. -- [INTERNAL] Moved arrow to an organization. The repo can now be found `here `_. -- [INTERNAL] Started issuing deprecation warnings for Python 2.7 and 3.5. -- [INTERNAL] Added Python 3.9 to CI pipeline. - -0.15.7 (2020-06-19) -------------------- - -- [NEW] Added a number of built-in format strings. See the `docs `_ for a complete list of supported formats. For example: - -.. code-block:: python - - >>> arw = arrow.utcnow() - >>> arw.format(arrow.FORMAT_COOKIE) - 'Wednesday, 27-May-2020 10:30:35 UTC' - -- [NEW] Arrow is now fully compatible with Python 3.9 and PyPy3. -- [NEW] Added Makefile, tox.ini, and requirements.txt files to the distribution bundle. -- [NEW] Added French Canadian and Swahili locales. -- [NEW] Added ``humanize`` week granularity translation for Hebrew, Greek, Macedonian, Swedish, Slovak. -- [FIX] ms and μs timestamps are now normalized in ``arrow.get()``, ``arrow.fromtimestamp()``, and ``arrow.utcfromtimestamp()``. For example: - -.. code-block:: python - - >>> ts = 1591161115194556 - >>> arw = arrow.get(ts) - - >>> arw.timestamp - 1591161115 - -- [FIX] Refactored and updated Macedonian, Hebrew, Korean, and Portuguese locales. - -0.15.6 (2020-04-29) -------------------- - -- [NEW] Added support for parsing and formatting `ISO 8601 week dates `_ via a new token ``W``, for example: - -.. code-block:: python - - >>> arrow.get("2013-W29-6", "W") - - >>> utc=arrow.utcnow() - >>> utc - - >>> utc.format("W") - '2020-W04-4' - -- [NEW] Formatting with ``x`` token (microseconds) is now possible, for example: - -.. code-block:: python - - >>> dt = arrow.utcnow() - >>> dt.format("x") - '1585669870688329' - >>> dt.format("X") - '1585669870' - -- [NEW] Added ``humanize`` week granularity translation for German, Italian, Polish & Taiwanese locales. -- [FIX] Consolidated and simplified German locales. 
-- [INTERNAL] Moved testing suite from nosetest/Chai to pytest/pytest-mock. -- [INTERNAL] Converted xunit-style setup and teardown functions in tests to pytest fixtures. -- [INTERNAL] Setup Github Actions for CI alongside Travis. -- [INTERNAL] Help support Arrow's future development by donating to the project on `Open Collective `_. - -0.15.5 (2020-01-03) -------------------- - -- [WARN] Python 2 reached EOL on 2020-01-01. arrow will **drop support** for Python 2 in a future release to be decided (see `#739 `_). -- [NEW] Added bounds parameter to ``span_range``, ``interval`` and ``span`` methods. This allows you to include or exclude the start and end values. -- [NEW] ``arrow.get()`` can now create arrow objects from a timestamp with a timezone, for example: - -.. code-block:: python - - >>> arrow.get(1367900664, tzinfo=tz.gettz('US/Pacific')) - - -- [NEW] ``humanize`` can now combine multiple levels of granularity, for example: - -.. code-block:: python - - >>> later140 = arrow.utcnow().shift(seconds=+8400) - >>> later140.humanize(granularity="minute") - 'in 139 minutes' - >>> later140.humanize(granularity=["hour", "minute"]) - 'in 2 hours and 19 minutes' - -- [NEW] Added Hong Kong locale (``zh_hk``). -- [NEW] Added ``humanize`` week granularity translation for Dutch. -- [NEW] Numbers are now displayed when using the seconds granularity in ``humanize``. -- [CHANGE] ``range`` now supports both the singular and plural forms of the ``frames`` argument (e.g. day and days). -- [FIX] Improved parsing of strings that contain punctuation. -- [FIX] Improved behaviour of ``humanize`` when singular seconds are involved. - -0.15.4 (2019-11-02) -------------------- - -- [FIX] Fixed an issue that caused package installs to fail on Conda Forge. - -0.15.3 (2019-11-02) -------------------- - -- [NEW] ``factory.get()`` can now create arrow objects from a ISO calendar tuple, for example: - -.. code-block:: python - - >>> arrow.get((2013, 18, 7)) - - -- [NEW] Added a new token ``x`` to allow parsing of integer timestamps with milliseconds and microseconds. -- [NEW] Formatting now supports escaping of characters using the same syntax as parsing, for example: - -.. code-block:: python - - >>> arw = arrow.now() - >>> fmt = "YYYY-MM-DD h [h] m" - >>> arw.format(fmt) - '2019-11-02 3 h 32' - -- [NEW] Added ``humanize`` week granularity translations for Chinese, Spanish and Vietnamese. -- [CHANGE] Added ``ParserError`` to module exports. -- [FIX] Added support for midnight at end of day. See `#703 `_ for details. -- [INTERNAL] Created Travis build for macOS. -- [INTERNAL] Test parsing and formatting against full timezone database. - -0.15.2 (2019-09-14) -------------------- - -- [NEW] Added ``humanize`` week granularity translations for Portuguese and Brazilian Portuguese. -- [NEW] Embedded changelog within docs and added release dates to versions. -- [FIX] Fixed a bug that caused test failures on Windows only, see `#668 `_ for details. - -0.15.1 (2019-09-10) -------------------- - -- [NEW] Added ``humanize`` week granularity translations for Japanese. -- [FIX] Fixed a bug that caused Arrow to fail when passed a negative timestamp string. -- [FIX] Fixed a bug that caused Arrow to fail when passed a datetime object with ``tzinfo`` of type ``StaticTzInfo``. - -0.15.0 (2019-09-08) -------------------- - -- [NEW] Added support for DDD and DDDD ordinal date tokens. The following functionality is now possible: ``arrow.get("1998-045")``, ``arrow.get("1998-45", "YYYY-DDD")``, ``arrow.get("1998-045", "YYYY-DDDD")``. 
-- [NEW] ISO 8601 basic format for dates and times is now supported (e.g. ``YYYYMMDDTHHmmssZ``). -- [NEW] Added ``humanize`` week granularity translations for French, Russian and Swiss German locales. -- [CHANGE] Timestamps of type ``str`` are no longer supported **without a format string** in the ``arrow.get()`` method. This change was made to support the ISO 8601 basic format and to address bugs such as `#447 `_. - -The following will NOT work in v0.15.0: - -.. code-block:: python - - >>> arrow.get("1565358758") - >>> arrow.get("1565358758.123413") - -The following will work in v0.15.0: - -.. code-block:: python - - >>> arrow.get("1565358758", "X") - >>> arrow.get("1565358758.123413", "X") - >>> arrow.get(1565358758) - >>> arrow.get(1565358758.123413) - -- [CHANGE] When a meridian token (a|A) is passed and no meridians are available for the specified locale (e.g. unsupported or untranslated) a ``ParserError`` is raised. -- [CHANGE] The timestamp token (``X``) will now match float timestamps of type ``str``: ``arrow.get(“1565358758.123415”, “X”)``. -- [CHANGE] Strings with leading and/or trailing whitespace will no longer be parsed without a format string. Please see `the docs `_ for ways to handle this. -- [FIX] The timestamp token (``X``) will now only match on strings that **strictly contain integers and floats**, preventing incorrect matches. -- [FIX] Most instances of ``arrow.get()`` returning an incorrect ``Arrow`` object from a partial parsing match have been eliminated. The following issue have been addressed: `#91 `_, `#196 `_, `#396 `_, `#434 `_, `#447 `_, `#456 `_, `#519 `_, `#538 `_, `#560 `_. - -0.14.7 (2019-09-04) -------------------- - -- [CHANGE] ``ArrowParseWarning`` will no longer be printed on every call to ``arrow.get()`` with a datetime string. The purpose of the warning was to start a conversation about the upcoming 0.15.0 changes and we appreciate all the feedback that the community has given us! - -0.14.6 (2019-08-28) -------------------- - -- [NEW] Added support for ``week`` granularity in ``Arrow.humanize()``. For example, ``arrow.utcnow().shift(weeks=-1).humanize(granularity="week")`` outputs "a week ago". This change introduced two new untranslated words, ``week`` and ``weeks``, to all locale dictionaries, so locale contributions are welcome! -- [NEW] Fully translated the Brazilian Portugese locale. -- [CHANGE] Updated the Macedonian locale to inherit from a Slavic base. -- [FIX] Fixed a bug that caused ``arrow.get()`` to ignore tzinfo arguments of type string (e.g. ``arrow.get(tzinfo="Europe/Paris")``). -- [FIX] Fixed a bug that occurred when ``arrow.Arrow()`` was instantiated with a ``pytz`` tzinfo object. -- [FIX] Fixed a bug that caused Arrow to fail when passed a sub-second token, that when rounded, had a value greater than 999999 (e.g. ``arrow.get("2015-01-12T01:13:15.9999995")``). Arrow should now accurately propagate the rounding for large sub-second tokens. - -0.14.5 (2019-08-09) -------------------- - -- [NEW] Added Afrikaans locale. -- [CHANGE] Removed deprecated ``replace`` shift functionality. Users looking to pass plural properties to the ``replace`` function to shift values should use ``shift`` instead. -- [FIX] Fixed bug that occurred when ``factory.get()`` was passed a locale kwarg. - -0.14.4 (2019-07-30) -------------------- - -- [FIX] Fixed a regression in 0.14.3 that prevented a tzinfo argument of type string to be passed to the ``get()`` function. 
Functionality such as ``arrow.get("2019072807", "YYYYMMDDHH", tzinfo="UTC")`` should work as normal again. -- [CHANGE] Moved ``backports.functools_lru_cache`` dependency from ``extra_requires`` to ``install_requires`` for ``Python 2.7`` installs to fix `#495 `_. - -0.14.3 (2019-07-28) -------------------- - -- [NEW] Added full support for Python 3.8. -- [CHANGE] Added warnings for upcoming factory.get() parsing changes in 0.15.0. Please see `#612 `_ for full details. -- [FIX] Extensive refactor and update of documentation. -- [FIX] factory.get() can now construct from kwargs. -- [FIX] Added meridians to Spanish Locale. - -0.14.2 (2019-06-06) -------------------- - -- [CHANGE] Travis CI builds now use tox to lint and run tests. -- [FIX] Fixed UnicodeDecodeError on certain locales (#600). - -0.14.1 (2019-06-06) -------------------- - -- [FIX] Fixed ``ImportError: No module named 'dateutil'`` (#598). - -0.14.0 (2019-06-06) -------------------- - -- [NEW] Added provisional support for Python 3.8. -- [CHANGE] Removed support for EOL Python 3.4. -- [FIX] Updated setup.py with modern Python standards. -- [FIX] Upgraded dependencies to latest versions. -- [FIX] Enabled flake8 and black on travis builds. -- [FIX] Formatted code using black and isort. - -0.13.2 (2019-05-30) -------------------- - -- [NEW] Add is_between method. -- [FIX] Improved humanize behaviour for near zero durations (#416). -- [FIX] Correct humanize behaviour with future days (#541). -- [FIX] Documentation updates. -- [FIX] Improvements to German Locale. - -0.13.1 (2019-02-17) -------------------- - -- [NEW] Add support for Python 3.7. -- [CHANGE] Remove deprecation decorators for Arrow.range(), Arrow.span_range() and Arrow.interval(), all now return generators, wrap with list() to get old behavior. -- [FIX] Documentation and docstring updates. - -0.13.0 (2019-01-09) -------------------- - -- [NEW] Added support for Python 3.6. -- [CHANGE] Drop support for Python 2.6/3.3. -- [CHANGE] Return generator instead of list for Arrow.range(), Arrow.span_range() and Arrow.interval(). -- [FIX] Make arrow.get() work with str & tzinfo combo. -- [FIX] Make sure special RegEx characters are escaped in format string. -- [NEW] Added support for ZZZ when formatting. -- [FIX] Stop using datetime.utcnow() in internals, use datetime.now(UTC) instead. -- [FIX] Return NotImplemented instead of TypeError in arrow math internals. -- [NEW] Added Estonian Locale. -- [FIX] Small fixes to Greek locale. -- [FIX] TagalogLocale improvements. -- [FIX] Added test requirements to setup. -- [FIX] Improve docs for get, now and utcnow methods. -- [FIX] Correct typo in depreciation warning. - -0.12.1 ------- - -- [FIX] Allow universal wheels to be generated and reliably installed. -- [FIX] Make humanize respect only_distance when granularity argument is also given. 
- -0.12.0 ------- - -- [FIX] Compatibility fix for Python 2.x - -0.11.0 ------- - -- [FIX] Fix grammar of ArabicLocale -- [NEW] Add Nepali Locale -- [FIX] Fix month name + rename AustriaLocale -> AustrianLocale -- [FIX] Fix typo in Basque Locale -- [FIX] Fix grammar in PortugueseBrazilian locale -- [FIX] Remove pip --user-mirrors flag -- [NEW] Add Indonesian Locale - -0.10.0 ------- - -- [FIX] Fix getattr off by one for quarter -- [FIX] Fix negative offset for UTC -- [FIX] Update arrow.py - -0.9.0 ------ - -- [NEW] Remove duplicate code -- [NEW] Support gnu date iso 8601 -- [NEW] Add support for universal wheels -- [NEW] Slovenian locale -- [NEW] Slovak locale -- [NEW] Romanian locale -- [FIX] respect limit even if end is defined range -- [FIX] Separate replace & shift functions -- [NEW] Added tox -- [FIX] Fix supported Python versions in documentation -- [NEW] Azerbaijani locale added, locale issue fixed in Turkish. -- [FIX] Format ParserError's raise message - -0.8.0 ------ - -- [] - -0.7.1 ------ - -- [NEW] Esperanto locale (batisteo) - -0.7.0 ------ - -- [FIX] Parse localized strings #228 (swistakm) -- [FIX] Modify tzinfo parameter in ``get`` api #221 (bottleimp) -- [FIX] Fix Czech locale (PrehistoricTeam) -- [FIX] Raise TypeError when adding/subtracting non-dates (itsmeolivia) -- [FIX] Fix pytz conversion error (Kudo) -- [FIX] Fix overzealous time truncation in span_range (kdeldycke) -- [NEW] Humanize for time duration #232 (ybrs) -- [NEW] Add Thai locale (sipp11) -- [NEW] Adding Belarusian (be) locale (oire) -- [NEW] Search date in strings (beenje) -- [NEW] Note that arrow's tokens differ from strptime's. (offby1) - -0.6.0 ------ - -- [FIX] Added support for Python 3 -- [FIX] Avoid truncating oversized epoch timestamps. Fixes #216. -- [FIX] Fixed month abbreviations for Ukrainian -- [FIX] Fix typo timezone -- [FIX] A couple of dialect fixes and two new languages -- [FIX] Spanish locale: ``Miercoles`` should have acute accent -- [Fix] Fix Finnish grammar -- [FIX] Fix typo in 'Arrow.floor' docstring -- [FIX] Use read() utility to open README -- [FIX] span_range for week frame -- [NEW] Add minimal support for fractional seconds longer than six digits. -- [NEW] Adding locale support for Marathi (mr) -- [NEW] Add count argument to span method -- [NEW] Improved docs - -0.5.1 - 0.5.4 -------------- - -- [FIX] test the behavior of simplejson instead of calling for_json directly (tonyseek) -- [FIX] Add Hebrew Locale (doodyparizada) -- [FIX] Update documentation location (andrewelkins) -- [FIX] Update setup.py Development Status level (andrewelkins) -- [FIX] Case insensitive month match (cshowe) - -0.5.0 ------ - -- [NEW] struct_time addition. 
(mhworth) -- [NEW] Version grep (eirnym) -- [NEW] Default to ISO 8601 format (emonty) -- [NEW] Raise TypeError on comparison (sniekamp) -- [NEW] Adding Macedonian(mk) locale (krisfremen) -- [FIX] Fix for ISO seconds and fractional seconds (sdispater) (andrewelkins) -- [FIX] Use correct Dutch wording for "hours" (wbolster) -- [FIX] Complete the list of english locales (indorilftw) -- [FIX] Change README to reStructuredText (nyuszika7h) -- [FIX] Parse lower-cased 'h' (tamentis) -- [FIX] Slight modifications to Dutch locale (nvie) - -0.4.4 ------ - -- [NEW] Include the docs in the released tarball -- [NEW] Czech localization Czech localization for Arrow -- [NEW] Add fa_ir to locales -- [FIX] Fixes parsing of time strings with a final Z -- [FIX] Fixes ISO parsing and formatting for fractional seconds -- [FIX] test_fromtimestamp sp -- [FIX] some typos fixed -- [FIX] removed an unused import statement -- [FIX] docs table fix -- [FIX] Issue with specify 'X' template and no template at all to arrow.get -- [FIX] Fix "import" typo in docs/index.rst -- [FIX] Fix unit tests for zero passed -- [FIX] Update layout.html -- [FIX] In Norwegian and new Norwegian months and weekdays should not be capitalized -- [FIX] Fixed discrepancy between specifying 'X' to arrow.get and specifying no template - -0.4.3 ------ - -- [NEW] Turkish locale (Emre) -- [NEW] Arabic locale (Mosab Ahmad) -- [NEW] Danish locale (Holmars) -- [NEW] Icelandic locale (Holmars) -- [NEW] Hindi locale (Atmb4u) -- [NEW] Malayalam locale (Atmb4u) -- [NEW] Finnish locale (Stormpat) -- [NEW] Portuguese locale (Danielcorreia) -- [NEW] ``h`` and ``hh`` strings are now supported (Averyonghub) -- [FIX] An incorrect inflection in the Polish locale has been fixed (Avalanchy) -- [FIX] ``arrow.get`` now properly handles ``Date`` (Jaapz) -- [FIX] Tests are now declared in ``setup.py`` and the manifest (Pypingou) -- [FIX] ``__version__`` has been added to ``__init__.py`` (Sametmax) -- [FIX] ISO 8601 strings can be parsed without a separator (Ivandiguisto / Root) -- [FIX] Documentation is now more clear regarding some inputs on ``arrow.get`` (Eriktaubeneck) -- [FIX] Some documentation links have been fixed (Vrutsky) -- [FIX] Error messages for parse errors are now more descriptive (Maciej Albin) -- [FIX] The parser now correctly checks for separators in strings (Mschwager) - -0.4.2 ------ - -- [NEW] Factory ``get`` method now accepts a single ``Arrow`` argument. -- [NEW] Tokens SSSS, SSSSS and SSSSSS are supported in parsing. -- [NEW] ``Arrow`` objects have a ``float_timestamp`` property. 
-- [NEW] Vietnamese locale (Iu1nguoi) -- [NEW] Factory ``get`` method now accepts a list of format strings (Dgilland) -- [NEW] A MANIFEST.in file has been added (Pypingou) -- [NEW] Tests can be run directly from ``setup.py`` (Pypingou) -- [FIX] Arrow docs now list 'day of week' format tokens correctly (Rudolphfroger) -- [FIX] Several issues with the Korean locale have been resolved (Yoloseem) -- [FIX] ``humanize`` now correctly returns unicode (Shvechikov) -- [FIX] ``Arrow`` objects now pickle / unpickle correctly (Yoloseem) - -0.4.1 ------ - -- [NEW] Table / explanation of formatting & parsing tokens in docs -- [NEW] Brazilian locale (Augusto2112) -- [NEW] Dutch locale (OrangeTux) -- [NEW] Italian locale (Pertux) -- [NEW] Austrain locale (LeChewbacca) -- [NEW] Tagalog locale (Marksteve) -- [FIX] Corrected spelling and day numbers in German locale (LeChewbacca) -- [FIX] Factory ``get`` method should now handle unicode strings correctly (Bwells) -- [FIX] Midnight and noon should now parse and format correctly (Bwells) - -0.4.0 ------ - -- [NEW] Format-free ISO 8601 parsing in factory ``get`` method -- [NEW] Support for 'week' / 'weeks' in ``span``, ``range``, ``span_range``, ``floor`` and ``ceil`` -- [NEW] Support for 'weeks' in ``replace`` -- [NEW] Norwegian locale (Martinp) -- [NEW] Japanese locale (CortYuming) -- [FIX] Timezones no longer show the wrong sign when formatted (Bean) -- [FIX] Microseconds are parsed correctly from strings (Bsidhom) -- [FIX] Locale day-of-week is no longer off by one (Cynddl) -- [FIX] Corrected plurals of Ukrainian and Russian nouns (Catchagain) -- [CHANGE] Old 0.1 ``arrow`` module method removed -- [CHANGE] Dropped timestamp support in ``range`` and ``span_range`` (never worked correctly) -- [CHANGE] Dropped parsing of single string as tz string in factory ``get`` method (replaced by ISO 8601) - -0.3.5 ------ - -- [NEW] French locale (Cynddl) -- [NEW] Spanish locale (Slapresta) -- [FIX] Ranges handle multiple timezones correctly (Ftobia) - -0.3.4 ------ - -- [FIX] Humanize no longer sometimes returns the wrong month delta -- [FIX] ``__format__`` works correctly with no format string - -0.3.3 ------ - -- [NEW] Python 2.6 support -- [NEW] Initial support for locale-based parsing and formatting -- [NEW] ArrowFactory class, now proxied as the module API -- [NEW] ``factory`` api method to obtain a factory for a custom type -- [FIX] Python 3 support and tests completely ironed out - -0.3.2 ------ - -- [NEW] Python 3+ support - -0.3.1 ------ - -- [FIX] The old ``arrow`` module function handles timestamps correctly as it used to - -0.3.0 ------ - -- [NEW] ``Arrow.replace`` method -- [NEW] Accept timestamps, datetimes and Arrows for datetime inputs, where reasonable -- [FIX] ``range`` and ``span_range`` respect end and limit parameters correctly -- [CHANGE] Arrow objects are no longer mutable -- [CHANGE] Plural attribute name semantics altered: single -> absolute, plural -> relative -- [CHANGE] Plural names no longer supported as properties (e.g. 
``arrow.utcnow().years``) - -0.2.1 ------ - -- [NEW] Support for localized humanization -- [NEW] English, Russian, Greek, Korean, Chinese locales - -0.2.0 ------ - -- **REWRITE** -- [NEW] Date parsing -- [NEW] Date formatting -- [NEW] ``floor``, ``ceil`` and ``span`` methods -- [NEW] ``datetime`` interface implementation -- [NEW] ``clone`` method -- [NEW] ``get``, ``now`` and ``utcnow`` API methods - -0.1.6 ------ - -- [NEW] Humanized time deltas -- [NEW] ``__eq__`` implemented -- [FIX] Issues with conversions related to daylight savings time resolved -- [CHANGE] ``__str__`` uses ISO formatting - -0.1.5 ------ - -- **Started tracking changes** -- [NEW] Parsing of ISO-formatted time zone offsets (e.g. '+02:30', '-05:00') -- [NEW] Resolved some issues with timestamps and delta / Olson time zones diff --git a/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in b/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in deleted file mode 100644 index d9955ed96a..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in +++ /dev/null @@ -1,3 +0,0 @@ -include LICENSE CHANGELOG.rst README.rst Makefile requirements.txt tox.ini -recursive-include tests *.py -recursive-include docs *.py *.rst *.bat Makefile diff --git a/openpype/modules/ftrack/python2_vendor/arrow/Makefile b/openpype/modules/ftrack/python2_vendor/arrow/Makefile deleted file mode 100644 index f294985dc6..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/Makefile +++ /dev/null @@ -1,44 +0,0 @@ -.PHONY: auto test docs clean - -auto: build38 - -build27: PYTHON_VER = python2.7 -build35: PYTHON_VER = python3.5 -build36: PYTHON_VER = python3.6 -build37: PYTHON_VER = python3.7 -build38: PYTHON_VER = python3.8 -build39: PYTHON_VER = python3.9 - -build27 build35 build36 build37 build38 build39: clean - virtualenv venv --python=$(PYTHON_VER) - . venv/bin/activate; \ - pip install -r requirements.txt; \ - pre-commit install - -test: - rm -f .coverage coverage.xml - . venv/bin/activate; pytest - -lint: - . venv/bin/activate; pre-commit run --all-files --show-diff-on-failure - -docs: - rm -rf docs/_build - . venv/bin/activate; cd docs; make html - -clean: clean-dist - rm -rf venv .pytest_cache ./**/__pycache__ - rm -f .coverage coverage.xml ./**/*.pyc - -clean-dist: - rm -rf dist build .egg .eggs arrow.egg-info - -build-dist: - . venv/bin/activate; \ - pip install -U setuptools twine wheel; \ - python setup.py sdist bdist_wheel - -upload-dist: - . venv/bin/activate; twine upload dist/* - -publish: test clean-dist build-dist upload-dist clean-dist diff --git a/openpype/modules/ftrack/python2_vendor/arrow/README.rst b/openpype/modules/ftrack/python2_vendor/arrow/README.rst deleted file mode 100644 index 69f6c50d81..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/README.rst +++ /dev/null @@ -1,133 +0,0 @@ -Arrow: Better dates & times for Python -====================================== - -.. start-inclusion-marker-do-not-remove - -.. image:: https://github.com/arrow-py/arrow/workflows/tests/badge.svg?branch=master - :alt: Build Status - :target: https://github.com/arrow-py/arrow/actions?query=workflow%3Atests+branch%3Amaster - -.. image:: https://codecov.io/gh/arrow-py/arrow/branch/master/graph/badge.svg - :alt: Coverage - :target: https://codecov.io/gh/arrow-py/arrow - -.. image:: https://img.shields.io/pypi/v/arrow.svg - :alt: PyPI Version - :target: https://pypi.python.org/pypi/arrow - -.. 
image:: https://img.shields.io/pypi/pyversions/arrow.svg - :alt: Supported Python Versions - :target: https://pypi.python.org/pypi/arrow - -.. image:: https://img.shields.io/pypi/l/arrow.svg - :alt: License - :target: https://pypi.python.org/pypi/arrow - -.. image:: https://img.shields.io/badge/code%20style-black-000000.svg - :alt: Code Style: Black - :target: https://github.com/psf/black - - -**Arrow** is a Python library that offers a sensible and human-friendly approach to creating, manipulating, formatting and converting dates, times and timestamps. It implements and updates the datetime type, plugging gaps in functionality and providing an intelligent module API that supports many common creation scenarios. Simply put, it helps you work with dates and times with fewer imports and a lot less code. - -Arrow is named after the arrow of time and is heavily inspired by moment.js and requests. - -Why use Arrow over built-in modules? ------------------------------------ - -Python's standard library and some other low-level modules have near-complete date, time and timezone functionality, but don't work very well from a usability perspective: - -- Too many modules: datetime, time, calendar, dateutil, pytz and more -- Too many types: date, time, datetime, tzinfo, timedelta, relativedelta, etc. -- Timezones and timestamp conversions are verbose and unpleasant -- Timezone naivety is the norm -- Gaps in functionality: ISO 8601 parsing, timespans, humanization - -Features -------- - -- Fully-implemented, drop-in replacement for datetime -- Supports Python 2.7, 3.5, 3.6, 3.7, 3.8 and 3.9 -- Timezone-aware and UTC by default -- Provides super-simple creation options for many common input scenarios -- :code:`shift` method with support for relative offsets, including weeks -- Formats and parses strings automatically -- Wide support for ISO 8601 -- Timezone conversion -- Timestamp available as a property -- Generates time spans, ranges, floors and ceilings for time frames ranging from microsecond to year -- Humanizes and supports a growing list of contributed locales -- Extensible for your own Arrow-derived types - -Quick Start ----------- - -Installation ~~~~~~~~~~~~ - -To install Arrow, use pip or pipenv: - -.. code-block:: console - - $ pip install -U arrow - -Example Usage ~~~~~~~~~~~~~ - -.. code-block:: python - - >>> import arrow - >>> arrow.get('2013-05-11T21:23:58.970460+07:00') - <Arrow [2013-05-11T21:23:58.970460+07:00]> - - >>> utc = arrow.utcnow() - >>> utc - <Arrow [2013-05-11T21:23:58.970460+00:00]> - - >>> utc = utc.shift(hours=-1) - >>> utc - <Arrow [2013-05-11T20:23:58.970460+00:00]> - - >>> local = utc.to('US/Pacific') - >>> local - <Arrow [2013-05-11T13:23:58.970460-07:00]> - - >>> local.timestamp - 1368303838 - - >>> local.format() - '2013-05-11 13:23:58 -07:00' - - >>> local.format('YYYY-MM-DD HH:mm:ss ZZ') - '2013-05-11 13:23:58 -07:00' - - >>> local.humanize() - 'an hour ago' - - >>> local.humanize(locale='ko_kr') - '1시간 전' - -.. end-inclusion-marker-do-not-remove - -Documentation ------------- - -For full documentation, please visit arrow.readthedocs.io. - -Contributing ------------ - -Contributions are welcome for both code and localizations (adding and updating locales). Begin by gaining familiarity with the Arrow library and its features. Then, jump into contributing: - -#. Find an issue or feature to tackle on the issue tracker. Issues marked with the "good first issue" label may be a great place to start! -#. Fork this repository on GitHub and begin making changes in a branch. -#. Add a few tests to ensure that the bug was fixed or the feature works as expected. -#. 
Run the entire test suite and linting checks by running one of the following commands: :code:`tox` (if you have tox installed) **OR** :code:`make build38 && make test && make lint` (if you do not have Python 3.8 installed, replace :code:`build38` with the latest Python version on your system). -#. Submit a pull request and await feedback 😃. - -If you have any questions along the way, feel free to ask them. - -Support Arrow ------------- - -Open Collective is an online funding platform that provides tools to raise money and share your finances with full transparency. It is the platform of choice for individuals and companies to make one-time or recurring donations directly to the project. If you are interested in making a financial contribution, please visit the Arrow collective. diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile b/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile deleted file mode 100644 index d4bb2cbb9e..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py b/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py deleted file mode 100644 index aaf3c50822..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py +++ /dev/null @@ -1,62 +0,0 @@ -# -*- coding: utf-8 -*- - -# -- Path setup -------------------------------------------------------------- - -import io -import os -import sys - -sys.path.insert(0, os.path.abspath("..")) - -about = {} -with io.open("../arrow/_version.py", "r", encoding="utf-8") as f: - exec(f.read(), about) - -# -- Project information ----------------------------------------------------- - -project = u"Arrow 🏹" -copyright = "2020, Chris Smith" -author = "Chris Smith" - -release = about["__version__"] - -# -- General configuration --------------------------------------------------- - -extensions = ["sphinx.ext.autodoc"] - -templates_path = [] - -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -master_doc = "index" -source_suffix = ".rst" -pygments_style = "sphinx" - -language = None - -# -- Options for HTML output ------------------------------------------------- - -html_theme = "alabaster" -html_theme_path = [] -html_static_path = [] - -html_show_sourcelink = False -html_show_sphinx = False -html_show_copyright = True - -# https://alabaster.readthedocs.io/en/latest/customization.html -html_theme_options = { - "description": "Arrow is a sensible and human-friendly approach to dates, times and timestamps.", - "github_user": "arrow-py", - "github_repo": "arrow", - "github_banner": True, - "show_related": False, - "show_powered_by": False, - "github_button": True, - "github_type": "star", - "github_count": "true", # must be a string -} - -html_sidebars = { - "**": ["about.html", "localtoc.html", "relations.html", 
"searchbox.html"] -} diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst b/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst deleted file mode 100644 index e2830b04f3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst +++ /dev/null @@ -1,566 +0,0 @@ -Arrow: Better dates & times for Python -====================================== - -Release v\ |release| (`Installation`_) (`Changelog `_) - -.. include:: ../README.rst - :start-after: start-inclusion-marker-do-not-remove - :end-before: end-inclusion-marker-do-not-remove - -User's Guide ------------- - -Creation -~~~~~~~~ - -Get 'now' easily: - -.. code-block:: python - - >>> arrow.utcnow() - - - >>> arrow.now() - - - >>> arrow.now('US/Pacific') - - -Create from timestamps (:code:`int` or :code:`float`): - -.. code-block:: python - - >>> arrow.get(1367900664) - - - >>> arrow.get(1367900664.152325) - - -Use a naive or timezone-aware datetime, or flexibly specify a timezone: - -.. code-block:: python - - >>> arrow.get(datetime.utcnow()) - - - >>> arrow.get(datetime(2013, 5, 5), 'US/Pacific') - - - >>> from dateutil import tz - >>> arrow.get(datetime(2013, 5, 5), tz.gettz('US/Pacific')) - - - >>> arrow.get(datetime.now(tz.gettz('US/Pacific'))) - - -Parse from a string: - -.. code-block:: python - - >>> arrow.get('2013-05-05 12:30:45', 'YYYY-MM-DD HH:mm:ss') - - -Search a date in a string: - -.. code-block:: python - - >>> arrow.get('June was born in May 1980', 'MMMM YYYY') - - -Some ISO 8601 compliant strings are recognized and parsed without a format string: - - >>> arrow.get('2013-09-30T15:34:00.000-07:00') - - -Arrow objects can be instantiated directly too, with the same arguments as a datetime: - -.. code-block:: python - - >>> arrow.get(2013, 5, 5) - - - >>> arrow.Arrow(2013, 5, 5) - - -Properties -~~~~~~~~~~ - -Get a datetime or timestamp representation: - -.. code-block:: python - - >>> a = arrow.utcnow() - >>> a.datetime - datetime.datetime(2013, 5, 7, 4, 38, 15, 447644, tzinfo=tzutc()) - - >>> a.timestamp - 1367901495 - -Get a naive datetime, and tzinfo: - -.. code-block:: python - - >>> a.naive - datetime.datetime(2013, 5, 7, 4, 38, 15, 447644) - - >>> a.tzinfo - tzutc() - -Get any datetime value: - -.. code-block:: python - - >>> a.year - 2013 - -Call datetime functions that return properties: - -.. code-block:: python - - >>> a.date() - datetime.date(2013, 5, 7) - - >>> a.time() - datetime.time(4, 38, 15, 447644) - -Replace & Shift -~~~~~~~~~~~~~~~ - -Get a new :class:`Arrow ` object, with altered attributes, just as you would with a datetime: - -.. code-block:: python - - >>> arw = arrow.utcnow() - >>> arw - - - >>> arw.replace(hour=4, minute=40) - - -Or, get one with attributes shifted forward or backward: - -.. code-block:: python - - >>> arw.shift(weeks=+3) - - -Even replace the timezone without altering other attributes: - -.. code-block:: python - - >>> arw.replace(tzinfo='US/Pacific') - - -Move between the earlier and later moments of an ambiguous time: - -.. code-block:: python - - >>> paris_transition = arrow.Arrow(2019, 10, 27, 2, tzinfo="Europe/Paris", fold=0) - >>> paris_transition - - >>> paris_transition.ambiguous - True - >>> paris_transition.replace(fold=1) - - -Format -~~~~~~ - -.. code-block:: python - - >>> arrow.utcnow().format('YYYY-MM-DD HH:mm:ss ZZ') - '2013-05-07 05:23:16 -00:00' - -Convert -~~~~~~~ - -Convert from UTC to other timezones by name or tzinfo: - -.. 
code-block:: python - - >>> utc = arrow.utcnow() - >>> utc - - - >>> utc.to('US/Pacific') - - - >>> utc.to(tz.gettz('US/Pacific')) - - -Or using shorthand: - -.. code-block:: python - - >>> utc.to('local') - - - >>> utc.to('local').to('utc') - - - -Humanize -~~~~~~~~ - -Humanize relative to now: - -.. code-block:: python - - >>> past = arrow.utcnow().shift(hours=-1) - >>> past.humanize() - 'an hour ago' - -Or another Arrow, or datetime: - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(hours=2) - >>> future.humanize(present) - 'in 2 hours' - -Indicate time as relative or include only the distance - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(hours=2) - >>> future.humanize(present) - 'in 2 hours' - >>> future.humanize(present, only_distance=True) - '2 hours' - - -Indicate a specific time granularity (or multiple): - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(minutes=66) - >>> future.humanize(present, granularity="minute") - 'in 66 minutes' - >>> future.humanize(present, granularity=["hour", "minute"]) - 'in an hour and 6 minutes' - >>> present.humanize(future, granularity=["hour", "minute"]) - 'an hour and 6 minutes ago' - >>> future.humanize(present, only_distance=True, granularity=["hour", "minute"]) - 'an hour and 6 minutes' - -Support for a growing number of locales (see ``locales.py`` for supported languages): - -.. code-block:: python - - - >>> future = arrow.utcnow().shift(hours=1) - >>> future.humanize(a, locale='ru') - 'через 2 час(а,ов)' - - -Ranges & Spans -~~~~~~~~~~~~~~ - -Get the time span of any unit: - -.. code-block:: python - - >>> arrow.utcnow().span('hour') - (, ) - -Or just get the floor and ceiling: - -.. code-block:: python - - >>> arrow.utcnow().floor('hour') - - - >>> arrow.utcnow().ceil('hour') - - -You can also get a range of time spans: - -.. code-block:: python - - >>> start = datetime(2013, 5, 5, 12, 30) - >>> end = datetime(2013, 5, 5, 17, 15) - >>> for r in arrow.Arrow.span_range('hour', start, end): - ... print r - ... - (, ) - (, ) - (, ) - (, ) - (, ) - -Or just iterate over a range of time: - -.. code-block:: python - - >>> start = datetime(2013, 5, 5, 12, 30) - >>> end = datetime(2013, 5, 5, 17, 15) - >>> for r in arrow.Arrow.range('hour', start, end): - ... print repr(r) - ... - - - - - - -.. toctree:: - :maxdepth: 2 - -Factories -~~~~~~~~~ - -Use factories to harness Arrow's module API for a custom Arrow-derived type. First, derive your type: - -.. code-block:: python - - >>> class CustomArrow(arrow.Arrow): - ... - ... def days_till_xmas(self): - ... - ... xmas = arrow.Arrow(self.year, 12, 25) - ... - ... if self > xmas: - ... xmas = xmas.shift(years=1) - ... - ... return (xmas - self).days - - -Then get and use a factory for it: - -.. code-block:: python - - >>> factory = arrow.ArrowFactory(CustomArrow) - >>> custom = factory.utcnow() - >>> custom - >>> - - >>> custom.days_till_xmas() - >>> 211 - -Supported Tokens -~~~~~~~~~~~~~~~~ - -Use the following tokens for parsing and formatting. Note that they are **not** the same as the tokens for `strptime `_: - -+--------------------------------+--------------+-------------------------------------------+ -| |Token |Output | -+================================+==============+===========================================+ -|**Year** |YYYY |2000, 2001, 2002 ... 2012, 2013 | -+--------------------------------+--------------+-------------------------------------------+ -| |YY |00, 01, 02 ... 
12, 13 | -+--------------------------------+--------------+-------------------------------------------+ -|**Month** |MMMM |January, February, March ... [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |MMM |Jan, Feb, Mar ... [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |MM |01, 02, 03 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -| |M |1, 2, 3 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Year** |DDDD |001, 002, 003 ... 364, 365 | -+--------------------------------+--------------+-------------------------------------------+ -| |DDD |1, 2, 3 ... 364, 365 | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Month** |DD |01, 02, 03 ... 30, 31 | -+--------------------------------+--------------+-------------------------------------------+ -| |D |1, 2, 3 ... 30, 31 | -+--------------------------------+--------------+-------------------------------------------+ -| |Do |1st, 2nd, 3rd ... 30th, 31st | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Week** |dddd |Monday, Tuesday, Wednesday ... [#t2]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |ddd |Mon, Tue, Wed ... [#t2]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |d |1, 2, 3 ... 6, 7 | -+--------------------------------+--------------+-------------------------------------------+ -|**ISO week date** |W |2011-W05-4, 2019-W17 | -+--------------------------------+--------------+-------------------------------------------+ -|**Hour** |HH |00, 01, 02 ... 23, 24 | -+--------------------------------+--------------+-------------------------------------------+ -| |H |0, 1, 2 ... 23, 24 | -+--------------------------------+--------------+-------------------------------------------+ -| |hh |01, 02, 03 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -| |h |1, 2, 3 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -|**AM / PM** |A |AM, PM, am, pm [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |a |am, pm [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**Minute** |mm |00, 01, 02 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -| |m |0, 1, 2 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -|**Second** |ss |00, 01, 02 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -| |s |0, 1, 2 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -|**Sub-second** |S... |0, 02, 003, 000006, 123123123123... [#t3]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**Timezone** |ZZZ |Asia/Baku, Europe/Warsaw, GMT ... [#t4]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |ZZ |-07:00, -06:00 ... 
+06:00, +07:00, +08, Z | -+--------------------------------+--------------+-------------------------------------------+ -| |Z |-0700, -0600 ... +0600, +0700, +08, Z | -+--------------------------------+--------------+-------------------------------------------+ -|**Seconds Timestamp** |X |1381685817, 1381685817.915482 ... [#t5]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**ms or µs Timestamp** |x |1569980330813, 1569980330813221 | -+--------------------------------+--------------+-------------------------------------------+ - -.. rubric:: Footnotes - -.. [#t1] localization support for parsing and formatting -.. [#t2] localization support only for formatting -.. [#t3] the result is truncated to microseconds, with `half-to-even rounding `_. -.. [#t4] timezone names from `tz database `_ provided via dateutil package, note that abbreviations such as MST, PDT, BRST are unlikely to parse due to ambiguity. Use the full IANA zone name instead (Asia/Shanghai, Europe/London, America/Chicago etc). -.. [#t5] this token cannot be used for parsing timestamps out of natural language strings due to compatibility reasons - -Built-in Formats -++++++++++++++++ - -There are several formatting standards that are provided as built-in tokens. - -.. code-block:: python - - >>> arw = arrow.utcnow() - >>> arw.format(arrow.FORMAT_ATOM) - '2020-05-27 10:30:35+00:00' - >>> arw.format(arrow.FORMAT_COOKIE) - 'Wednesday, 27-May-2020 10:30:35 UTC' - >>> arw.format(arrow.FORMAT_RSS) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC822) - 'Wed, 27 May 20 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC850) - 'Wednesday, 27-May-20 10:30:35 UTC' - >>> arw.format(arrow.FORMAT_RFC1036) - 'Wed, 27 May 20 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC1123) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC2822) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC3339) - '2020-05-27 10:30:35+00:00' - >>> arw.format(arrow.FORMAT_W3C) - '2020-05-27 10:30:35+00:00' - -Escaping Formats -~~~~~~~~~~~~~~~~ - -Tokens, phrases, and regular expressions in a format string can be escaped when parsing and formatting by enclosing them within square brackets. - -Tokens & Phrases -++++++++++++++++ - -Any `token `_ or phrase can be escaped as follows: - -.. code-block:: python - - >>> fmt = "YYYY-MM-DD h [h] m" - >>> arw = arrow.get("2018-03-09 8 h 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 h 40' - - >>> fmt = "YYYY-MM-DD h [hello] m" - >>> arw = arrow.get("2018-03-09 8 hello 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 hello 40' - - >>> fmt = "YYYY-MM-DD h [hello world] m" - >>> arw = arrow.get("2018-03-09 8 hello world 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 hello world 40' - -This can be useful for parsing dates in different locales such as French, in which it is common to format time strings as "8 h 40" rather than "8:40". - -Regular Expressions -+++++++++++++++++++ - -You can also escape regular expressions by enclosing them within square brackets. In the following example, we are using the regular expression :code:`\s+` to match any number of whitespace characters that separate the tokens. This is useful if you do not know the number of spaces between tokens ahead of time (e.g. in log files). - -.. 
code-block:: python - - >>> fmt = r"ddd[\s+]MMM[\s+]DD[\s+]HH:mm:ss[\s+]YYYY" - >>> arrow.get("Mon Sep 08 16:41:45 2014", fmt) - - - >>> arrow.get("Mon \tSep 08 16:41:45 2014", fmt) - - - >>> arrow.get("Mon Sep 08 16:41:45 2014", fmt) - - -Punctuation -~~~~~~~~~~~ - -Date and time formats may be fenced on either side by one punctuation character from the following list: ``, . ; : ? ! " \` ' [ ] { } ( ) < >`` - -.. code-block:: python - - >>> arrow.get("Cool date: 2019-10-31T09:12:45.123456+04:30.", "YYYY-MM-DDTHH:mm:ss.SZZ") - - - >>> arrow.get("Tomorrow (2019-10-31) is Halloween!", "YYYY-MM-DD") - - - >>> arrow.get("Halloween is on 2019.10.31.", "YYYY.MM.DD") - - - >>> arrow.get("It's Halloween tomorrow (2019-10-31)!", "YYYY-MM-DD") - # Raises exception because there are multiple punctuation marks following the date - -Redundant Whitespace -~~~~~~~~~~~~~~~~~~~~ - -Redundant whitespace characters (spaces, tabs, and newlines) can be normalized automatically by passing in the ``normalize_whitespace`` flag to ``arrow.get``: - -.. code-block:: python - - >>> arrow.get('\t \n 2013-05-05T12:30:45.123456 \t \n', normalize_whitespace=True) - - - >>> arrow.get('2013-05-05 T \n 12:30:45\t123456', 'YYYY-MM-DD T HH:mm:ss S', normalize_whitespace=True) - - -API Guide ---------- - -arrow.arrow -~~~~~~~~~~~ - -.. automodule:: arrow.arrow - :members: - -arrow.factory -~~~~~~~~~~~~~ - -.. automodule:: arrow.factory - :members: - -arrow.api -~~~~~~~~~ - -.. automodule:: arrow.api - :members: - -arrow.locale -~~~~~~~~~~~~ - -.. automodule:: arrow.locales - :members: - :undoc-members: - -Release History ---------------- - -.. toctree:: - :maxdepth: 2 - - releases diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat b/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat deleted file mode 100644 index 922152e96a..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat +++ /dev/null @@ -1,35 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=sphinx-build -) -set SOURCEDIR=. -set BUILDDIR=_build - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The 'sphinx-build' command was not found. Make sure you have Sphinx - echo.installed, then set the SPHINXBUILD environment variable to point - echo.to the full path of the 'sphinx-build' executable. Alternatively you - echo.may add the Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% - -:end -popd diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst b/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst deleted file mode 100644 index 22e1e59c8c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst +++ /dev/null @@ -1,3 +0,0 @@ -.. _releases: - -.. 
include:: ../CHANGELOG.rst diff --git a/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt b/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt deleted file mode 100644 index df565d8384..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt +++ /dev/null @@ -1,14 +0,0 @@ -backports.functools_lru_cache==1.6.1; python_version == "2.7" -dateparser==0.7.* -pre-commit==1.21.*; python_version <= "3.5" -pre-commit==2.6.*; python_version >= "3.6" -pytest==4.6.*; python_version == "2.7" -pytest==6.0.*; python_version >= "3.5" -pytest-cov==2.10.* -pytest-mock==2.0.*; python_version == "2.7" -pytest-mock==3.2.*; python_version >= "3.5" -python-dateutil==2.8.* -pytz==2019.* -simplejson==3.17.* -sphinx==1.8.*; python_version == "2.7" -sphinx==3.2.*; python_version >= "3.5" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg b/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg deleted file mode 100644 index 2a9acf13da..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg +++ /dev/null @@ -1,2 +0,0 @@ -[bdist_wheel] -universal = 1 diff --git a/openpype/modules/ftrack/python2_vendor/arrow/setup.py b/openpype/modules/ftrack/python2_vendor/arrow/setup.py deleted file mode 100644 index dc4f0e77d5..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/setup.py +++ /dev/null @@ -1,50 +0,0 @@ -# -*- coding: utf-8 -*- -import io - -from setuptools import setup - -with io.open("README.rst", "r", encoding="utf-8") as f: - readme = f.read() - -about = {} -with io.open("arrow/_version.py", "r", encoding="utf-8") as f: - exec(f.read(), about) - -setup( - name="arrow", - version=about["__version__"], - description="Better dates & times for Python", - long_description=readme, - long_description_content_type="text/x-rst", - url="https://arrow.readthedocs.io", - author="Chris Smith", - author_email="crsmithdev@gmail.com", - license="Apache 2.0", - packages=["arrow"], - zip_safe=False, - python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", - install_requires=[ - "python-dateutil>=2.7.0", - "backports.functools_lru_cache>=1.2.1;python_version=='2.7'", - ], - classifiers=[ - "Development Status :: 4 - Beta", - "Intended Audience :: Developers", - "License :: OSI Approved :: Apache Software License", - "Topic :: Software Development :: Libraries :: Python Modules", - "Programming Language :: Python :: 2", - "Programming Language :: Python :: 2.7", - "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.5", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "Programming Language :: Python :: 3.9", - ], - keywords="arrow date time datetime timestamp timezone humanize", - project_urls={ - "Repository": "https://github.com/arrow-py/arrow", - "Bug Reports": "https://github.com/arrow-py/arrow/issues", - "Documentation": "https://arrow.readthedocs.io", - }, -) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py deleted file mode 100644 index 5bc8a4af2e..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py +++ /dev/null @@ -1,76 +0,0 @@ -# -*- coding: utf-8 -*- -from datetime import datetime - -import pytest -from dateutil import tz as dateutil_tz - -from arrow import arrow, factory, formatter, locales, parser - - -@pytest.fixture(scope="class") -def time_utcnow(request): - request.cls.arrow = 
arrow.Arrow.utcnow() - - -@pytest.fixture(scope="class") -def time_2013_01_01(request): - request.cls.now = arrow.Arrow.utcnow() - request.cls.arrow = arrow.Arrow(2013, 1, 1) - request.cls.datetime = datetime(2013, 1, 1) - - -@pytest.fixture(scope="class") -def time_2013_02_03(request): - request.cls.arrow = arrow.Arrow(2013, 2, 3, 12, 30, 45, 1) - - -@pytest.fixture(scope="class") -def time_2013_02_15(request): - request.cls.datetime = datetime(2013, 2, 15, 3, 41, 22, 8923) - request.cls.arrow = arrow.Arrow.fromdatetime(request.cls.datetime) - - -@pytest.fixture(scope="class") -def time_1975_12_25(request): - request.cls.datetime = datetime( - 1975, 12, 25, 14, 15, 16, tzinfo=dateutil_tz.gettz("America/New_York") - ) - request.cls.arrow = arrow.Arrow.fromdatetime(request.cls.datetime) - - -@pytest.fixture(scope="class") -def arrow_formatter(request): - request.cls.formatter = formatter.DateTimeFormatter() - - -@pytest.fixture(scope="class") -def arrow_factory(request): - request.cls.factory = factory.ArrowFactory() - - -@pytest.fixture(scope="class") -def lang_locales(request): - request.cls.locales = locales._locales - - -@pytest.fixture(scope="class") -def lang_locale(request): - # As locale test classes are prefixed with Test, we are dynamically getting the locale by the test class name. - # TestEnglishLocale -> EnglishLocale - name = request.cls.__name__[4:] - request.cls.locale = locales.get_locale_by_class_name(name) - - -@pytest.fixture(scope="class") -def dt_parser(request): - request.cls.parser = parser.DateTimeParser() - - -@pytest.fixture(scope="class") -def dt_parser_regex(request): - request.cls.format_regex = parser.DateTimeParser._FORMAT_RE - - -@pytest.fixture(scope="class") -def tzinfo_parser(request): - request.cls.parser = parser.TzinfoParser() diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py deleted file mode 100644 index 9b19a27cd9..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py +++ /dev/null @@ -1,28 +0,0 @@ -# -*- coding: utf-8 -*- -import arrow - - -class TestModule: - def test_get(self, mocker): - mocker.patch("arrow.api._factory.get", return_value="result") - - assert arrow.api.get() == "result" - - def test_utcnow(self, mocker): - mocker.patch("arrow.api._factory.utcnow", return_value="utcnow") - - assert arrow.api.utcnow() == "utcnow" - - def test_now(self, mocker): - mocker.patch("arrow.api._factory.now", tz="tz", return_value="now") - - assert arrow.api.now("tz") == "now" - - def test_factory(self): - class MockCustomArrowClass(arrow.Arrow): - pass - - result = arrow.api.factory(MockCustomArrowClass) - - assert isinstance(result, arrow.factory.ArrowFactory) - assert isinstance(result.utcnow(), MockCustomArrowClass) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py deleted file mode 100644 index b0bd20a5e3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py +++ /dev/null @@ -1,2150 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals - -import calendar -import pickle -import sys -import time -from datetime import date, datetime, timedelta - -import dateutil -import pytest -import pytz -import simplejson as json -from dateutil import tz -from dateutil.relativedelta import FR, MO, SA, SU, TH, TU, WE - -from arrow import arrow - -from .utils import assert_datetime_equality - - 
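All of the conftest fixtures above follow one pattern: a class-scoped fixture attaches prepared objects to ``request.cls``, and test classes opt in with ``pytest.mark.usefixtures`` so that every test method can reach those objects as ``self.<attr>`` (exactly what the test classes below do). A minimal, self-contained sketch of that mechanism, with hypothetical names:

.. code-block:: python

    import pytest

    @pytest.fixture(scope="class")
    def sample_value(request):
        # Class-scoped: runs once per test class and hangs the prepared
        # object off the class itself.
        request.cls.value = 42

    @pytest.mark.usefixtures("sample_value")
    class TestSample:
        def test_value_was_attached(self):
            # The fixture already ran for this class, so the attribute
            # is reachable through self.
            assert self.value == 42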
-class TestTestArrowInit: - def test_init_bad_input(self): - - with pytest.raises(TypeError): - arrow.Arrow(2013) - - with pytest.raises(TypeError): - arrow.Arrow(2013, 2) - - with pytest.raises(ValueError): - arrow.Arrow(2013, 2, 2, 12, 30, 45, 9999999) - - def test_init(self): - - result = arrow.Arrow(2013, 2, 2) - self.expected = datetime(2013, 2, 2, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12) - self.expected = datetime(2013, 2, 2, 12, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30) - self.expected = datetime(2013, 2, 2, 12, 30, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30, 45) - self.expected = datetime(2013, 2, 2, 12, 30, 45, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30, 45, 999999) - self.expected = datetime(2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - self.expected = datetime( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == self.expected - - # regression tests for issue #626 - def test_init_pytz_timezone(self): - - result = arrow.Arrow( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=pytz.timezone("Europe/Paris") - ) - self.expected = datetime( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == self.expected - assert_datetime_equality(result._datetime, self.expected, 1) - - def test_init_with_fold(self): - before = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm") - after = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm", fold=1) - - assert hasattr(before, "fold") - assert hasattr(after, "fold") - - # PEP-495 requires the comparisons below to be true - assert before == after - assert before.utcoffset() != after.utcoffset() - - -class TestTestArrowFactory: - def test_now(self): - - result = arrow.Arrow.now() - - assert_datetime_equality( - result._datetime, datetime.now().replace(tzinfo=tz.tzlocal()) - ) - - def test_utcnow(self): - - result = arrow.Arrow.utcnow() - - assert_datetime_equality( - result._datetime, datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - assert result.fold == 0 - - def test_fromtimestamp(self): - - timestamp = time.time() - - result = arrow.Arrow.fromtimestamp(timestamp) - assert_datetime_equality( - result._datetime, datetime.now().replace(tzinfo=tz.tzlocal()) - ) - - result = arrow.Arrow.fromtimestamp(timestamp, tzinfo=tz.gettz("Europe/Paris")) - assert_datetime_equality( - result._datetime, - datetime.fromtimestamp(timestamp, tz.gettz("Europe/Paris")), - ) - - result = arrow.Arrow.fromtimestamp(timestamp, tzinfo="Europe/Paris") - assert_datetime_equality( - result._datetime, - datetime.fromtimestamp(timestamp, tz.gettz("Europe/Paris")), - ) - - with pytest.raises(ValueError): - arrow.Arrow.fromtimestamp("invalid timestamp") - - def test_utcfromtimestamp(self): - - timestamp = time.time() - - result = arrow.Arrow.utcfromtimestamp(timestamp) - assert_datetime_equality( - result._datetime, datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - with pytest.raises(ValueError): - arrow.Arrow.utcfromtimestamp("invalid timestamp") - - def test_fromdatetime(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1) - - result = arrow.Arrow.fromdatetime(dt) - - assert 
result._datetime == dt.replace(tzinfo=tz.tzutc()) - - def test_fromdatetime_dt_tzinfo(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1, tzinfo=tz.gettz("US/Pacific")) - - result = arrow.Arrow.fromdatetime(dt) - - assert result._datetime == dt.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_fromdatetime_tzinfo_arg(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1) - - result = arrow.Arrow.fromdatetime(dt, tz.gettz("US/Pacific")) - - assert result._datetime == dt.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_fromdate(self): - - dt = date(2013, 2, 3) - - result = arrow.Arrow.fromdate(dt, tz.gettz("US/Pacific")) - - assert result._datetime == datetime(2013, 2, 3, tzinfo=tz.gettz("US/Pacific")) - - def test_strptime(self): - - formatted = datetime(2013, 2, 3, 12, 30, 45).strftime("%Y-%m-%d %H:%M:%S") - - result = arrow.Arrow.strptime(formatted, "%Y-%m-%d %H:%M:%S") - assert result._datetime == datetime(2013, 2, 3, 12, 30, 45, tzinfo=tz.tzutc()) - - result = arrow.Arrow.strptime( - formatted, "%Y-%m-%d %H:%M:%S", tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == datetime( - 2013, 2, 3, 12, 30, 45, tzinfo=tz.gettz("Europe/Paris") - ) - - -@pytest.mark.usefixtures("time_2013_02_03") -class TestTestArrowRepresentation: - def test_repr(self): - - result = self.arrow.__repr__() - - assert result == "<Arrow [{}]>".format(self.arrow._datetime.isoformat()) - - def test_str(self): - - result = self.arrow.__str__() - - assert result == self.arrow._datetime.isoformat() - - def test_hash(self): - - result = self.arrow.__hash__() - - assert result == self.arrow._datetime.__hash__() - - def test_format(self): - - result = "{:YYYY-MM-DD}".format(self.arrow) - - assert result == "2013-02-03" - - def test_bare_format(self): - - result = self.arrow.format() - - assert result == "2013-02-03 12:30:45+00:00" - - def test_format_no_format_string(self): - - result = "{}".format(self.arrow) - - assert result == str(self.arrow) - - def test_clone(self): - - result = self.arrow.clone() - - assert result is not self.arrow - assert result._datetime == self.arrow._datetime - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowAttribute: - def test_getattr_base(self): - - with pytest.raises(AttributeError): - self.arrow.prop - - def test_getattr_week(self): - - assert self.arrow.week == 1 - - def test_getattr_quarter(self): - # start dates - q1 = arrow.Arrow(2013, 1, 1) - q2 = arrow.Arrow(2013, 4, 1) - q3 = arrow.Arrow(2013, 8, 1) - q4 = arrow.Arrow(2013, 10, 1) - assert q1.quarter == 1 - assert q2.quarter == 2 - assert q3.quarter == 3 - assert q4.quarter == 4 - - # end dates - q1 = arrow.Arrow(2013, 3, 31) - q2 = arrow.Arrow(2013, 6, 30) - q3 = arrow.Arrow(2013, 9, 30) - q4 = arrow.Arrow(2013, 12, 31) - assert q1.quarter == 1 - assert q2.quarter == 2 - assert q3.quarter == 3 - assert q4.quarter == 4 - - def test_getattr_dt_value(self): - - assert self.arrow.year == 2013 - - def test_tzinfo(self): - - self.arrow.tzinfo = tz.gettz("PST") - assert self.arrow.tzinfo == tz.gettz("PST") - - def test_naive(self): - - assert self.arrow.naive == self.arrow._datetime.replace(tzinfo=None) - - def test_timestamp(self): - - assert self.arrow.timestamp == calendar.timegm( - self.arrow._datetime.utctimetuple() - ) - - with pytest.warns(DeprecationWarning): - self.arrow.timestamp - - def test_int_timestamp(self): - - assert self.arrow.int_timestamp == calendar.timegm( - self.arrow._datetime.utctimetuple() - ) - - def test_float_timestamp(self): - - result = self.arrow.float_timestamp - self.arrow.timestamp - - 
assert result == self.arrow.microsecond - - def test_getattr_fold(self): - - # UTC is always unambiguous - assert self.now.fold == 0 - - ambiguous_dt = arrow.Arrow( - 2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm", fold=1 - ) - assert ambiguous_dt.fold == 1 - - with pytest.raises(AttributeError): - ambiguous_dt.fold = 0 - - def test_getattr_ambiguous(self): - - assert not self.now.ambiguous - - ambiguous_dt = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm") - - assert ambiguous_dt.ambiguous - - def test_getattr_imaginary(self): - - assert not self.now.imaginary - - imaginary_dt = arrow.Arrow(2013, 3, 31, 2, 30, tzinfo="Europe/Paris") - - assert imaginary_dt.imaginary - - -@pytest.mark.usefixtures("time_utcnow") -class TestArrowComparison: - def test_eq(self): - - assert self.arrow == self.arrow - assert self.arrow == self.arrow.datetime - assert not (self.arrow == "abc") - - def test_ne(self): - - assert not (self.arrow != self.arrow) - assert not (self.arrow != self.arrow.datetime) - assert self.arrow != "abc" - - def test_gt(self): - - arrow_cmp = self.arrow.shift(minutes=1) - - assert not (self.arrow > self.arrow) - assert not (self.arrow > self.arrow.datetime) - - with pytest.raises(TypeError): - self.arrow > "abc" - - assert self.arrow < arrow_cmp - assert self.arrow < arrow_cmp.datetime - - def test_ge(self): - - with pytest.raises(TypeError): - self.arrow >= "abc" - - assert self.arrow >= self.arrow - assert self.arrow >= self.arrow.datetime - - def test_lt(self): - - arrow_cmp = self.arrow.shift(minutes=1) - - assert not (self.arrow < self.arrow) - assert not (self.arrow < self.arrow.datetime) - - with pytest.raises(TypeError): - self.arrow < "abc" - - assert self.arrow < arrow_cmp - assert self.arrow < arrow_cmp.datetime - - def test_le(self): - - with pytest.raises(TypeError): - self.arrow <= "abc" - - assert self.arrow <= self.arrow - assert self.arrow <= self.arrow.datetime - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowMath: - def test_add_timedelta(self): - - result = self.arrow.__add__(timedelta(days=1)) - - assert result._datetime == datetime(2013, 1, 2, tzinfo=tz.tzutc()) - - def test_add_other(self): - - with pytest.raises(TypeError): - self.arrow + 1 - - def test_radd(self): - - result = self.arrow.__radd__(timedelta(days=1)) - - assert result._datetime == datetime(2013, 1, 2, tzinfo=tz.tzutc()) - - def test_sub_timedelta(self): - - result = self.arrow.__sub__(timedelta(days=1)) - - assert result._datetime == datetime(2012, 12, 31, tzinfo=tz.tzutc()) - - def test_sub_datetime(self): - - result = self.arrow.__sub__(datetime(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=11) - - def test_sub_arrow(self): - - result = self.arrow.__sub__(arrow.Arrow(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=11) - - def test_sub_other(self): - - with pytest.raises(TypeError): - self.arrow - object() - - def test_rsub_datetime(self): - - result = self.arrow.__rsub__(datetime(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=-11) - - def test_rsub_other(self): - - with pytest.raises(TypeError): - timedelta(days=1) - self.arrow - - -@pytest.mark.usefixtures("time_utcnow") -class TestArrowDatetimeInterface: - def test_date(self): - - result = self.arrow.date() - - assert result == self.arrow._datetime.date() - - def test_time(self): - - result = self.arrow.time() - - assert result == self.arrow._datetime.time() - - def test_timetz(self): - - result = self.arrow.timetz() - - assert result == 
self.arrow._datetime.timetz() - - def test_astimezone(self): - - other_tz = tz.gettz("US/Pacific") - - result = self.arrow.astimezone(other_tz) - - assert result == self.arrow._datetime.astimezone(other_tz) - - def test_utcoffset(self): - - result = self.arrow.utcoffset() - - assert result == self.arrow._datetime.utcoffset() - - def test_dst(self): - - result = self.arrow.dst() - - assert result == self.arrow._datetime.dst() - - def test_timetuple(self): - - result = self.arrow.timetuple() - - assert result == self.arrow._datetime.timetuple() - - def test_utctimetuple(self): - - result = self.arrow.utctimetuple() - - assert result == self.arrow._datetime.utctimetuple() - - def test_toordinal(self): - - result = self.arrow.toordinal() - - assert result == self.arrow._datetime.toordinal() - - def test_weekday(self): - - result = self.arrow.weekday() - - assert result == self.arrow._datetime.weekday() - - def test_isoweekday(self): - - result = self.arrow.isoweekday() - - assert result == self.arrow._datetime.isoweekday() - - def test_isocalendar(self): - - result = self.arrow.isocalendar() - - assert result == self.arrow._datetime.isocalendar() - - def test_isoformat(self): - - result = self.arrow.isoformat() - - assert result == self.arrow._datetime.isoformat() - - def test_simplejson(self): - - result = json.dumps({"v": self.arrow.for_json()}, for_json=True) - - assert json.loads(result)["v"] == self.arrow._datetime.isoformat() - - def test_ctime(self): - - result = self.arrow.ctime() - - assert result == self.arrow._datetime.ctime() - - def test_strftime(self): - - result = self.arrow.strftime("%Y") - - assert result == self.arrow._datetime.strftime("%Y") - - -class TestArrowFalsePositiveDst: - """These tests relate to issues #376 and #551. - The key points in both issues are that arrow will assign a UTC timezone if none is provided and - .to() will change other attributes to be correct whereas .replace() only changes the specified attribute. 
- - Issue 376 - >>> arrow.get('2016-11-06').to('America/New_York').ceil('day') - < Arrow [2016-11-05T23:59:59.999999-04:00] > - - Issue 551 - >>> just_before = arrow.get('2018-11-04T01:59:59.999999') - >>> just_before - 2018-11-04T01:59:59.999999+00:00 - >>> just_after = just_before.shift(microseconds=1) - >>> just_after - 2018-11-04T02:00:00+00:00 - >>> just_before_eastern = just_before.replace(tzinfo='US/Eastern') - >>> just_before_eastern - 2018-11-04T01:59:59.999999-04:00 - >>> just_after_eastern = just_after.replace(tzinfo='US/Eastern') - >>> just_after_eastern - 2018-11-04T02:00:00-05:00 - """ - - def test_dst(self): - self.before_1 = arrow.Arrow( - 2016, 11, 6, 3, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_2 = arrow.Arrow(2016, 11, 6, tzinfo=tz.gettz("America/New_York")) - self.after_1 = arrow.Arrow(2016, 11, 6, 4, tzinfo=tz.gettz("America/New_York")) - self.after_2 = arrow.Arrow( - 2016, 11, 6, 23, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_3 = arrow.Arrow( - 2018, 11, 4, 3, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_4 = arrow.Arrow(2018, 11, 4, tzinfo=tz.gettz("America/New_York")) - self.after_3 = arrow.Arrow(2018, 11, 4, 4, tzinfo=tz.gettz("America/New_York")) - self.after_4 = arrow.Arrow( - 2018, 11, 4, 23, 59, tzinfo=tz.gettz("America/New_York") - ) - assert self.before_1.day == self.before_2.day - assert self.after_1.day == self.after_2.day - assert self.before_3.day == self.before_4.day - assert self.after_3.day == self.after_4.day - - -class TestArrowConversion: - def test_to(self): - - dt_from = datetime.now() - arrow_from = arrow.Arrow.fromdatetime(dt_from, tz.gettz("US/Pacific")) - - self.expected = dt_from.replace(tzinfo=tz.gettz("US/Pacific")).astimezone( - tz.tzutc() - ) - - assert arrow_from.to("UTC").datetime == self.expected - assert arrow_from.to(tz.tzutc()).datetime == self.expected - - # issue #368 - def test_to_pacific_then_utc(self): - result = arrow.Arrow(2018, 11, 4, 1, tzinfo="-08:00").to("US/Pacific").to("UTC") - assert result == arrow.Arrow(2018, 11, 4, 9) - - # issue #368 - def test_to_amsterdam_then_utc(self): - result = arrow.Arrow(2016, 10, 30).to("Europe/Amsterdam") - assert result.utcoffset() == timedelta(seconds=7200) - - # regression test for #690 - def test_to_israel_same_offset(self): - - result = arrow.Arrow(2019, 10, 27, 2, 21, 1, tzinfo="+03:00").to("Israel") - expected = arrow.Arrow(2019, 10, 27, 1, 21, 1, tzinfo="Israel") - - assert result == expected - assert result.utcoffset() != expected.utcoffset() - - # issue 315 - def test_anchorage_dst(self): - before = arrow.Arrow(2016, 3, 13, 1, 59, tzinfo="America/Anchorage") - after = arrow.Arrow(2016, 3, 13, 2, 1, tzinfo="America/Anchorage") - - assert before.utcoffset() != after.utcoffset() - - # issue 476 - def test_chicago_fall(self): - - result = arrow.Arrow(2017, 11, 5, 2, 1, tzinfo="-05:00").to("America/Chicago") - expected = arrow.Arrow(2017, 11, 5, 1, 1, tzinfo="America/Chicago") - - assert result == expected - assert result.utcoffset() != expected.utcoffset() - - def test_toronto_gap(self): - - before = arrow.Arrow(2011, 3, 13, 6, 30, tzinfo="UTC").to("America/Toronto") - after = arrow.Arrow(2011, 3, 13, 7, 30, tzinfo="UTC").to("America/Toronto") - - assert before.datetime.replace(tzinfo=None) == datetime(2011, 3, 13, 1, 30) - assert after.datetime.replace(tzinfo=None) == datetime(2011, 3, 13, 3, 30) - - assert before.utcoffset() != after.utcoffset() - - def test_sydney_gap(self): - - before = arrow.Arrow(2012, 10, 6, 15, 30, 
tzinfo="UTC").to("Australia/Sydney") - after = arrow.Arrow(2012, 10, 6, 16, 30, tzinfo="UTC").to("Australia/Sydney") - - assert before.datetime.replace(tzinfo=None) == datetime(2012, 10, 7, 1, 30) - assert after.datetime.replace(tzinfo=None) == datetime(2012, 10, 7, 3, 30) - - assert before.utcoffset() != after.utcoffset() - - -class TestArrowPickling: - def test_pickle_and_unpickle(self): - - dt = arrow.Arrow.utcnow() - - pickled = pickle.dumps(dt) - - unpickled = pickle.loads(pickled) - - assert unpickled == dt - - -class TestArrowReplace: - def test_not_attr(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(abc=1) - - def test_replace(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.replace(year=2012) == arrow.Arrow(2012, 5, 5, 12, 30, 45) - assert arw.replace(month=1) == arrow.Arrow(2013, 1, 5, 12, 30, 45) - assert arw.replace(day=1) == arrow.Arrow(2013, 5, 1, 12, 30, 45) - assert arw.replace(hour=1) == arrow.Arrow(2013, 5, 5, 1, 30, 45) - assert arw.replace(minute=1) == arrow.Arrow(2013, 5, 5, 12, 1, 45) - assert arw.replace(second=1) == arrow.Arrow(2013, 5, 5, 12, 30, 1) - - def test_replace_tzinfo(self): - - arw = arrow.Arrow.utcnow().to("US/Eastern") - - result = arw.replace(tzinfo=tz.gettz("US/Pacific")) - - assert result == arw.datetime.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_replace_fold(self): - - before = arrow.Arrow(2017, 11, 5, 1, tzinfo="America/New_York") - after = before.replace(fold=1) - - assert before.fold == 0 - assert after.fold == 1 - assert before == after - assert before.utcoffset() != after.utcoffset() - - def test_replace_fold_and_other(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.replace(fold=1, minute=50) == arrow.Arrow(2013, 5, 5, 12, 50, 45) - assert arw.replace(minute=50, fold=1) == arrow.Arrow(2013, 5, 5, 12, 50, 45) - - def test_replace_week(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(week=1) - - def test_replace_quarter(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(quarter=1) - - def test_replace_quarter_and_fold(self): - with pytest.raises(AttributeError): - arrow.utcnow().replace(fold=1, quarter=1) - - with pytest.raises(AttributeError): - arrow.utcnow().replace(quarter=1, fold=1) - - def test_replace_other_kwargs(self): - - with pytest.raises(AttributeError): - arrow.utcnow().replace(abc="def") - - -class TestArrowShift: - def test_not_attr(self): - - now = arrow.Arrow.utcnow() - - with pytest.raises(AttributeError): - now.shift(abc=1) - - with pytest.raises(AttributeError): - now.shift(week=1) - - def test_shift(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.shift(years=1) == arrow.Arrow(2014, 5, 5, 12, 30, 45) - assert arw.shift(quarters=1) == arrow.Arrow(2013, 8, 5, 12, 30, 45) - assert arw.shift(quarters=1, months=1) == arrow.Arrow(2013, 9, 5, 12, 30, 45) - assert arw.shift(months=1) == arrow.Arrow(2013, 6, 5, 12, 30, 45) - assert arw.shift(weeks=1) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - assert arw.shift(days=1) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(hours=1) == arrow.Arrow(2013, 5, 5, 13, 30, 45) - assert arw.shift(minutes=1) == arrow.Arrow(2013, 5, 5, 12, 31, 45) - assert arw.shift(seconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 46) - assert arw.shift(microseconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 45, 1) - - # Remember: Python's weekday 0 is Monday - assert arw.shift(weekday=0) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=1) == 
arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=2) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=3) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=4) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=5) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=6) == arw - - with pytest.raises(IndexError): - arw.shift(weekday=7) - - # Use dateutil.relativedelta's convenient day instances - assert arw.shift(weekday=MO) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(0)) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(1)) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(2)) == arrow.Arrow(2013, 5, 13, 12, 30, 45) - assert arw.shift(weekday=TU) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(0)) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(1)) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(2)) == arrow.Arrow(2013, 5, 14, 12, 30, 45) - assert arw.shift(weekday=WE) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(0)) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(1)) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(2)) == arrow.Arrow(2013, 5, 15, 12, 30, 45) - assert arw.shift(weekday=TH) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(0)) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(1)) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(2)) == arrow.Arrow(2013, 5, 16, 12, 30, 45) - assert arw.shift(weekday=FR) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(0)) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(1)) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(2)) == arrow.Arrow(2013, 5, 17, 12, 30, 45) - assert arw.shift(weekday=SA) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(0)) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(1)) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(2)) == arrow.Arrow(2013, 5, 18, 12, 30, 45) - assert arw.shift(weekday=SU) == arw - assert arw.shift(weekday=SU(0)) == arw - assert arw.shift(weekday=SU(1)) == arw - assert arw.shift(weekday=SU(2)) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - - def test_shift_negative(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.shift(years=-1) == arrow.Arrow(2012, 5, 5, 12, 30, 45) - assert arw.shift(quarters=-1) == arrow.Arrow(2013, 2, 5, 12, 30, 45) - assert arw.shift(quarters=-1, months=-1) == arrow.Arrow(2013, 1, 5, 12, 30, 45) - assert arw.shift(months=-1) == arrow.Arrow(2013, 4, 5, 12, 30, 45) - assert arw.shift(weeks=-1) == arrow.Arrow(2013, 4, 28, 12, 30, 45) - assert arw.shift(days=-1) == arrow.Arrow(2013, 5, 4, 12, 30, 45) - assert arw.shift(hours=-1) == arrow.Arrow(2013, 5, 5, 11, 30, 45) - assert arw.shift(minutes=-1) == arrow.Arrow(2013, 5, 5, 12, 29, 45) - assert arw.shift(seconds=-1) == arrow.Arrow(2013, 5, 5, 12, 30, 44) - assert arw.shift(microseconds=-1) == arrow.Arrow(2013, 5, 5, 12, 30, 44, 999999) - - # Not sure how practical these negative weekdays are - assert arw.shift(weekday=-1) == arw.shift(weekday=SU) - assert arw.shift(weekday=-2) == arw.shift(weekday=SA) - assert arw.shift(weekday=-3) == arw.shift(weekday=FR) - assert arw.shift(weekday=-4) == arw.shift(weekday=TH) - assert arw.shift(weekday=-5) == arw.shift(weekday=WE) - 
assert arw.shift(weekday=-6) == arw.shift(weekday=TU) - assert arw.shift(weekday=-7) == arw.shift(weekday=MO) - - with pytest.raises(IndexError): - arw.shift(weekday=-8) - - assert arw.shift(weekday=MO(-1)) == arrow.Arrow(2013, 4, 29, 12, 30, 45) - assert arw.shift(weekday=TU(-1)) == arrow.Arrow(2013, 4, 30, 12, 30, 45) - assert arw.shift(weekday=WE(-1)) == arrow.Arrow(2013, 5, 1, 12, 30, 45) - assert arw.shift(weekday=TH(-1)) == arrow.Arrow(2013, 5, 2, 12, 30, 45) - assert arw.shift(weekday=FR(-1)) == arrow.Arrow(2013, 5, 3, 12, 30, 45) - assert arw.shift(weekday=SA(-1)) == arrow.Arrow(2013, 5, 4, 12, 30, 45) - assert arw.shift(weekday=SU(-1)) == arw - assert arw.shift(weekday=SU(-2)) == arrow.Arrow(2013, 4, 28, 12, 30, 45) - - def test_shift_quarters_bug(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - # The value of the last-read argument was used instead of the ``quarters`` argument. - # Recall that the keyword argument dict, like all dicts, is unordered, so only certain - # combinations of arguments would exhibit this. - assert arw.shift(quarters=0, years=1) == arrow.Arrow(2014, 5, 5, 12, 30, 45) - assert arw.shift(quarters=0, months=1) == arrow.Arrow(2013, 6, 5, 12, 30, 45) - assert arw.shift(quarters=0, weeks=1) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - assert arw.shift(quarters=0, days=1) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(quarters=0, hours=1) == arrow.Arrow(2013, 5, 5, 13, 30, 45) - assert arw.shift(quarters=0, minutes=1) == arrow.Arrow(2013, 5, 5, 12, 31, 45) - assert arw.shift(quarters=0, seconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 46) - assert arw.shift(quarters=0, microseconds=1) == arrow.Arrow( - 2013, 5, 5, 12, 30, 45, 1 - ) - - def test_shift_positive_imaginary(self): - - # Avoid shifting into imaginary datetimes, take into account DST and other timezone changes. - - new_york = arrow.Arrow(2017, 3, 12, 1, 30, tzinfo="America/New_York") - assert new_york.shift(hours=+1) == arrow.Arrow( - 2017, 3, 12, 3, 30, tzinfo="America/New_York" - ) - - # pendulum example - paris = arrow.Arrow(2013, 3, 31, 1, 50, tzinfo="Europe/Paris") - assert paris.shift(minutes=+20) == arrow.Arrow( - 2013, 3, 31, 3, 10, tzinfo="Europe/Paris" - ) - - canberra = arrow.Arrow(2018, 10, 7, 1, 30, tzinfo="Australia/Canberra") - assert canberra.shift(hours=+1) == arrow.Arrow( - 2018, 10, 7, 3, 30, tzinfo="Australia/Canberra" - ) - - kiev = arrow.Arrow(2018, 3, 25, 2, 30, tzinfo="Europe/Kiev") - assert kiev.shift(hours=+1) == arrow.Arrow( - 2018, 3, 25, 4, 30, tzinfo="Europe/Kiev" - ) - - # Edge case, the entire day of 2011-12-30 is imaginary in this zone! 
- apia = arrow.Arrow(2011, 12, 29, 23, tzinfo="Pacific/Apia") - assert apia.shift(hours=+2) == arrow.Arrow( - 2011, 12, 31, 1, tzinfo="Pacific/Apia" - ) - - def test_shift_negative_imaginary(self): - - new_york = arrow.Arrow(2011, 3, 13, 3, 30, tzinfo="America/New_York") - assert new_york.shift(hours=-1) == arrow.Arrow( - 2011, 3, 13, 3, 30, tzinfo="America/New_York" - ) - assert new_york.shift(hours=-2) == arrow.Arrow( - 2011, 3, 13, 1, 30, tzinfo="America/New_York" - ) - - london = arrow.Arrow(2019, 3, 31, 2, tzinfo="Europe/London") - assert london.shift(hours=-1) == arrow.Arrow( - 2019, 3, 31, 2, tzinfo="Europe/London" - ) - assert london.shift(hours=-2) == arrow.Arrow( - 2019, 3, 31, 0, tzinfo="Europe/London" - ) - - # edge case, crossing the international dateline - apia = arrow.Arrow(2011, 12, 31, 1, tzinfo="Pacific/Apia") - assert apia.shift(hours=-2) == arrow.Arrow( - 2011, 12, 31, 23, tzinfo="Pacific/Apia" - ) - - @pytest.mark.skipif( - dateutil.__version__ < "2.7.1", reason="old tz database (2018d needed)" - ) - def test_shift_kiritimati(self): - # corrected 2018d tz database release, will fail in earlier versions - - kiritimati = arrow.Arrow(1994, 12, 30, 12, 30, tzinfo="Pacific/Kiritimati") - assert kiritimati.shift(days=+1) == arrow.Arrow( - 1995, 1, 1, 12, 30, tzinfo="Pacific/Kiritimati" - ) - - @pytest.mark.skipif( - sys.version_info < (3, 6), reason="unsupported before python 3.6" - ) - def shift_imaginary_seconds(self): - # offset has a seconds component - monrovia = arrow.Arrow(1972, 1, 6, 23, tzinfo="Africa/Monrovia") - assert monrovia.shift(hours=+1, minutes=+30) == arrow.Arrow( - 1972, 1, 7, 1, 14, 30, tzinfo="Africa/Monrovia" - ) - - -class TestArrowRange: - def test_year(self): - - result = list( - arrow.Arrow.range( - "year", datetime(2013, 1, 2, 3, 4, 5), datetime(2016, 4, 5, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2014, 1, 2, 3, 4, 5), - arrow.Arrow(2015, 1, 2, 3, 4, 5), - arrow.Arrow(2016, 1, 2, 3, 4, 5), - ] - - def test_quarter(self): - - result = list( - arrow.Arrow.range( - "quarter", datetime(2013, 2, 3, 4, 5, 6), datetime(2013, 5, 6, 7, 8, 9) - ) - ) - - assert result == [ - arrow.Arrow(2013, 2, 3, 4, 5, 6), - arrow.Arrow(2013, 5, 3, 4, 5, 6), - ] - - def test_month(self): - - result = list( - arrow.Arrow.range( - "month", datetime(2013, 2, 3, 4, 5, 6), datetime(2013, 5, 6, 7, 8, 9) - ) - ) - - assert result == [ - arrow.Arrow(2013, 2, 3, 4, 5, 6), - arrow.Arrow(2013, 3, 3, 4, 5, 6), - arrow.Arrow(2013, 4, 3, 4, 5, 6), - arrow.Arrow(2013, 5, 3, 4, 5, 6), - ] - - def test_week(self): - - result = list( - arrow.Arrow.range( - "week", datetime(2013, 9, 1, 2, 3, 4), datetime(2013, 10, 1, 2, 3, 4) - ) - ) - - assert result == [ - arrow.Arrow(2013, 9, 1, 2, 3, 4), - arrow.Arrow(2013, 9, 8, 2, 3, 4), - arrow.Arrow(2013, 9, 15, 2, 3, 4), - arrow.Arrow(2013, 9, 22, 2, 3, 4), - arrow.Arrow(2013, 9, 29, 2, 3, 4), - ] - - def test_day(self): - - result = list( - arrow.Arrow.range( - "day", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 5, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 3, 3, 4, 5), - arrow.Arrow(2013, 1, 4, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 3, 4, 5), - ] - - def test_hour(self): - - result = list( - arrow.Arrow.range( - "hour", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 4, 4, 5), - arrow.Arrow(2013, 1, 2, 5, 4, 5), - arrow.Arrow(2013, 1, 2, 
6, 4, 5), - ] - - result = list( - arrow.Arrow.range( - "hour", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 4, 5) - ) - ) - - assert result == [arrow.Arrow(2013, 1, 2, 3, 4, 5)] - - def test_minute(self): - - result = list( - arrow.Arrow.range( - "minute", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 3, 5, 5), - arrow.Arrow(2013, 1, 2, 3, 6, 5), - arrow.Arrow(2013, 1, 2, 3, 7, 5), - ] - - def test_second(self): - - result = list( - arrow.Arrow.range( - "second", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 4, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 3, 4, 6), - arrow.Arrow(2013, 1, 2, 3, 4, 7), - arrow.Arrow(2013, 1, 2, 3, 4, 8), - ] - - def test_arrow(self): - - result = list( - arrow.Arrow.range( - "day", - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 6, 7, 8), - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 3, 3, 4, 5), - arrow.Arrow(2013, 1, 4, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 3, 4, 5), - ] - - def test_naive_tz(self): - - result = arrow.Arrow.range( - "year", datetime(2013, 1, 2, 3), datetime(2016, 4, 5, 6), "US/Pacific" - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Pacific") - - def test_aware_same_tz(self): - - result = arrow.Arrow.range( - "day", - arrow.Arrow(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")), - arrow.Arrow(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Pacific") - - def test_aware_different_tz(self): - - result = arrow.Arrow.range( - "day", - datetime(2013, 1, 1, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Eastern") - - def test_aware_tz(self): - - result = arrow.Arrow.range( - "day", - datetime(2013, 1, 1, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - tz=tz.gettz("US/Central"), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Central") - - def test_imaginary(self): - # issue #72, avoid duplication in utc column - - before = arrow.Arrow(2018, 3, 10, 23, tzinfo="US/Pacific") - after = arrow.Arrow(2018, 3, 11, 4, tzinfo="US/Pacific") - - pacific_range = [t for t in arrow.Arrow.range("hour", before, after)] - utc_range = [t.to("utc") for t in arrow.Arrow.range("hour", before, after)] - - assert len(pacific_range) == len(set(pacific_range)) - assert len(utc_range) == len(set(utc_range)) - - def test_unsupported(self): - - with pytest.raises(AttributeError): - next(arrow.Arrow.range("abc", datetime.utcnow(), datetime.utcnow())) - - def test_range_over_months_ending_on_different_days(self): - # regression test for issue #842 - result = list(arrow.Arrow.range("month", datetime(2015, 1, 31), limit=4)) - assert result == [ - arrow.Arrow(2015, 1, 31), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 31), - arrow.Arrow(2015, 4, 30), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 1, 30), limit=3)) - assert result == [ - arrow.Arrow(2015, 1, 30), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 30), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 2, 28), limit=3)) - assert result == [ - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 28), - arrow.Arrow(2015, 4, 28), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 3, 31), limit=3)) - assert result == [ - arrow.Arrow(2015, 
3, 31), - arrow.Arrow(2015, 4, 30), - arrow.Arrow(2015, 5, 31), - ] - - def test_range_over_quarter_months_ending_on_different_days(self): - result = list(arrow.Arrow.range("quarter", datetime(2014, 11, 30), limit=3)) - assert result == [ - arrow.Arrow(2014, 11, 30), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 5, 30), - ] - - def test_range_over_year_maintains_end_date_across_leap_year(self): - result = list(arrow.Arrow.range("year", datetime(2012, 2, 29), limit=5)) - assert result == [ - arrow.Arrow(2012, 2, 29), - arrow.Arrow(2013, 2, 28), - arrow.Arrow(2014, 2, 28), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2016, 2, 29), - ] - - -class TestArrowSpanRange: - def test_year(self): - - result = list( - arrow.Arrow.span_range("year", datetime(2013, 2, 1), datetime(2016, 3, 31)) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1), - arrow.Arrow(2013, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2014, 1, 1), - arrow.Arrow(2014, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2015, 1, 1), - arrow.Arrow(2015, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2016, 1, 1), - arrow.Arrow(2016, 12, 31, 23, 59, 59, 999999), - ), - ] - - def test_quarter(self): - - result = list( - arrow.Arrow.span_range( - "quarter", datetime(2013, 2, 2), datetime(2013, 5, 15) - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 3, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 6, 30, 23, 59, 59, 999999)), - ] - - def test_month(self): - - result = list( - arrow.Arrow.span_range("month", datetime(2013, 1, 2), datetime(2013, 4, 15)) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 1, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 2, 1), arrow.Arrow(2013, 2, 28, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 3, 1), arrow.Arrow(2013, 3, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 4, 30, 23, 59, 59, 999999)), - ] - - def test_week(self): - - result = list( - arrow.Arrow.span_range("week", datetime(2013, 2, 2), datetime(2013, 2, 28)) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 28), arrow.Arrow(2013, 2, 3, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 2, 4), arrow.Arrow(2013, 2, 10, 23, 59, 59, 999999)), - ( - arrow.Arrow(2013, 2, 11), - arrow.Arrow(2013, 2, 17, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 2, 18), - arrow.Arrow(2013, 2, 24, 23, 59, 59, 999999), - ), - (arrow.Arrow(2013, 2, 25), arrow.Arrow(2013, 3, 3, 23, 59, 59, 999999)), - ] - - def test_day(self): - - result = list( - arrow.Arrow.span_range( - "day", datetime(2013, 1, 1, 12), datetime(2013, 1, 4, 12) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 2, 0), - arrow.Arrow(2013, 1, 2, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 3, 0), - arrow.Arrow(2013, 1, 3, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 4, 0), - arrow.Arrow(2013, 1, 4, 23, 59, 59, 999999), - ), - ] - - def test_days(self): - - result = list( - arrow.Arrow.span_range( - "days", datetime(2013, 1, 1, 12), datetime(2013, 1, 4, 12) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 2, 0), - arrow.Arrow(2013, 1, 2, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 3, 0), - arrow.Arrow(2013, 1, 3, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 4, 0), - arrow.Arrow(2013, 1, 4, 23, 59, 59, 999999), - ), - ] - - def test_hour(self): - - result = list( - 
arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 0, 30), datetime(2013, 1, 1, 3, 30) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 0, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 1), - arrow.Arrow(2013, 1, 1, 1, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 2), - arrow.Arrow(2013, 1, 1, 2, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 3), - arrow.Arrow(2013, 1, 1, 3, 59, 59, 999999), - ), - ] - - result = list( - arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 3, 30), datetime(2013, 1, 1, 3, 30) - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1, 3), arrow.Arrow(2013, 1, 1, 3, 59, 59, 999999)) - ] - - def test_minute(self): - - result = list( - arrow.Arrow.span_range( - "minute", datetime(2013, 1, 1, 0, 0, 30), datetime(2013, 1, 1, 0, 3, 30) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0, 0), - arrow.Arrow(2013, 1, 1, 0, 0, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 1), - arrow.Arrow(2013, 1, 1, 0, 1, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 2), - arrow.Arrow(2013, 1, 1, 0, 2, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 3), - arrow.Arrow(2013, 1, 1, 0, 3, 59, 999999), - ), - ] - - def test_second(self): - - result = list( - arrow.Arrow.span_range( - "second", datetime(2013, 1, 1), datetime(2013, 1, 1, 0, 0, 3) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0, 0, 0), - arrow.Arrow(2013, 1, 1, 0, 0, 0, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 1), - arrow.Arrow(2013, 1, 1, 0, 0, 1, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 2), - arrow.Arrow(2013, 1, 1, 0, 0, 2, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 3), - arrow.Arrow(2013, 1, 1, 0, 0, 3, 999999), - ), - ] - - def test_naive_tz(self): - - tzinfo = tz.gettz("US/Pacific") - - result = arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 0), datetime(2013, 1, 1, 3, 59), "US/Pacific" - ) - - for f, c in result: - assert f.tzinfo == tzinfo - assert c.tzinfo == tzinfo - - def test_aware_same_tz(self): - - tzinfo = tz.gettz("US/Pacific") - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tzinfo), - datetime(2013, 1, 1, 2, 59, tzinfo=tzinfo), - ) - - for f, c in result: - assert f.tzinfo == tzinfo - assert c.tzinfo == tzinfo - - def test_aware_different_tz(self): - - tzinfo1 = tz.gettz("US/Pacific") - tzinfo2 = tz.gettz("US/Eastern") - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tzinfo1), - datetime(2013, 1, 1, 2, 59, tzinfo=tzinfo2), - ) - - for f, c in result: - assert f.tzinfo == tzinfo1 - assert c.tzinfo == tzinfo1 - - def test_aware_tz(self): - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 1, 2, 59, tzinfo=tz.gettz("US/Eastern")), - tz="US/Central", - ) - - for f, c in result: - assert f.tzinfo == tz.gettz("US/Central") - assert c.tzinfo == tz.gettz("US/Central") - - def test_bounds_param_is_passed(self): - - result = list( - arrow.Arrow.span_range( - "quarter", datetime(2013, 2, 2), datetime(2013, 5, 15), bounds="[]" - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 4, 1)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 7, 1)), - ] - - -class TestArrowInterval: - def test_incorrect_input(self): - with pytest.raises(ValueError): - list( - arrow.Arrow.interval( - "month", datetime(2013, 1, 2), datetime(2013, 4, 15), 0 - ) - ) - - def test_correct(self): - result = list( - arrow.Arrow.interval( - "hour", datetime(2013, 
5, 5, 12, 30), datetime(2013, 5, 5, 17, 15), 2 - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 5, 5, 12), - arrow.Arrow(2013, 5, 5, 13, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 5, 5, 14), - arrow.Arrow(2013, 5, 5, 15, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 5, 5, 16), - arrow.Arrow(2013, 5, 5, 17, 59, 59, 999999), - ), - ] - - def test_bounds_param_is_passed(self): - result = list( - arrow.Arrow.interval( - "hour", - datetime(2013, 5, 5, 12, 30), - datetime(2013, 5, 5, 17, 15), - 2, - bounds="[]", - ) - ) - - assert result == [ - (arrow.Arrow(2013, 5, 5, 12), arrow.Arrow(2013, 5, 5, 14)), - (arrow.Arrow(2013, 5, 5, 14), arrow.Arrow(2013, 5, 5, 16)), - (arrow.Arrow(2013, 5, 5, 16), arrow.Arrow(2013, 5, 5, 18)), - ] - - -@pytest.mark.usefixtures("time_2013_02_15") -class TestArrowSpan: - def test_span_attribute(self): - - with pytest.raises(AttributeError): - self.arrow.span("span") - - def test_span_year(self): - - floor, ceil = self.arrow.span("year") - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 12, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_quarter(self): - - floor, ceil = self.arrow.span("quarter") - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 3, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_quarter_count(self): - - floor, ceil = self.arrow.span("quarter", 2) - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 6, 30, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_year_count(self): - - floor, ceil = self.arrow.span("year", 2) - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2014, 12, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_month(self): - - floor, ceil = self.arrow.span("month") - - assert floor == datetime(2013, 2, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 28, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_week(self): - - floor, ceil = self.arrow.span("week") - - assert floor == datetime(2013, 2, 11, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 17, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_day(self): - - floor, ceil = self.arrow.span("day") - - assert floor == datetime(2013, 2, 15, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_hour(self): - - floor, ceil = self.arrow.span("hour") - - assert floor == datetime(2013, 2, 15, 3, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_minute(self): - - floor, ceil = self.arrow.span("minute") - - assert floor == datetime(2013, 2, 15, 3, 41, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_second(self): - - floor, ceil = self.arrow.span("second") - - assert floor == datetime(2013, 2, 15, 3, 41, 22, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 22, 999999, tzinfo=tz.tzutc()) - - def test_span_microsecond(self): - - floor, ceil = self.arrow.span("microsecond") - - assert floor == datetime(2013, 2, 15, 3, 41, 22, 8923, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 22, 8923, tzinfo=tz.tzutc()) - - def test_floor(self): - - floor, ceil = self.arrow.span("month") - - assert floor == self.arrow.floor("month") - assert ceil == self.arrow.ceil("month") - - def test_span_inclusive_inclusive(self): - - floor, ceil = 
self.arrow.span("hour", bounds="[]") - - assert floor == datetime(2013, 2, 15, 3, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 4, tzinfo=tz.tzutc()) - - def test_span_exclusive_inclusive(self): - - floor, ceil = self.arrow.span("hour", bounds="(]") - - assert floor == datetime(2013, 2, 15, 3, 0, 0, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 4, tzinfo=tz.tzutc()) - - def test_span_exclusive_exclusive(self): - - floor, ceil = self.arrow.span("hour", bounds="()") - - assert floor == datetime(2013, 2, 15, 3, 0, 0, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_bounds_are_validated(self): - - with pytest.raises(ValueError): - floor, ceil = self.arrow.span("hour", bounds="][") - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowHumanize: - def test_granularity(self): - - assert self.now.humanize(granularity="second") == "just now" - - later1 = self.now.shift(seconds=1) - assert self.now.humanize(later1, granularity="second") == "just now" - assert later1.humanize(self.now, granularity="second") == "just now" - assert self.now.humanize(later1, granularity="minute") == "0 minutes ago" - assert later1.humanize(self.now, granularity="minute") == "in 0 minutes" - - later100 = self.now.shift(seconds=100) - assert self.now.humanize(later100, granularity="second") == "100 seconds ago" - assert later100.humanize(self.now, granularity="second") == "in 100 seconds" - assert self.now.humanize(later100, granularity="minute") == "a minute ago" - assert later100.humanize(self.now, granularity="minute") == "in a minute" - assert self.now.humanize(later100, granularity="hour") == "0 hours ago" - assert later100.humanize(self.now, granularity="hour") == "in 0 hours" - - later4000 = self.now.shift(seconds=4000) - assert self.now.humanize(later4000, granularity="minute") == "66 minutes ago" - assert later4000.humanize(self.now, granularity="minute") == "in 66 minutes" - assert self.now.humanize(later4000, granularity="hour") == "an hour ago" - assert later4000.humanize(self.now, granularity="hour") == "in an hour" - assert self.now.humanize(later4000, granularity="day") == "0 days ago" - assert later4000.humanize(self.now, granularity="day") == "in 0 days" - - later105 = self.now.shift(seconds=10 ** 5) - assert self.now.humanize(later105, granularity="hour") == "27 hours ago" - assert later105.humanize(self.now, granularity="hour") == "in 27 hours" - assert self.now.humanize(later105, granularity="day") == "a day ago" - assert later105.humanize(self.now, granularity="day") == "in a day" - assert self.now.humanize(later105, granularity="week") == "0 weeks ago" - assert later105.humanize(self.now, granularity="week") == "in 0 weeks" - assert self.now.humanize(later105, granularity="month") == "0 months ago" - assert later105.humanize(self.now, granularity="month") == "in 0 months" - assert self.now.humanize(later105, granularity=["month"]) == "0 months ago" - assert later105.humanize(self.now, granularity=["month"]) == "in 0 months" - - later106 = self.now.shift(seconds=3 * 10 ** 6) - assert self.now.humanize(later106, granularity="day") == "34 days ago" - assert later106.humanize(self.now, granularity="day") == "in 34 days" - assert self.now.humanize(later106, granularity="week") == "4 weeks ago" - assert later106.humanize(self.now, granularity="week") == "in 4 weeks" - assert self.now.humanize(later106, granularity="month") == "a month ago" - assert later106.humanize(self.now, granularity="month") == "in a 
month" - assert self.now.humanize(later106, granularity="year") == "0 years ago" - assert later106.humanize(self.now, granularity="year") == "in 0 years" - - later506 = self.now.shift(seconds=50 * 10 ** 6) - assert self.now.humanize(later506, granularity="week") == "82 weeks ago" - assert later506.humanize(self.now, granularity="week") == "in 82 weeks" - assert self.now.humanize(later506, granularity="month") == "18 months ago" - assert later506.humanize(self.now, granularity="month") == "in 18 months" - assert self.now.humanize(later506, granularity="year") == "a year ago" - assert later506.humanize(self.now, granularity="year") == "in a year" - - later108 = self.now.shift(seconds=10 ** 8) - assert self.now.humanize(later108, granularity="year") == "3 years ago" - assert later108.humanize(self.now, granularity="year") == "in 3 years" - - later108onlydistance = self.now.shift(seconds=10 ** 8) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity="year" - ) - == "3 years" - ) - assert ( - later108onlydistance.humanize( - self.now, only_distance=True, granularity="year" - ) - == "3 years" - ) - - with pytest.raises(AttributeError): - self.now.humanize(later108, granularity="years") - - def test_multiple_granularity(self): - assert self.now.humanize(granularity="second") == "just now" - assert self.now.humanize(granularity=["second"]) == "just now" - assert ( - self.now.humanize(granularity=["year", "month", "day", "hour", "second"]) - == "in 0 years 0 months 0 days 0 hours and 0 seconds" - ) - - later4000 = self.now.shift(seconds=4000) - assert ( - later4000.humanize(self.now, granularity=["hour", "minute"]) - == "in an hour and 6 minutes" - ) - assert ( - self.now.humanize(later4000, granularity=["hour", "minute"]) - == "an hour and 6 minutes ago" - ) - assert ( - later4000.humanize( - self.now, granularity=["hour", "minute"], only_distance=True - ) - == "an hour and 6 minutes" - ) - assert ( - later4000.humanize(self.now, granularity=["day", "hour", "minute"]) - == "in 0 days an hour and 6 minutes" - ) - assert ( - self.now.humanize(later4000, granularity=["day", "hour", "minute"]) - == "0 days an hour and 6 minutes ago" - ) - - later105 = self.now.shift(seconds=10 ** 5) - assert ( - self.now.humanize(later105, granularity=["hour", "day", "minute"]) - == "a day 3 hours and 46 minutes ago" - ) - with pytest.raises(AttributeError): - self.now.humanize(later105, granularity=["error", "second"]) - - later108onlydistance = self.now.shift(seconds=10 ** 8) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["year"] - ) - == "3 years" - ) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["month", "week"] - ) - == "37 months and 4 weeks" - ) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["year", "second"] - ) - == "3 years and 5327200 seconds" - ) - - one_min_one_sec_ago = self.now.shift(minutes=-1, seconds=-1) - assert ( - one_min_one_sec_ago.humanize(self.now, granularity=["minute", "second"]) - == "a minute and a second ago" - ) - - one_min_two_secs_ago = self.now.shift(minutes=-1, seconds=-2) - assert ( - one_min_two_secs_ago.humanize(self.now, granularity=["minute", "second"]) - == "a minute and 2 seconds ago" - ) - - def test_seconds(self): - - later = self.now.shift(seconds=10) - - # regression test for issue #727 - assert self.now.humanize(later) == "10 seconds ago" - assert later.humanize(self.now) == "in 10 seconds" - - assert 
self.now.humanize(later, only_distance=True) == "10 seconds" - assert later.humanize(self.now, only_distance=True) == "10 seconds" - - def test_minute(self): - - later = self.now.shift(minutes=1) - - assert self.now.humanize(later) == "a minute ago" - assert later.humanize(self.now) == "in a minute" - - assert self.now.humanize(later, only_distance=True) == "a minute" - assert later.humanize(self.now, only_distance=True) == "a minute" - - def test_minutes(self): - - later = self.now.shift(minutes=2) - - assert self.now.humanize(later) == "2 minutes ago" - assert later.humanize(self.now) == "in 2 minutes" - - assert self.now.humanize(later, only_distance=True) == "2 minutes" - assert later.humanize(self.now, only_distance=True) == "2 minutes" - - def test_hour(self): - - later = self.now.shift(hours=1) - - assert self.now.humanize(later) == "an hour ago" - assert later.humanize(self.now) == "in an hour" - - assert self.now.humanize(later, only_distance=True) == "an hour" - assert later.humanize(self.now, only_distance=True) == "an hour" - - def test_hours(self): - - later = self.now.shift(hours=2) - - assert self.now.humanize(later) == "2 hours ago" - assert later.humanize(self.now) == "in 2 hours" - - assert self.now.humanize(later, only_distance=True) == "2 hours" - assert later.humanize(self.now, only_distance=True) == "2 hours" - - def test_day(self): - - later = self.now.shift(days=1) - - assert self.now.humanize(later) == "a day ago" - assert later.humanize(self.now) == "in a day" - - # regression test for issue #697 - less_than_48_hours = self.now.shift( - days=1, hours=23, seconds=59, microseconds=999999 - ) - assert self.now.humanize(less_than_48_hours) == "a day ago" - assert less_than_48_hours.humanize(self.now) == "in a day" - - less_than_48_hours_date = less_than_48_hours._datetime.date() - with pytest.raises(TypeError): - # humanize other argument does not take raw datetime.date objects - self.now.humanize(less_than_48_hours_date) - - # convert from date to arrow object - less_than_48_hours_date = arrow.Arrow.fromdate(less_than_48_hours_date) - assert self.now.humanize(less_than_48_hours_date) == "a day ago" - assert less_than_48_hours_date.humanize(self.now) == "in a day" - - assert self.now.humanize(later, only_distance=True) == "a day" - assert later.humanize(self.now, only_distance=True) == "a day" - - def test_days(self): - - later = self.now.shift(days=2) - - assert self.now.humanize(later) == "2 days ago" - assert later.humanize(self.now) == "in 2 days" - - assert self.now.humanize(later, only_distance=True) == "2 days" - assert later.humanize(self.now, only_distance=True) == "2 days" - - # Regression tests for humanize bug referenced in issue 541 - later = self.now.shift(days=3) - assert later.humanize(self.now) == "in 3 days" - - later = self.now.shift(days=3, seconds=1) - assert later.humanize(self.now) == "in 3 days" - - later = self.now.shift(days=4) - assert later.humanize(self.now) == "in 4 days" - - def test_week(self): - - later = self.now.shift(weeks=1) - - assert self.now.humanize(later) == "a week ago" - assert later.humanize(self.now) == "in a week" - - assert self.now.humanize(later, only_distance=True) == "a week" - assert later.humanize(self.now, only_distance=True) == "a week" - - def test_weeks(self): - - later = self.now.shift(weeks=2) - - assert self.now.humanize(later) == "2 weeks ago" - assert later.humanize(self.now) == "in 2 weeks" - - assert self.now.humanize(later, only_distance=True) == "2 weeks" - assert later.humanize(self.now, 
only_distance=True) == "2 weeks" - - def test_month(self): - - later = self.now.shift(months=1) - - assert self.now.humanize(later) == "a month ago" - assert later.humanize(self.now) == "in a month" - - assert self.now.humanize(later, only_distance=True) == "a month" - assert later.humanize(self.now, only_distance=True) == "a month" - - def test_months(self): - - later = self.now.shift(months=2) - earlier = self.now.shift(months=-2) - - assert earlier.humanize(self.now) == "2 months ago" - assert later.humanize(self.now) == "in 2 months" - - assert self.now.humanize(later, only_distance=True) == "2 months" - assert later.humanize(self.now, only_distance=True) == "2 months" - - def test_year(self): - - later = self.now.shift(years=1) - - assert self.now.humanize(later) == "a year ago" - assert later.humanize(self.now) == "in a year" - - assert self.now.humanize(later, only_distance=True) == "a year" - assert later.humanize(self.now, only_distance=True) == "a year" - - def test_years(self): - - later = self.now.shift(years=2) - - assert self.now.humanize(later) == "2 years ago" - assert later.humanize(self.now) == "in 2 years" - - assert self.now.humanize(later, only_distance=True) == "2 years" - assert later.humanize(self.now, only_distance=True) == "2 years" - - arw = arrow.Arrow(2014, 7, 2) - - result = arw.humanize(self.datetime) - - assert result == "in 2 years" - - def test_arrow(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - result = arw.humanize(arrow.Arrow.fromdatetime(self.datetime)) - - assert result == "just now" - - def test_datetime_tzinfo(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - result = arw.humanize(self.datetime.replace(tzinfo=tz.tzutc())) - - assert result == "just now" - - def test_other(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - with pytest.raises(TypeError): - arw.humanize(object()) - - def test_invalid_locale(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - with pytest.raises(ValueError): - arw.humanize(locale="klingon") - - def test_none(self): - - arw = arrow.Arrow.utcnow() - - result = arw.humanize() - - assert result == "just now" - - result = arw.humanize(None) - - assert result == "just now" - - def test_untranslated_granularity(self, mocker): - - arw = arrow.Arrow.utcnow() - later = arw.shift(weeks=1) - - # simulate an untranslated timeframe key - mocker.patch.dict("arrow.locales.EnglishLocale.timeframes") - del arrow.locales.EnglishLocale.timeframes["week"] - with pytest.raises(ValueError): - arw.humanize(later, granularity="week") - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowHumanizeTestsWithLocale: - def test_now(self): - - arw = arrow.Arrow(2013, 1, 1, 0, 0, 0) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "сейчас" - - def test_seconds(self): - arw = arrow.Arrow(2013, 1, 1, 0, 0, 44) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "через 44 несколько секунд" - - def test_years(self): - - arw = arrow.Arrow(2011, 7, 2) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "2 года назад" - - -class TestArrowIsBetween: - def test_start_before_end(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - result = target.is_between(start, end) - assert not result - - def test_exclusive_exclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 27)) - start = 
arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 10)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 36)) - result = target.is_between(start, end, "()") - assert result - result = target.is_between(start, end) - assert result - - def test_exclusive_exclusive_bounds_same_date(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "()") - assert not result - - def test_inclusive_exclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 6)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 4)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 6)) - result = target.is_between(start, end, "[)") - assert not result - - def test_exclusive_inclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "(]") - assert result - - def test_inclusive_inclusive_bounds_same_date(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "[]") - assert result - - def test_type_error_exception(self): - with pytest.raises(TypeError): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = datetime(2013, 5, 5) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - target.is_between(start, end) - - with pytest.raises(TypeError): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = datetime(2013, 5, 8) - target.is_between(start, end) - - with pytest.raises(TypeError): - target.is_between(None, None) - - def test_value_error_exception(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - with pytest.raises(ValueError): - target.is_between(start, end, "][") - with pytest.raises(ValueError): - target.is_between(start, end, "") - with pytest.raises(ValueError): - target.is_between(start, end, "]") - with pytest.raises(ValueError): - target.is_between(start, end, "[") - with pytest.raises(ValueError): - target.is_between(start, end, "hello") - - -class TestArrowUtil: - def test_get_datetime(self): - - get_datetime = arrow.Arrow._get_datetime - - arw = arrow.Arrow.utcnow() - dt = datetime.utcnow() - timestamp = time.time() - - assert get_datetime(arw) == arw.datetime - assert get_datetime(dt) == dt - assert ( - get_datetime(timestamp) == arrow.Arrow.utcfromtimestamp(timestamp).datetime - ) - - with pytest.raises(ValueError) as raise_ctx: - get_datetime("abc") - assert "not recognized as a datetime or timestamp" in str(raise_ctx.value) - - def test_get_tzinfo(self): - - get_tzinfo = arrow.Arrow._get_tzinfo - - with pytest.raises(ValueError) as raise_ctx: - get_tzinfo("abc") - assert "not recognized as a timezone" in str(raise_ctx.value) - - def test_get_iteration_params(self): - - assert arrow.Arrow._get_iteration_params("end", None) == ("end", sys.maxsize) - assert arrow.Arrow._get_iteration_params(None, 100) == (arrow.Arrow.max, 100) - assert arrow.Arrow._get_iteration_params(100, 120) == (100, 120) - - with pytest.raises(ValueError): - arrow.Arrow._get_iteration_params(None, None) 
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py deleted file mode 100644 index 2b8df5168f..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py +++ /dev/null @@ -1,390 +0,0 @@ -# -*- coding: utf-8 -*- -import time -from datetime import date, datetime - -import pytest -from dateutil import tz - -from arrow.parser import ParserError - -from .utils import assert_datetime_equality - - -@pytest.mark.usefixtures("arrow_factory") -class TestGet: - def test_no_args(self): - - assert_datetime_equality( - self.factory.get(), datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - def test_timestamp_one_arg_no_arg(self): - - no_arg = self.factory.get(1406430900).timestamp - one_arg = self.factory.get("1406430900", "X").timestamp - - assert no_arg == one_arg - - def test_one_arg_none(self): - - assert_datetime_equality( - self.factory.get(None), datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - def test_struct_time(self): - - assert_datetime_equality( - self.factory.get(time.gmtime()), - datetime.utcnow().replace(tzinfo=tz.tzutc()), - ) - - def test_one_arg_timestamp(self): - - int_timestamp = int(time.time()) - timestamp_dt = datetime.utcfromtimestamp(int_timestamp).replace( - tzinfo=tz.tzutc() - ) - - assert self.factory.get(int_timestamp) == timestamp_dt - - with pytest.raises(ParserError): - self.factory.get(str(int_timestamp)) - - float_timestamp = time.time() - timestamp_dt = datetime.utcfromtimestamp(float_timestamp).replace( - tzinfo=tz.tzutc() - ) - - assert self.factory.get(float_timestamp) == timestamp_dt - - with pytest.raises(ParserError): - self.factory.get(str(float_timestamp)) - - # Regression test for issue #216 - # Python 3 raises OverflowError, Python 2 raises ValueError - timestamp = 99999999999999999999999999.99999999999999999999999999 - with pytest.raises((OverflowError, ValueError)): - self.factory.get(timestamp) - - def test_one_arg_expanded_timestamp(self): - - millisecond_timestamp = 1591328104308 - microsecond_timestamp = 1591328104308505 - - # Regression test for issue #796 - assert self.factory.get(millisecond_timestamp) == datetime.utcfromtimestamp( - 1591328104.308 - ).replace(tzinfo=tz.tzutc()) - assert self.factory.get(microsecond_timestamp) == datetime.utcfromtimestamp( - 1591328104.308505 - ).replace(tzinfo=tz.tzutc()) - - def test_one_arg_timestamp_with_tzinfo(self): - - timestamp = time.time() - timestamp_dt = datetime.fromtimestamp(timestamp, tz=tz.tzutc()).astimezone( - tz.gettz("US/Pacific") - ) - timezone = tz.gettz("US/Pacific") - - assert_datetime_equality( - self.factory.get(timestamp, tzinfo=timezone), timestamp_dt - ) - - def test_one_arg_arrow(self): - - arw = self.factory.utcnow() - result = self.factory.get(arw) - - assert arw == result - - def test_one_arg_datetime(self): - - dt = datetime.utcnow().replace(tzinfo=tz.tzutc()) - - assert self.factory.get(dt) == dt - - def test_one_arg_date(self): - - d = date.today() - dt = datetime(d.year, d.month, d.day, tzinfo=tz.tzutc()) - - assert self.factory.get(d) == dt - - def test_one_arg_tzinfo(self): - - self.expected = ( - datetime.utcnow() - .replace(tzinfo=tz.tzutc()) - .astimezone(tz.gettz("US/Pacific")) - ) - - assert_datetime_equality( - self.factory.get(tz.gettz("US/Pacific")), self.expected - ) - - # regression test for issue #658 - def test_one_arg_dateparser_datetime(self): - dateparser = pytest.importorskip("dateparser") - expected = datetime(1990, 1, 
1).replace(tzinfo=tz.tzutc()) - # dateparser outputs: datetime.datetime(1990, 1, 1, 0, 0, tzinfo=) - parsed_date = dateparser.parse("1990-01-01T00:00:00+00:00") - dt_output = self.factory.get(parsed_date)._datetime.replace(tzinfo=tz.tzutc()) - assert dt_output == expected - - def test_kwarg_tzinfo(self): - - self.expected = ( - datetime.utcnow() - .replace(tzinfo=tz.tzutc()) - .astimezone(tz.gettz("US/Pacific")) - ) - - assert_datetime_equality( - self.factory.get(tzinfo=tz.gettz("US/Pacific")), self.expected - ) - - def test_kwarg_tzinfo_string(self): - - self.expected = ( - datetime.utcnow() - .replace(tzinfo=tz.tzutc()) - .astimezone(tz.gettz("US/Pacific")) - ) - - assert_datetime_equality(self.factory.get(tzinfo="US/Pacific"), self.expected) - - with pytest.raises(ParserError): - self.factory.get(tzinfo="US/PacificInvalidTzinfo") - - def test_kwarg_normalize_whitespace(self): - result = self.factory.get( - "Jun 1 2005 1:33PM", - "MMM D YYYY H:mmA", - tzinfo=tz.tzutc(), - normalize_whitespace=True, - ) - assert result._datetime == datetime(2005, 6, 1, 13, 33, tzinfo=tz.tzutc()) - - result = self.factory.get( - "\t 2013-05-05T12:30:45.123456 \t \n", - tzinfo=tz.tzutc(), - normalize_whitespace=True, - ) - assert result._datetime == datetime( - 2013, 5, 5, 12, 30, 45, 123456, tzinfo=tz.tzutc() - ) - - def test_one_arg_iso_str(self): - - dt = datetime.utcnow() - - assert_datetime_equality( - self.factory.get(dt.isoformat()), dt.replace(tzinfo=tz.tzutc()) - ) - - def test_one_arg_iso_calendar(self): - - pairs = [ - (datetime(2004, 1, 4), (2004, 1, 7)), - (datetime(2008, 12, 30), (2009, 1, 2)), - (datetime(2010, 1, 2), (2009, 53, 6)), - (datetime(2000, 2, 29), (2000, 9, 2)), - (datetime(2005, 1, 1), (2004, 53, 6)), - (datetime(2010, 1, 4), (2010, 1, 1)), - (datetime(2010, 1, 3), (2009, 53, 7)), - (datetime(2003, 12, 29), (2004, 1, 1)), - ] - - for pair in pairs: - dt, iso = pair - assert self.factory.get(iso) == self.factory.get(dt) - - with pytest.raises(TypeError): - self.factory.get((2014, 7, 1, 4)) - - with pytest.raises(TypeError): - self.factory.get((2014, 7)) - - with pytest.raises(ValueError): - self.factory.get((2014, 70, 1)) - - with pytest.raises(ValueError): - self.factory.get((2014, 7, 10)) - - def test_one_arg_other(self): - - with pytest.raises(TypeError): - self.factory.get(object()) - - def test_one_arg_bool(self): - - with pytest.raises(TypeError): - self.factory.get(False) - - with pytest.raises(TypeError): - self.factory.get(True) - - def test_two_args_datetime_tzinfo(self): - - result = self.factory.get(datetime(2013, 1, 1), tz.gettz("US/Pacific")) - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")) - - def test_two_args_datetime_tz_str(self): - - result = self.factory.get(datetime(2013, 1, 1), "US/Pacific") - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")) - - def test_two_args_date_tzinfo(self): - - result = self.factory.get(date(2013, 1, 1), tz.gettz("US/Pacific")) - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")) - - def test_two_args_date_tz_str(self): - - result = self.factory.get(date(2013, 1, 1), "US/Pacific") - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")) - - def test_two_args_datetime_other(self): - - with pytest.raises(TypeError): - self.factory.get(datetime.utcnow(), object()) - - def test_two_args_date_other(self): - - with pytest.raises(TypeError): - self.factory.get(date.today(), object()) - - def test_two_args_str_str(self): 
- - result = self.factory.get("2013-01-01", "YYYY-MM-DD") - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - - def test_two_args_str_tzinfo(self): - - result = self.factory.get("2013-01-01", tzinfo=tz.gettz("US/Pacific")) - - assert_datetime_equality( - result._datetime, datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")) - ) - - def test_two_args_twitter_format(self): - - # format returned by twitter API for created_at: - twitter_date = "Fri Apr 08 21:08:54 +0000 2016" - result = self.factory.get(twitter_date, "ddd MMM DD HH:mm:ss Z YYYY") - - assert result._datetime == datetime(2016, 4, 8, 21, 8, 54, tzinfo=tz.tzutc()) - - def test_two_args_str_list(self): - - result = self.factory.get("2013-01-01", ["MM/DD/YYYY", "YYYY-MM-DD"]) - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - - def test_two_args_unicode_unicode(self): - - result = self.factory.get(u"2013-01-01", u"YYYY-MM-DD") - - assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - - def test_two_args_other(self): - - with pytest.raises(TypeError): - self.factory.get(object(), object()) - - def test_three_args_with_tzinfo(self): - - timefmt = "YYYYMMDD" - d = "20150514" - - assert self.factory.get(d, timefmt, tzinfo=tz.tzlocal()) == datetime( - 2015, 5, 14, tzinfo=tz.tzlocal() - ) - - def test_three_args(self): - - assert self.factory.get(2013, 1, 1) == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - - def test_full_kwargs(self): - - assert ( - self.factory.get( - year=2016, - month=7, - day=14, - hour=7, - minute=16, - second=45, - microsecond=631092, - ) - == datetime(2016, 7, 14, 7, 16, 45, 631092, tzinfo=tz.tzutc()) - ) - - def test_three_kwargs(self): - - assert self.factory.get(year=2016, month=7, day=14) == datetime( - 2016, 7, 14, 0, 0, tzinfo=tz.tzutc() - ) - - def test_tzinfo_string_kwargs(self): - result = self.factory.get("2019072807", "YYYYMMDDHH", tzinfo="UTC") - assert result._datetime == datetime(2019, 7, 28, 7, 0, 0, 0, tzinfo=tz.tzutc()) - - def test_insufficient_kwargs(self): - - with pytest.raises(TypeError): - self.factory.get(year=2016) - - with pytest.raises(TypeError): - self.factory.get(year=2016, month=7) - - def test_locale(self): - result = self.factory.get("2010", "YYYY", locale="ja") - assert result._datetime == datetime(2010, 1, 1, 0, 0, 0, 0, tzinfo=tz.tzutc()) - - # regression test for issue #701 - result = self.factory.get( - "Montag, 9. September 2019, 16:15-20:00", "dddd, D. 
MMMM YYYY", locale="de" - ) - assert result._datetime == datetime(2019, 9, 9, 0, 0, 0, 0, tzinfo=tz.tzutc()) - - def test_locale_kwarg_only(self): - res = self.factory.get(locale="ja") - assert res.tzinfo == tz.tzutc() - - def test_locale_with_tzinfo(self): - res = self.factory.get(locale="ja", tzinfo=tz.gettz("Asia/Tokyo")) - assert res.tzinfo == tz.gettz("Asia/Tokyo") - - -@pytest.mark.usefixtures("arrow_factory") -class TestUtcNow: - def test_utcnow(self): - - assert_datetime_equality( - self.factory.utcnow()._datetime, - datetime.utcnow().replace(tzinfo=tz.tzutc()), - ) - - -@pytest.mark.usefixtures("arrow_factory") -class TestNow: - def test_no_tz(self): - - assert_datetime_equality(self.factory.now(), datetime.now(tz.tzlocal())) - - def test_tzinfo(self): - - assert_datetime_equality( - self.factory.now(tz.gettz("EST")), datetime.now(tz.gettz("EST")) - ) - - def test_tz_str(self): - - assert_datetime_equality(self.factory.now("EST"), datetime.now(tz.gettz("EST"))) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py deleted file mode 100644 index e97aeb5dcc..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py +++ /dev/null @@ -1,282 +0,0 @@ -# -*- coding: utf-8 -*- -from datetime import datetime - -import pytest -import pytz -from dateutil import tz as dateutil_tz - -from arrow import ( - FORMAT_ATOM, - FORMAT_COOKIE, - FORMAT_RFC822, - FORMAT_RFC850, - FORMAT_RFC1036, - FORMAT_RFC1123, - FORMAT_RFC2822, - FORMAT_RFC3339, - FORMAT_RSS, - FORMAT_W3C, -) - -from .utils import make_full_tz_list - - -@pytest.mark.usefixtures("arrow_formatter") -class TestFormatterFormatToken: - def test_format(self): - - dt = datetime(2013, 2, 5, 12, 32, 51) - - result = self.formatter.format(dt, "MM-DD-YYYY hh:mm:ss a") - - assert result == "02-05-2013 12:32:51 pm" - - def test_year(self): - - dt = datetime(2013, 1, 1) - assert self.formatter._format_token(dt, "YYYY") == "2013" - assert self.formatter._format_token(dt, "YY") == "13" - - def test_month(self): - - dt = datetime(2013, 1, 1) - assert self.formatter._format_token(dt, "MMMM") == "January" - assert self.formatter._format_token(dt, "MMM") == "Jan" - assert self.formatter._format_token(dt, "MM") == "01" - assert self.formatter._format_token(dt, "M") == "1" - - def test_day(self): - - dt = datetime(2013, 2, 1) - assert self.formatter._format_token(dt, "DDDD") == "032" - assert self.formatter._format_token(dt, "DDD") == "32" - assert self.formatter._format_token(dt, "DD") == "01" - assert self.formatter._format_token(dt, "D") == "1" - assert self.formatter._format_token(dt, "Do") == "1st" - - assert self.formatter._format_token(dt, "dddd") == "Friday" - assert self.formatter._format_token(dt, "ddd") == "Fri" - assert self.formatter._format_token(dt, "d") == "5" - - def test_hour(self): - - dt = datetime(2013, 1, 1, 2) - assert self.formatter._format_token(dt, "HH") == "02" - assert self.formatter._format_token(dt, "H") == "2" - - dt = datetime(2013, 1, 1, 13) - assert self.formatter._format_token(dt, "HH") == "13" - assert self.formatter._format_token(dt, "H") == "13" - - dt = datetime(2013, 1, 1, 2) - assert self.formatter._format_token(dt, "hh") == "02" - assert self.formatter._format_token(dt, "h") == "2" - - dt = datetime(2013, 1, 1, 13) - assert self.formatter._format_token(dt, "hh") == "01" - assert self.formatter._format_token(dt, "h") == "1" - - # test that 12-hour time converts to '12' at midnight - dt = 
datetime(2013, 1, 1, 0) - assert self.formatter._format_token(dt, "hh") == "12" - assert self.formatter._format_token(dt, "h") == "12" - - def test_minute(self): - - dt = datetime(2013, 1, 1, 0, 1) - assert self.formatter._format_token(dt, "mm") == "01" - assert self.formatter._format_token(dt, "m") == "1" - - def test_second(self): - - dt = datetime(2013, 1, 1, 0, 0, 1) - assert self.formatter._format_token(dt, "ss") == "01" - assert self.formatter._format_token(dt, "s") == "1" - - def test_sub_second(self): - - dt = datetime(2013, 1, 1, 0, 0, 0, 123456) - assert self.formatter._format_token(dt, "SSSSSS") == "123456" - assert self.formatter._format_token(dt, "SSSSS") == "12345" - assert self.formatter._format_token(dt, "SSSS") == "1234" - assert self.formatter._format_token(dt, "SSS") == "123" - assert self.formatter._format_token(dt, "SS") == "12" - assert self.formatter._format_token(dt, "S") == "1" - - dt = datetime(2013, 1, 1, 0, 0, 0, 2000) - assert self.formatter._format_token(dt, "SSSSSS") == "002000" - assert self.formatter._format_token(dt, "SSSSS") == "00200" - assert self.formatter._format_token(dt, "SSSS") == "0020" - assert self.formatter._format_token(dt, "SSS") == "002" - assert self.formatter._format_token(dt, "SS") == "00" - assert self.formatter._format_token(dt, "S") == "0" - - def test_timestamp(self): - - timestamp = 1588437009.8952794 - dt = datetime.utcfromtimestamp(timestamp) - expected = str(int(timestamp)) - assert self.formatter._format_token(dt, "X") == expected - - # Must round because time.time() may return a float with greater - # than 6 digits of precision - expected = str(int(timestamp * 1000000)) - assert self.formatter._format_token(dt, "x") == expected - - def test_timezone(self): - - dt = datetime.utcnow().replace(tzinfo=dateutil_tz.gettz("US/Pacific")) - - result = self.formatter._format_token(dt, "ZZ") - assert result == "-07:00" or result == "-08:00" - - result = self.formatter._format_token(dt, "Z") - assert result == "-0700" or result == "-0800" - - @pytest.mark.parametrize("full_tz_name", make_full_tz_list()) - def test_timezone_formatter(self, full_tz_name): - - # This test will fail if we use "now" as date as soon as we change from/to DST - dt = datetime(1986, 2, 14, tzinfo=pytz.timezone("UTC")).replace( - tzinfo=dateutil_tz.gettz(full_tz_name) - ) - abbreviation = dt.tzname() - - result = self.formatter._format_token(dt, "ZZZ") - assert result == abbreviation - - def test_am_pm(self): - - dt = datetime(2012, 1, 1, 11) - assert self.formatter._format_token(dt, "a") == "am" - assert self.formatter._format_token(dt, "A") == "AM" - - dt = datetime(2012, 1, 1, 13) - assert self.formatter._format_token(dt, "a") == "pm" - assert self.formatter._format_token(dt, "A") == "PM" - - def test_week(self): - dt = datetime(2017, 5, 19) - assert self.formatter._format_token(dt, "W") == "2017-W20-5" - - # make sure week is zero padded when needed - dt_early = datetime(2011, 1, 20) - assert self.formatter._format_token(dt_early, "W") == "2011-W03-4" - - def test_nonsense(self): - dt = datetime(2012, 1, 1, 11) - assert self.formatter._format_token(dt, None) is None - assert self.formatter._format_token(dt, "NONSENSE") is None - - def test_escape(self): - - assert ( - self.formatter.format( - datetime(2015, 12, 10, 17, 9), "MMMM D, YYYY [at] h:mma" - ) - == "December 10, 2015 at 5:09pm" - ) - - assert ( - self.formatter.format( - datetime(2015, 12, 10, 17, 9), "[MMMM] M D, YYYY [at] h:mma" - ) - == "MMMM 12 10, 2015 at 5:09pm" - ) - - assert ( - 
self.formatter.format( - datetime(1990, 11, 25), - "[It happened on] MMMM Do [in the year] YYYY [a long time ago]", - ) - == "It happened on November 25th in the year 1990 a long time ago" - ) - - assert ( - self.formatter.format( - datetime(1990, 11, 25), - "[It happened on] MMMM Do [in the][ year] YYYY [a long time ago]", - ) - == "It happened on November 25th in the year 1990 a long time ago" - ) - - assert ( - self.formatter.format( - datetime(1, 1, 1), "[I'm][ entirely][ escaped,][ weee!]" - ) - == "I'm entirely escaped, weee!" - ) - - # Special RegEx characters - assert ( - self.formatter.format( - datetime(2017, 12, 31, 2, 0), "MMM DD, YYYY |^${}().*+?<>-& h:mm A" - ) - == "Dec 31, 2017 |^${}().*+?<>-& 2:00 AM" - ) - - # Escaping is atomic: brackets inside brackets are treated literally - assert self.formatter.format(datetime(1, 1, 1), "[[[ ]]") == "[[ ]" - - -@pytest.mark.usefixtures("arrow_formatter", "time_1975_12_25") -class TestFormatterBuiltinFormats: - def test_atom(self): - assert ( - self.formatter.format(self.datetime, FORMAT_ATOM) - == "1975-12-25 14:15:16-05:00" - ) - - def test_cookie(self): - assert ( - self.formatter.format(self.datetime, FORMAT_COOKIE) - == "Thursday, 25-Dec-1975 14:15:16 EST" - ) - - def test_rfc_822(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC822) - == "Thu, 25 Dec 75 14:15:16 -0500" - ) - - def test_rfc_850(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC850) - == "Thursday, 25-Dec-75 14:15:16 EST" - ) - - def test_rfc_1036(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC1036) - == "Thu, 25 Dec 75 14:15:16 -0500" - ) - - def test_rfc_1123(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC1123) - == "Thu, 25 Dec 1975 14:15:16 -0500" - ) - - def test_rfc_2822(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC2822) - == "Thu, 25 Dec 1975 14:15:16 -0500" - ) - - def test_rfc3339(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RFC3339) - == "1975-12-25 14:15:16-05:00" - ) - - def test_rss(self): - assert ( - self.formatter.format(self.datetime, FORMAT_RSS) - == "Thu, 25 Dec 1975 14:15:16 -0500" - ) - - def test_w3c(self): - assert ( - self.formatter.format(self.datetime, FORMAT_W3C) - == "1975-12-25 14:15:16-05:00" - ) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py deleted file mode 100644 index 006ccdd5ba..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py +++ /dev/null @@ -1,1352 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -import pytest - -from arrow import arrow, locales - - -@pytest.mark.usefixtures("lang_locales") -class TestLocaleValidation: - """Validate locales to ensure that translations are valid and complete""" - - def test_locale_validation(self): - - for _, locale_cls in self.locales.items(): - # 7 days + 1 spacer to allow for 1-indexing of months - assert len(locale_cls.day_names) == 8 - assert locale_cls.day_names[0] == "" - # ensure that all string from index 1 onward are valid (not blank or None) - assert all(locale_cls.day_names[1:]) - - assert len(locale_cls.day_abbreviations) == 8 - assert locale_cls.day_abbreviations[0] == "" - assert all(locale_cls.day_abbreviations[1:]) - - # 12 months + 1 spacer to allow for 1-indexing of months - assert len(locale_cls.month_names) == 13 - assert locale_cls.month_names[0] == "" - assert all(locale_cls.month_names[1:]) - - 
assert len(locale_cls.month_abbreviations) == 13 - assert locale_cls.month_abbreviations[0] == "" - assert all(locale_cls.month_abbreviations[1:]) - - assert len(locale_cls.names) > 0 - assert locale_cls.past is not None - assert locale_cls.future is not None - - -class TestModule: - def test_get_locale(self, mocker): - mock_locale = mocker.Mock() - mock_locale_cls = mocker.Mock() - mock_locale_cls.return_value = mock_locale - - with pytest.raises(ValueError): - arrow.locales.get_locale("locale_name") - - cls_dict = arrow.locales._locales - mocker.patch.dict(cls_dict, {"locale_name": mock_locale_cls}) - - result = arrow.locales.get_locale("locale_name") - - assert result == mock_locale - - def test_get_locale_by_class_name(self, mocker): - mock_locale_cls = mocker.Mock() - mock_locale_obj = mock_locale_cls.return_value = mocker.Mock() - - globals_fn = mocker.Mock() - globals_fn.return_value = {"NonExistentLocale": mock_locale_cls} - - with pytest.raises(ValueError): - arrow.locales.get_locale_by_class_name("NonExistentLocale") - - mocker.patch.object(locales, "globals", globals_fn) - result = arrow.locales.get_locale_by_class_name("NonExistentLocale") - - mock_locale_cls.assert_called_once_with() - assert result == mock_locale_obj - - def test_locales(self): - - assert len(locales._locales) > 0 - - -@pytest.mark.usefixtures("lang_locale") -class TestEnglishLocale: - def test_describe(self): - assert self.locale.describe("now", only_distance=True) == "instantly" - assert self.locale.describe("now", only_distance=False) == "just now" - - def test_format_timeframe(self): - - assert self.locale._format_timeframe("hours", 2) == "2 hours" - assert self.locale._format_timeframe("hour", 0) == "an hour" - - def test_format_relative_now(self): - - result = self.locale._format_relative("just now", "now", 0) - - assert result == "just now" - - def test_format_relative_past(self): - - result = self.locale._format_relative("an hour", "hour", 1) - - assert result == "in an hour" - - def test_format_relative_future(self): - - result = self.locale._format_relative("an hour", "hour", -1) - - assert result == "an hour ago" - - def test_ordinal_number(self): - assert self.locale.ordinal_number(0) == "0th" - assert self.locale.ordinal_number(1) == "1st" - assert self.locale.ordinal_number(2) == "2nd" - assert self.locale.ordinal_number(3) == "3rd" - assert self.locale.ordinal_number(4) == "4th" - assert self.locale.ordinal_number(10) == "10th" - assert self.locale.ordinal_number(11) == "11th" - assert self.locale.ordinal_number(12) == "12th" - assert self.locale.ordinal_number(13) == "13th" - assert self.locale.ordinal_number(14) == "14th" - assert self.locale.ordinal_number(21) == "21st" - assert self.locale.ordinal_number(22) == "22nd" - assert self.locale.ordinal_number(23) == "23rd" - assert self.locale.ordinal_number(24) == "24th" - - assert self.locale.ordinal_number(100) == "100th" - assert self.locale.ordinal_number(101) == "101st" - assert self.locale.ordinal_number(102) == "102nd" - assert self.locale.ordinal_number(103) == "103rd" - assert self.locale.ordinal_number(104) == "104th" - assert self.locale.ordinal_number(110) == "110th" - assert self.locale.ordinal_number(111) == "111th" - assert self.locale.ordinal_number(112) == "112th" - assert self.locale.ordinal_number(113) == "113th" - assert self.locale.ordinal_number(114) == "114th" - assert self.locale.ordinal_number(121) == "121st" - assert self.locale.ordinal_number(122) == "122nd" - assert self.locale.ordinal_number(123) == "123rd" - assert 
self.locale.ordinal_number(124) == "124th" - - def test_meridian_invalid_token(self): - assert self.locale.meridian(7, None) is None - assert self.locale.meridian(7, "B") is None - assert self.locale.meridian(7, "NONSENSE") is None - - -@pytest.mark.usefixtures("lang_locale") -class TestItalianLocale: - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1º" - - -@pytest.mark.usefixtures("lang_locale") -class TestSpanishLocale: - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1º" - - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "ahora" - assert self.locale._format_timeframe("seconds", 1) == "1 segundos" - assert self.locale._format_timeframe("seconds", 3) == "3 segundos" - assert self.locale._format_timeframe("seconds", 30) == "30 segundos" - assert self.locale._format_timeframe("minute", 1) == "un minuto" - assert self.locale._format_timeframe("minutes", 4) == "4 minutos" - assert self.locale._format_timeframe("minutes", 40) == "40 minutos" - assert self.locale._format_timeframe("hour", 1) == "una hora" - assert self.locale._format_timeframe("hours", 5) == "5 horas" - assert self.locale._format_timeframe("hours", 23) == "23 horas" - assert self.locale._format_timeframe("day", 1) == "un día" - assert self.locale._format_timeframe("days", 6) == "6 días" - assert self.locale._format_timeframe("days", 12) == "12 días" - assert self.locale._format_timeframe("week", 1) == "una semana" - assert self.locale._format_timeframe("weeks", 2) == "2 semanas" - assert self.locale._format_timeframe("weeks", 3) == "3 semanas" - assert self.locale._format_timeframe("month", 1) == "un mes" - assert self.locale._format_timeframe("months", 7) == "7 meses" - assert self.locale._format_timeframe("months", 11) == "11 meses" - assert self.locale._format_timeframe("year", 1) == "un año" - assert self.locale._format_timeframe("years", 8) == "8 años" - assert self.locale._format_timeframe("years", 12) == "12 años" - - assert self.locale._format_timeframe("now", 0) == "ahora" - assert self.locale._format_timeframe("seconds", -1) == "1 segundos" - assert self.locale._format_timeframe("seconds", -9) == "9 segundos" - assert self.locale._format_timeframe("seconds", -12) == "12 segundos" - assert self.locale._format_timeframe("minute", -1) == "un minuto" - assert self.locale._format_timeframe("minutes", -2) == "2 minutos" - assert self.locale._format_timeframe("minutes", -10) == "10 minutos" - assert self.locale._format_timeframe("hour", -1) == "una hora" - assert self.locale._format_timeframe("hours", -3) == "3 horas" - assert self.locale._format_timeframe("hours", -11) == "11 horas" - assert self.locale._format_timeframe("day", -1) == "un día" - assert self.locale._format_timeframe("days", -2) == "2 días" - assert self.locale._format_timeframe("days", -12) == "12 días" - assert self.locale._format_timeframe("week", -1) == "una semana" - assert self.locale._format_timeframe("weeks", -2) == "2 semanas" - assert self.locale._format_timeframe("weeks", -3) == "3 semanas" - assert self.locale._format_timeframe("month", -1) == "un mes" - assert self.locale._format_timeframe("months", -3) == "3 meses" - assert self.locale._format_timeframe("months", -13) == "13 meses" - assert self.locale._format_timeframe("year", -1) == "un año" - assert self.locale._format_timeframe("years", -4) == "4 años" - assert self.locale._format_timeframe("years", -14) == "14 años" - - -@pytest.mark.usefixtures("lang_locale") -class TestFrenchLocale: - def 
test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1er" - assert self.locale.ordinal_number(2) == "2e" - - def test_month_abbreviation(self): - assert "juil" in self.locale.month_abbreviations - - -@pytest.mark.usefixtures("lang_locale") -class TestFrenchCanadianLocale: - def test_month_abbreviation(self): - assert "juill" in self.locale.month_abbreviations - - -@pytest.mark.usefixtures("lang_locale") -class TestRussianLocale: - def test_plurals2(self): - assert self.locale._format_timeframe("hours", 0) == "0 часов" - assert self.locale._format_timeframe("hours", 1) == "1 час" - assert self.locale._format_timeframe("hours", 2) == "2 часа" - assert self.locale._format_timeframe("hours", 4) == "4 часа" - assert self.locale._format_timeframe("hours", 5) == "5 часов" - assert self.locale._format_timeframe("hours", 21) == "21 час" - assert self.locale._format_timeframe("hours", 22) == "22 часа" - assert self.locale._format_timeframe("hours", 25) == "25 часов" - - # feminine grammatical gender should be tested separately - assert self.locale._format_timeframe("minutes", 0) == "0 минут" - assert self.locale._format_timeframe("minutes", 1) == "1 минуту" - assert self.locale._format_timeframe("minutes", 2) == "2 минуты" - assert self.locale._format_timeframe("minutes", 4) == "4 минуты" - assert self.locale._format_timeframe("minutes", 5) == "5 минут" - assert self.locale._format_timeframe("minutes", 21) == "21 минуту" - assert self.locale._format_timeframe("minutes", 22) == "22 минуты" - assert self.locale._format_timeframe("minutes", 25) == "25 минут" - - -@pytest.mark.usefixtures("lang_locale") -class TestPolishLocale: - def test_plurals(self): - - assert self.locale._format_timeframe("seconds", 0) == "0 sekund" - assert self.locale._format_timeframe("second", 1) == "sekundę" - assert self.locale._format_timeframe("seconds", 2) == "2 sekundy" - assert self.locale._format_timeframe("seconds", 5) == "5 sekund" - assert self.locale._format_timeframe("seconds", 21) == "21 sekund" - assert self.locale._format_timeframe("seconds", 22) == "22 sekundy" - assert self.locale._format_timeframe("seconds", 25) == "25 sekund" - - assert self.locale._format_timeframe("minutes", 0) == "0 minut" - assert self.locale._format_timeframe("minute", 1) == "minutę" - assert self.locale._format_timeframe("minutes", 2) == "2 minuty" - assert self.locale._format_timeframe("minutes", 5) == "5 minut" - assert self.locale._format_timeframe("minutes", 21) == "21 minut" - assert self.locale._format_timeframe("minutes", 22) == "22 minuty" - assert self.locale._format_timeframe("minutes", 25) == "25 minut" - - assert self.locale._format_timeframe("hours", 0) == "0 godzin" - assert self.locale._format_timeframe("hour", 1) == "godzinę" - assert self.locale._format_timeframe("hours", 2) == "2 godziny" - assert self.locale._format_timeframe("hours", 5) == "5 godzin" - assert self.locale._format_timeframe("hours", 21) == "21 godzin" - assert self.locale._format_timeframe("hours", 22) == "22 godziny" - assert self.locale._format_timeframe("hours", 25) == "25 godzin" - - assert self.locale._format_timeframe("weeks", 0) == "0 tygodni" - assert self.locale._format_timeframe("week", 1) == "tydzień" - assert self.locale._format_timeframe("weeks", 2) == "2 tygodnie" - assert self.locale._format_timeframe("weeks", 5) == "5 tygodni" - assert self.locale._format_timeframe("weeks", 21) == "21 tygodni" - assert self.locale._format_timeframe("weeks", 22) == "22 tygodnie" - assert self.locale._format_timeframe("weeks", 25) == "25 
tygodni" - - assert self.locale._format_timeframe("months", 0) == "0 miesięcy" - assert self.locale._format_timeframe("month", 1) == "miesiąc" - assert self.locale._format_timeframe("months", 2) == "2 miesiące" - assert self.locale._format_timeframe("months", 5) == "5 miesięcy" - assert self.locale._format_timeframe("months", 21) == "21 miesięcy" - assert self.locale._format_timeframe("months", 22) == "22 miesiące" - assert self.locale._format_timeframe("months", 25) == "25 miesięcy" - - assert self.locale._format_timeframe("years", 0) == "0 lat" - assert self.locale._format_timeframe("year", 1) == "rok" - assert self.locale._format_timeframe("years", 2) == "2 lata" - assert self.locale._format_timeframe("years", 5) == "5 lat" - assert self.locale._format_timeframe("years", 21) == "21 lat" - assert self.locale._format_timeframe("years", 22) == "22 lata" - assert self.locale._format_timeframe("years", 25) == "25 lat" - - -@pytest.mark.usefixtures("lang_locale") -class TestIcelandicLocale: - def test_format_timeframe(self): - - assert self.locale._format_timeframe("minute", -1) == "einni mínútu" - assert self.locale._format_timeframe("minute", 1) == "eina mínútu" - - assert self.locale._format_timeframe("hours", -2) == "2 tímum" - assert self.locale._format_timeframe("hours", 2) == "2 tíma" - assert self.locale._format_timeframe("now", 0) == "rétt í þessu" - - -@pytest.mark.usefixtures("lang_locale") -class TestMalayalamLocale: - def test_format_timeframe(self): - - assert self.locale._format_timeframe("hours", 2) == "2 മണിക്കൂർ" - assert self.locale._format_timeframe("hour", 0) == "ഒരു മണിക്കൂർ" - - def test_format_relative_now(self): - - result = self.locale._format_relative("ഇപ്പോൾ", "now", 0) - - assert result == "ഇപ്പോൾ" - - def test_format_relative_past(self): - - result = self.locale._format_relative("ഒരു മണിക്കൂർ", "hour", 1) - assert result == "ഒരു മണിക്കൂർ ശേഷം" - - def test_format_relative_future(self): - - result = self.locale._format_relative("ഒരു മണിക്കൂർ", "hour", -1) - assert result == "ഒരു മണിക്കൂർ മുമ്പ്" - - -@pytest.mark.usefixtures("lang_locale") -class TestHindiLocale: - def test_format_timeframe(self): - - assert self.locale._format_timeframe("hours", 2) == "2 घंटे" - assert self.locale._format_timeframe("hour", 0) == "एक घंटा" - - def test_format_relative_now(self): - - result = self.locale._format_relative("अभी", "now", 0) - assert result == "अभी" - - def test_format_relative_past(self): - - result = self.locale._format_relative("एक घंटा", "hour", 1) - assert result == "एक घंटा बाद" - - def test_format_relative_future(self): - - result = self.locale._format_relative("एक घंटा", "hour", -1) - assert result == "एक घंटा पहले" - - -@pytest.mark.usefixtures("lang_locale") -class TestCzechLocale: - def test_format_timeframe(self): - - assert self.locale._format_timeframe("hours", 2) == "2 hodiny" - assert self.locale._format_timeframe("hours", 5) == "5 hodin" - assert self.locale._format_timeframe("hour", 0) == "0 hodin" - assert self.locale._format_timeframe("hours", -2) == "2 hodinami" - assert self.locale._format_timeframe("hours", -5) == "5 hodinami" - assert self.locale._format_timeframe("now", 0) == "Teď" - - assert self.locale._format_timeframe("weeks", 2) == "2 týdny" - assert self.locale._format_timeframe("weeks", 5) == "5 týdnů" - assert self.locale._format_timeframe("week", 0) == "0 týdnů" - assert self.locale._format_timeframe("weeks", -2) == "2 týdny" - assert self.locale._format_timeframe("weeks", -5) == "5 týdny" - - def test_format_relative_now(self): - - 
result = self.locale._format_relative("Teď", "now", 0) - assert result == "Teď" - - def test_format_relative_future(self): - - result = self.locale._format_relative("hodinu", "hour", 1) - assert result == "Za hodinu" - - def test_format_relative_past(self): - - result = self.locale._format_relative("hodinou", "hour", -1) - assert result == "Před hodinou" - - -@pytest.mark.usefixtures("lang_locale") -class TestSlovakLocale: - def test_format_timeframe(self): - - assert self.locale._format_timeframe("seconds", -5) == "5 sekundami" - assert self.locale._format_timeframe("seconds", -2) == "2 sekundami" - assert self.locale._format_timeframe("second", -1) == "sekundou" - assert self.locale._format_timeframe("second", 0) == "0 sekúnd" - assert self.locale._format_timeframe("second", 1) == "sekundu" - assert self.locale._format_timeframe("seconds", 2) == "2 sekundy" - assert self.locale._format_timeframe("seconds", 5) == "5 sekúnd" - - assert self.locale._format_timeframe("minutes", -5) == "5 minútami" - assert self.locale._format_timeframe("minutes", -2) == "2 minútami" - assert self.locale._format_timeframe("minute", -1) == "minútou" - assert self.locale._format_timeframe("minute", 0) == "0 minút" - assert self.locale._format_timeframe("minute", 1) == "minútu" - assert self.locale._format_timeframe("minutes", 2) == "2 minúty" - assert self.locale._format_timeframe("minutes", 5) == "5 minút" - - assert self.locale._format_timeframe("hours", -5) == "5 hodinami" - assert self.locale._format_timeframe("hours", -2) == "2 hodinami" - assert self.locale._format_timeframe("hour", -1) == "hodinou" - assert self.locale._format_timeframe("hour", 0) == "0 hodín" - assert self.locale._format_timeframe("hour", 1) == "hodinu" - assert self.locale._format_timeframe("hours", 2) == "2 hodiny" - assert self.locale._format_timeframe("hours", 5) == "5 hodín" - - assert self.locale._format_timeframe("days", -5) == "5 dňami" - assert self.locale._format_timeframe("days", -2) == "2 dňami" - assert self.locale._format_timeframe("day", -1) == "dňom" - assert self.locale._format_timeframe("day", 0) == "0 dní" - assert self.locale._format_timeframe("day", 1) == "deň" - assert self.locale._format_timeframe("days", 2) == "2 dni" - assert self.locale._format_timeframe("days", 5) == "5 dní" - - assert self.locale._format_timeframe("weeks", -5) == "5 týždňami" - assert self.locale._format_timeframe("weeks", -2) == "2 týždňami" - assert self.locale._format_timeframe("week", -1) == "týždňom" - assert self.locale._format_timeframe("week", 0) == "0 týždňov" - assert self.locale._format_timeframe("week", 1) == "týždeň" - assert self.locale._format_timeframe("weeks", 2) == "2 týždne" - assert self.locale._format_timeframe("weeks", 5) == "5 týždňov" - - assert self.locale._format_timeframe("months", -5) == "5 mesiacmi" - assert self.locale._format_timeframe("months", -2) == "2 mesiacmi" - assert self.locale._format_timeframe("month", -1) == "mesiacom" - assert self.locale._format_timeframe("month", 0) == "0 mesiacov" - assert self.locale._format_timeframe("month", 1) == "mesiac" - assert self.locale._format_timeframe("months", 2) == "2 mesiace" - assert self.locale._format_timeframe("months", 5) == "5 mesiacov" - - assert self.locale._format_timeframe("years", -5) == "5 rokmi" - assert self.locale._format_timeframe("years", -2) == "2 rokmi" - assert self.locale._format_timeframe("year", -1) == "rokom" - assert self.locale._format_timeframe("year", 0) == "0 rokov" - assert self.locale._format_timeframe("year", 1) == "rok" - assert 
self.locale._format_timeframe("years", 2) == "2 roky" - assert self.locale._format_timeframe("years", 5) == "5 rokov" - - assert self.locale._format_timeframe("now", 0) == "Teraz" - - def test_format_relative_now(self): - - result = self.locale._format_relative("Teraz", "now", 0) - assert result == "Teraz" - - def test_format_relative_future(self): - - result = self.locale._format_relative("hodinu", "hour", 1) - assert result == "O hodinu" - - def test_format_relative_past(self): - - result = self.locale._format_relative("hodinou", "hour", -1) - assert result == "Pred hodinou" - - -@pytest.mark.usefixtures("lang_locale") -class TestBulgarianLocale: - def test_plurals2(self): - assert self.locale._format_timeframe("hours", 0) == "0 часа" - assert self.locale._format_timeframe("hours", 1) == "1 час" - assert self.locale._format_timeframe("hours", 2) == "2 часа" - assert self.locale._format_timeframe("hours", 4) == "4 часа" - assert self.locale._format_timeframe("hours", 5) == "5 часа" - assert self.locale._format_timeframe("hours", 21) == "21 час" - assert self.locale._format_timeframe("hours", 22) == "22 часа" - assert self.locale._format_timeframe("hours", 25) == "25 часа" - - # feminine grammatical gender should be tested separately - assert self.locale._format_timeframe("minutes", 0) == "0 минути" - assert self.locale._format_timeframe("minutes", 1) == "1 минута" - assert self.locale._format_timeframe("minutes", 2) == "2 минути" - assert self.locale._format_timeframe("minutes", 4) == "4 минути" - assert self.locale._format_timeframe("minutes", 5) == "5 минути" - assert self.locale._format_timeframe("minutes", 21) == "21 минута" - assert self.locale._format_timeframe("minutes", 22) == "22 минути" - assert self.locale._format_timeframe("minutes", 25) == "25 минути" - - -@pytest.mark.usefixtures("lang_locale") -class TestMacedonianLocale: - def test_singles_mk(self): - assert self.locale._format_timeframe("second", 1) == "една секунда" - assert self.locale._format_timeframe("minute", 1) == "една минута" - assert self.locale._format_timeframe("hour", 1) == "еден саат" - assert self.locale._format_timeframe("day", 1) == "еден ден" - assert self.locale._format_timeframe("week", 1) == "една недела" - assert self.locale._format_timeframe("month", 1) == "еден месец" - assert self.locale._format_timeframe("year", 1) == "една година" - - def test_meridians_mk(self): - assert self.locale.meridian(7, "A") == "претпладне" - assert self.locale.meridian(18, "A") == "попладне" - assert self.locale.meridian(10, "a") == "дп" - assert self.locale.meridian(22, "a") == "пп" - - def test_describe_mk(self): - assert self.locale.describe("second", only_distance=True) == "една секунда" - assert self.locale.describe("second", only_distance=False) == "за една секунда" - assert self.locale.describe("minute", only_distance=True) == "една минута" - assert self.locale.describe("minute", only_distance=False) == "за една минута" - assert self.locale.describe("hour", only_distance=True) == "еден саат" - assert self.locale.describe("hour", only_distance=False) == "за еден саат" - assert self.locale.describe("day", only_distance=True) == "еден ден" - assert self.locale.describe("day", only_distance=False) == "за еден ден" - assert self.locale.describe("week", only_distance=True) == "една недела" - assert self.locale.describe("week", only_distance=False) == "за една недела" - assert self.locale.describe("month", only_distance=True) == "еден месец" - assert self.locale.describe("month", only_distance=False) == "за еден месец" 
- assert self.locale.describe("year", only_distance=True) == "една година" - assert self.locale.describe("year", only_distance=False) == "за една година" - - def test_relative_mk(self): - # time - assert self.locale._format_relative("сега", "now", 0) == "сега" - assert self.locale._format_relative("1 секунда", "seconds", 1) == "за 1 секунда" - assert self.locale._format_relative("1 минута", "minutes", 1) == "за 1 минута" - assert self.locale._format_relative("1 саат", "hours", 1) == "за 1 саат" - assert self.locale._format_relative("1 ден", "days", 1) == "за 1 ден" - assert self.locale._format_relative("1 недела", "weeks", 1) == "за 1 недела" - assert self.locale._format_relative("1 месец", "months", 1) == "за 1 месец" - assert self.locale._format_relative("1 година", "years", 1) == "за 1 година" - assert ( - self.locale._format_relative("1 секунда", "seconds", -1) == "пред 1 секунда" - ) - assert ( - self.locale._format_relative("1 минута", "minutes", -1) == "пред 1 минута" - ) - assert self.locale._format_relative("1 саат", "hours", -1) == "пред 1 саат" - assert self.locale._format_relative("1 ден", "days", -1) == "пред 1 ден" - assert self.locale._format_relative("1 недела", "weeks", -1) == "пред 1 недела" - assert self.locale._format_relative("1 месец", "months", -1) == "пред 1 месец" - assert self.locale._format_relative("1 година", "years", -1) == "пред 1 година" - - def test_plurals_mk(self): - # Seconds - assert self.locale._format_timeframe("seconds", 0) == "0 секунди" - assert self.locale._format_timeframe("seconds", 1) == "1 секунда" - assert self.locale._format_timeframe("seconds", 2) == "2 секунди" - assert self.locale._format_timeframe("seconds", 4) == "4 секунди" - assert self.locale._format_timeframe("seconds", 5) == "5 секунди" - assert self.locale._format_timeframe("seconds", 21) == "21 секунда" - assert self.locale._format_timeframe("seconds", 22) == "22 секунди" - assert self.locale._format_timeframe("seconds", 25) == "25 секунди" - - # Minutes - assert self.locale._format_timeframe("minutes", 0) == "0 минути" - assert self.locale._format_timeframe("minutes", 1) == "1 минута" - assert self.locale._format_timeframe("minutes", 2) == "2 минути" - assert self.locale._format_timeframe("minutes", 4) == "4 минути" - assert self.locale._format_timeframe("minutes", 5) == "5 минути" - assert self.locale._format_timeframe("minutes", 21) == "21 минута" - assert self.locale._format_timeframe("minutes", 22) == "22 минути" - assert self.locale._format_timeframe("minutes", 25) == "25 минути" - - # Hours - assert self.locale._format_timeframe("hours", 0) == "0 саати" - assert self.locale._format_timeframe("hours", 1) == "1 саат" - assert self.locale._format_timeframe("hours", 2) == "2 саати" - assert self.locale._format_timeframe("hours", 4) == "4 саати" - assert self.locale._format_timeframe("hours", 5) == "5 саати" - assert self.locale._format_timeframe("hours", 21) == "21 саат" - assert self.locale._format_timeframe("hours", 22) == "22 саати" - assert self.locale._format_timeframe("hours", 25) == "25 саати" - - # Days - assert self.locale._format_timeframe("days", 0) == "0 дена" - assert self.locale._format_timeframe("days", 1) == "1 ден" - assert self.locale._format_timeframe("days", 2) == "2 дена" - assert self.locale._format_timeframe("days", 3) == "3 дена" - assert self.locale._format_timeframe("days", 21) == "21 ден" - - # Weeks - assert self.locale._format_timeframe("weeks", 0) == "0 недели" - assert self.locale._format_timeframe("weeks", 1) == "1 недела" - assert 
self.locale._format_timeframe("weeks", 2) == "2 недели" - assert self.locale._format_timeframe("weeks", 4) == "4 недели" - assert self.locale._format_timeframe("weeks", 5) == "5 недели" - assert self.locale._format_timeframe("weeks", 21) == "21 недела" - assert self.locale._format_timeframe("weeks", 22) == "22 недели" - assert self.locale._format_timeframe("weeks", 25) == "25 недели" - - # Months - assert self.locale._format_timeframe("months", 0) == "0 месеци" - assert self.locale._format_timeframe("months", 1) == "1 месец" - assert self.locale._format_timeframe("months", 2) == "2 месеци" - assert self.locale._format_timeframe("months", 4) == "4 месеци" - assert self.locale._format_timeframe("months", 5) == "5 месеци" - assert self.locale._format_timeframe("months", 21) == "21 месец" - assert self.locale._format_timeframe("months", 22) == "22 месеци" - assert self.locale._format_timeframe("months", 25) == "25 месеци" - - # Years - assert self.locale._format_timeframe("years", 1) == "1 година" - assert self.locale._format_timeframe("years", 2) == "2 години" - assert self.locale._format_timeframe("years", 5) == "5 години" - - def test_multi_describe_mk(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "за 5 години 1 недела 1 саат 6 минути" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "за 0 дена 1 саат 6 минути" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "за 1 саат 6 минути" - assert describe(seconds4000, only_distance=True) == "1 саат 6 минути" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "за 1 саат 1 минута" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "за 0 саати 5 минути" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "за 5 минути" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "за 1 минута" - assert describe(seconds60, only_distance=True) == "1 минута" - seconds60 = [("seconds", 1)] - assert describe(seconds60) == "за 1 секунда" - assert describe(seconds60, only_distance=True) == "1 секунда" - - -@pytest.mark.usefixtures("time_2013_01_01") -@pytest.mark.usefixtures("lang_locale") -class TestHebrewLocale: - def test_couple_of_timeframe(self): - assert self.locale._format_timeframe("days", 1) == "יום" - assert self.locale._format_timeframe("days", 2) == "יומיים" - assert self.locale._format_timeframe("days", 3) == "3 ימים" - - assert self.locale._format_timeframe("hours", 1) == "שעה" - assert self.locale._format_timeframe("hours", 2) == "שעתיים" - assert self.locale._format_timeframe("hours", 3) == "3 שעות" - - assert self.locale._format_timeframe("week", 1) == "שבוע" - assert self.locale._format_timeframe("weeks", 2) == "שבועיים" - assert self.locale._format_timeframe("weeks", 3) == "3 שבועות" - - assert self.locale._format_timeframe("months", 1) == "חודש" - assert self.locale._format_timeframe("months", 2) == "חודשיים" - assert self.locale._format_timeframe("months", 4) == "4 חודשים" - - assert self.locale._format_timeframe("years", 1) == "שנה" - assert self.locale._format_timeframe("years", 2) == "שנתיים" - assert self.locale._format_timeframe("years", 5) == "5 שנים" - - def test_describe_multi(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "בעוד 5 שנים, שבוע, שעה ו־6 
דקות" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "בעוד 0 ימים, שעה ו־6 דקות" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "בעוד שעה ו־6 דקות" - assert describe(seconds4000, only_distance=True) == "שעה ו־6 דקות" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "בעוד שעה ודקה" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "בעוד 0 שעות ו־5 דקות" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "בעוד 5 דקות" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "בעוד דקה" - assert describe(seconds60, only_distance=True) == "דקה" - - -@pytest.mark.usefixtures("lang_locale") -class TestMarathiLocale: - def test_dateCoreFunctionality(self): - dt = arrow.Arrow(2015, 4, 11, 17, 30, 00) - assert self.locale.month_name(dt.month) == "एप्रिल" - assert self.locale.month_abbreviation(dt.month) == "एप्रि" - assert self.locale.day_name(dt.isoweekday()) == "शनिवार" - assert self.locale.day_abbreviation(dt.isoweekday()) == "शनि" - - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 तास" - assert self.locale._format_timeframe("hour", 0) == "एक तास" - - def test_format_relative_now(self): - result = self.locale._format_relative("सद्य", "now", 0) - assert result == "सद्य" - - def test_format_relative_past(self): - result = self.locale._format_relative("एक तास", "hour", 1) - assert result == "एक तास नंतर" - - def test_format_relative_future(self): - result = self.locale._format_relative("एक तास", "hour", -1) - assert result == "एक तास आधी" - - # Not currently implemented - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1" - - -@pytest.mark.usefixtures("lang_locale") -class TestFinnishLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == ("2 tuntia", "2 tunnin") - assert self.locale._format_timeframe("hour", 0) == ("tunti", "tunnin") - - def test_format_relative_now(self): - result = self.locale._format_relative(["juuri nyt", "juuri nyt"], "now", 0) - assert result == "juuri nyt" - - def test_format_relative_past(self): - result = self.locale._format_relative(["tunti", "tunnin"], "hour", 1) - assert result == "tunnin kuluttua" - - def test_format_relative_future(self): - result = self.locale._format_relative(["tunti", "tunnin"], "hour", -1) - assert result == "tunti sitten" - - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1." - - -@pytest.mark.usefixtures("lang_locale") -class TestGermanLocale: - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1." 
- - def test_define(self): - assert self.locale.describe("minute", only_distance=True) == "eine Minute" - assert self.locale.describe("minute", only_distance=False) == "in einer Minute" - assert self.locale.describe("hour", only_distance=True) == "eine Stunde" - assert self.locale.describe("hour", only_distance=False) == "in einer Stunde" - assert self.locale.describe("day", only_distance=True) == "ein Tag" - assert self.locale.describe("day", only_distance=False) == "in einem Tag" - assert self.locale.describe("week", only_distance=True) == "eine Woche" - assert self.locale.describe("week", only_distance=False) == "in einer Woche" - assert self.locale.describe("month", only_distance=True) == "ein Monat" - assert self.locale.describe("month", only_distance=False) == "in einem Monat" - assert self.locale.describe("year", only_distance=True) == "ein Jahr" - assert self.locale.describe("year", only_distance=False) == "in einem Jahr" - - def test_weekday(self): - dt = arrow.Arrow(2015, 4, 11, 17, 30, 00) - assert self.locale.day_name(dt.isoweekday()) == "Samstag" - assert self.locale.day_abbreviation(dt.isoweekday()) == "Sa" - - -@pytest.mark.usefixtures("lang_locale") -class TestHungarianLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 óra" - assert self.locale._format_timeframe("hour", 0) == "egy órával" - assert self.locale._format_timeframe("hours", -2) == "2 órával" - assert self.locale._format_timeframe("now", 0) == "éppen most" - - -@pytest.mark.usefixtures("lang_locale") -class TestEsperantoLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 horoj" - assert self.locale._format_timeframe("hour", 0) == "un horo" - assert self.locale._format_timeframe("hours", -2) == "2 horoj" - assert self.locale._format_timeframe("now", 0) == "nun" - - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1a" - - -@pytest.mark.usefixtures("lang_locale") -class TestThaiLocale: - def test_year_full(self): - assert self.locale.year_full(2015) == "2558" - - def test_year_abbreviation(self): - assert self.locale.year_abbreviation(2015) == "58" - - def test_format_relative_now(self): - result = self.locale._format_relative("ขณะนี้", "now", 0) - assert result == "ขณะนี้" - - def test_format_relative_past(self): - result = self.locale._format_relative("1 ชั่วโมง", "hour", 1) - assert result == "ในอีก 1 ชั่วโมง" - result = self.locale._format_relative("{0} ชั่วโมง", "hours", 2) - assert result == "ในอีก {0} ชั่วโมง" - result = self.locale._format_relative("ไม่กี่วินาที", "seconds", 42) - assert result == "ในอีกไม่กี่วินาที" - - def test_format_relative_future(self): - result = self.locale._format_relative("1 ชั่วโมง", "hour", -1) - assert result == "1 ชั่วโมง ที่ผ่านมา" - - -@pytest.mark.usefixtures("lang_locale") -class TestBengaliLocale: - def test_ordinal_number(self): - assert self.locale._ordinal_number(0) == "0তম" - assert self.locale._ordinal_number(1) == "1ম" - assert self.locale._ordinal_number(3) == "3য়" - assert self.locale._ordinal_number(4) == "4র্থ" - assert self.locale._ordinal_number(5) == "5ম" - assert self.locale._ordinal_number(6) == "6ষ্ঠ" - assert self.locale._ordinal_number(10) == "10ম" - assert self.locale._ordinal_number(11) == "11তম" - assert self.locale._ordinal_number(42) == "42তম" - assert self.locale._ordinal_number(-1) is None - - -@pytest.mark.usefixtures("lang_locale") -class TestRomanianLocale: - def test_timeframes(self): - - assert 
self.locale._format_timeframe("hours", 2) == "2 ore" - assert self.locale._format_timeframe("months", 2) == "2 luni" - - assert self.locale._format_timeframe("days", 2) == "2 zile" - assert self.locale._format_timeframe("years", 2) == "2 ani" - - assert self.locale._format_timeframe("hours", 3) == "3 ore" - assert self.locale._format_timeframe("months", 4) == "4 luni" - assert self.locale._format_timeframe("days", 3) == "3 zile" - assert self.locale._format_timeframe("years", 5) == "5 ani" - - def test_relative_timeframes(self): - assert self.locale._format_relative("acum", "now", 0) == "acum" - assert self.locale._format_relative("o oră", "hour", 1) == "peste o oră" - assert self.locale._format_relative("o oră", "hour", -1) == "o oră în urmă" - assert self.locale._format_relative("un minut", "minute", 1) == "peste un minut" - assert ( - self.locale._format_relative("un minut", "minute", -1) == "un minut în urmă" - ) - assert ( - self.locale._format_relative("câteva secunde", "seconds", -1) - == "câteva secunde în urmă" - ) - assert ( - self.locale._format_relative("câteva secunde", "seconds", 1) - == "peste câteva secunde" - ) - assert self.locale._format_relative("o zi", "day", -1) == "o zi în urmă" - assert self.locale._format_relative("o zi", "day", 1) == "peste o zi" - - -@pytest.mark.usefixtures("lang_locale") -class TestArabicLocale: - def test_timeframes(self): - - # single - assert self.locale._format_timeframe("minute", 1) == "دقيقة" - assert self.locale._format_timeframe("hour", 1) == "ساعة" - assert self.locale._format_timeframe("day", 1) == "يوم" - assert self.locale._format_timeframe("month", 1) == "شهر" - assert self.locale._format_timeframe("year", 1) == "سنة" - - # double - assert self.locale._format_timeframe("minutes", 2) == "دقيقتين" - assert self.locale._format_timeframe("hours", 2) == "ساعتين" - assert self.locale._format_timeframe("days", 2) == "يومين" - assert self.locale._format_timeframe("months", 2) == "شهرين" - assert self.locale._format_timeframe("years", 2) == "سنتين" - - # up to ten - assert self.locale._format_timeframe("minutes", 3) == "3 دقائق" - assert self.locale._format_timeframe("hours", 4) == "4 ساعات" - assert self.locale._format_timeframe("days", 5) == "5 أيام" - assert self.locale._format_timeframe("months", 6) == "6 أشهر" - assert self.locale._format_timeframe("years", 10) == "10 سنوات" - - # more than ten - assert self.locale._format_timeframe("minutes", 11) == "11 دقيقة" - assert self.locale._format_timeframe("hours", 19) == "19 ساعة" - assert self.locale._format_timeframe("months", 24) == "24 شهر" - assert self.locale._format_timeframe("days", 50) == "50 يوم" - assert self.locale._format_timeframe("years", 115) == "115 سنة" - - -@pytest.mark.usefixtures("lang_locale") -class TestNepaliLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 3) == "3 घण्टा" - assert self.locale._format_timeframe("hour", 0) == "एक घण्टा" - - def test_format_relative_now(self): - result = self.locale._format_relative("अहिले", "now", 0) - assert result == "अहिले" - - def test_format_relative_future(self): - result = self.locale._format_relative("एक घण्टा", "hour", 1) - assert result == "एक घण्टा पछी" - - def test_format_relative_past(self): - result = self.locale._format_relative("एक घण्टा", "hour", -1) - assert result == "एक घण्टा पहिले" - - -@pytest.mark.usefixtures("lang_locale") -class TestIndonesianLocale: - def test_timeframes(self): - assert self.locale._format_timeframe("hours", 2) == "2 jam" - assert 
self.locale._format_timeframe("months", 2) == "2 bulan" - - assert self.locale._format_timeframe("days", 2) == "2 hari" - assert self.locale._format_timeframe("years", 2) == "2 tahun" - - assert self.locale._format_timeframe("hours", 3) == "3 jam" - assert self.locale._format_timeframe("months", 4) == "4 bulan" - assert self.locale._format_timeframe("days", 3) == "3 hari" - assert self.locale._format_timeframe("years", 5) == "5 tahun" - - def test_format_relative_now(self): - assert self.locale._format_relative("baru saja", "now", 0) == "baru saja" - - def test_format_relative_past(self): - assert self.locale._format_relative("1 jam", "hour", 1) == "dalam 1 jam" - assert self.locale._format_relative("1 detik", "seconds", 1) == "dalam 1 detik" - - def test_format_relative_future(self): - assert self.locale._format_relative("1 jam", "hour", -1) == "1 jam yang lalu" - - -@pytest.mark.usefixtures("lang_locale") -class TestTagalogLocale: - def test_singles_tl(self): - assert self.locale._format_timeframe("second", 1) == "isang segundo" - assert self.locale._format_timeframe("minute", 1) == "isang minuto" - assert self.locale._format_timeframe("hour", 1) == "isang oras" - assert self.locale._format_timeframe("day", 1) == "isang araw" - assert self.locale._format_timeframe("week", 1) == "isang linggo" - assert self.locale._format_timeframe("month", 1) == "isang buwan" - assert self.locale._format_timeframe("year", 1) == "isang taon" - - def test_meridians_tl(self): - assert self.locale.meridian(7, "A") == "ng umaga" - assert self.locale.meridian(18, "A") == "ng hapon" - assert self.locale.meridian(10, "a") == "nu" - assert self.locale.meridian(22, "a") == "nh" - - def test_describe_tl(self): - assert self.locale.describe("second", only_distance=True) == "isang segundo" - assert ( - self.locale.describe("second", only_distance=False) - == "isang segundo mula ngayon" - ) - assert self.locale.describe("minute", only_distance=True) == "isang minuto" - assert ( - self.locale.describe("minute", only_distance=False) - == "isang minuto mula ngayon" - ) - assert self.locale.describe("hour", only_distance=True) == "isang oras" - assert ( - self.locale.describe("hour", only_distance=False) - == "isang oras mula ngayon" - ) - assert self.locale.describe("day", only_distance=True) == "isang araw" - assert ( - self.locale.describe("day", only_distance=False) == "isang araw mula ngayon" - ) - assert self.locale.describe("week", only_distance=True) == "isang linggo" - assert ( - self.locale.describe("week", only_distance=False) - == "isang linggo mula ngayon" - ) - assert self.locale.describe("month", only_distance=True) == "isang buwan" - assert ( - self.locale.describe("month", only_distance=False) - == "isang buwan mula ngayon" - ) - assert self.locale.describe("year", only_distance=True) == "isang taon" - assert ( - self.locale.describe("year", only_distance=False) - == "isang taon mula ngayon" - ) - - def test_relative_tl(self): - # time - assert self.locale._format_relative("ngayon", "now", 0) == "ngayon" - assert ( - self.locale._format_relative("1 segundo", "seconds", 1) - == "1 segundo mula ngayon" - ) - assert ( - self.locale._format_relative("1 minuto", "minutes", 1) - == "1 minuto mula ngayon" - ) - assert ( - self.locale._format_relative("1 oras", "hours", 1) == "1 oras mula ngayon" - ) - assert self.locale._format_relative("1 araw", "days", 1) == "1 araw mula ngayon" - assert ( - self.locale._format_relative("1 linggo", "weeks", 1) - == "1 linggo mula ngayon" - ) - assert ( - 
self.locale._format_relative("1 buwan", "months", 1) - == "1 buwan mula ngayon" - ) - assert ( - self.locale._format_relative("1 taon", "years", 1) == "1 taon mula ngayon" - ) - assert ( - self.locale._format_relative("1 segundo", "seconds", -1) - == "nakaraang 1 segundo" - ) - assert ( - self.locale._format_relative("1 minuto", "minutes", -1) - == "nakaraang 1 minuto" - ) - assert self.locale._format_relative("1 oras", "hours", -1) == "nakaraang 1 oras" - assert self.locale._format_relative("1 araw", "days", -1) == "nakaraang 1 araw" - assert ( - self.locale._format_relative("1 linggo", "weeks", -1) - == "nakaraang 1 linggo" - ) - assert ( - self.locale._format_relative("1 buwan", "months", -1) == "nakaraang 1 buwan" - ) - assert self.locale._format_relative("1 taon", "years", -1) == "nakaraang 1 taon" - - def test_plurals_tl(self): - # Seconds - assert self.locale._format_timeframe("seconds", 0) == "0 segundo" - assert self.locale._format_timeframe("seconds", 1) == "1 segundo" - assert self.locale._format_timeframe("seconds", 2) == "2 segundo" - assert self.locale._format_timeframe("seconds", 4) == "4 segundo" - assert self.locale._format_timeframe("seconds", 5) == "5 segundo" - assert self.locale._format_timeframe("seconds", 21) == "21 segundo" - assert self.locale._format_timeframe("seconds", 22) == "22 segundo" - assert self.locale._format_timeframe("seconds", 25) == "25 segundo" - - # Minutes - assert self.locale._format_timeframe("minutes", 0) == "0 minuto" - assert self.locale._format_timeframe("minutes", 1) == "1 minuto" - assert self.locale._format_timeframe("minutes", 2) == "2 minuto" - assert self.locale._format_timeframe("minutes", 4) == "4 minuto" - assert self.locale._format_timeframe("minutes", 5) == "5 minuto" - assert self.locale._format_timeframe("minutes", 21) == "21 minuto" - assert self.locale._format_timeframe("minutes", 22) == "22 minuto" - assert self.locale._format_timeframe("minutes", 25) == "25 minuto" - - # Hours - assert self.locale._format_timeframe("hours", 0) == "0 oras" - assert self.locale._format_timeframe("hours", 1) == "1 oras" - assert self.locale._format_timeframe("hours", 2) == "2 oras" - assert self.locale._format_timeframe("hours", 4) == "4 oras" - assert self.locale._format_timeframe("hours", 5) == "5 oras" - assert self.locale._format_timeframe("hours", 21) == "21 oras" - assert self.locale._format_timeframe("hours", 22) == "22 oras" - assert self.locale._format_timeframe("hours", 25) == "25 oras" - - # Days - assert self.locale._format_timeframe("days", 0) == "0 araw" - assert self.locale._format_timeframe("days", 1) == "1 araw" - assert self.locale._format_timeframe("days", 2) == "2 araw" - assert self.locale._format_timeframe("days", 3) == "3 araw" - assert self.locale._format_timeframe("days", 21) == "21 araw" - - # Weeks - assert self.locale._format_timeframe("weeks", 0) == "0 linggo" - assert self.locale._format_timeframe("weeks", 1) == "1 linggo" - assert self.locale._format_timeframe("weeks", 2) == "2 linggo" - assert self.locale._format_timeframe("weeks", 4) == "4 linggo" - assert self.locale._format_timeframe("weeks", 5) == "5 linggo" - assert self.locale._format_timeframe("weeks", 21) == "21 linggo" - assert self.locale._format_timeframe("weeks", 22) == "22 linggo" - assert self.locale._format_timeframe("weeks", 25) == "25 linggo" - - # Months - assert self.locale._format_timeframe("months", 0) == "0 buwan" - assert self.locale._format_timeframe("months", 1) == "1 buwan" - assert self.locale._format_timeframe("months", 2) == "2 buwan" 
- assert self.locale._format_timeframe("months", 4) == "4 buwan" - assert self.locale._format_timeframe("months", 5) == "5 buwan" - assert self.locale._format_timeframe("months", 21) == "21 buwan" - assert self.locale._format_timeframe("months", 22) == "22 buwan" - assert self.locale._format_timeframe("months", 25) == "25 buwan" - - # Years - assert self.locale._format_timeframe("years", 1) == "1 taon" - assert self.locale._format_timeframe("years", 2) == "2 taon" - assert self.locale._format_timeframe("years", 5) == "5 taon" - - def test_multi_describe_tl(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "5 taon 1 linggo 1 oras 6 minuto mula ngayon" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "0 araw 1 oras 6 minuto mula ngayon" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "1 oras 6 minuto mula ngayon" - assert describe(seconds4000, only_distance=True) == "1 oras 6 minuto" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "1 oras 1 minuto mula ngayon" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "0 oras 5 minuto mula ngayon" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "5 minuto mula ngayon" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "1 minuto mula ngayon" - assert describe(seconds60, only_distance=True) == "1 minuto" - seconds60 = [("seconds", 1)] - assert describe(seconds60) == "1 segundo mula ngayon" - assert describe(seconds60, only_distance=True) == "1 segundo" - - def test_ordinal_number_tl(self): - assert self.locale.ordinal_number(0) == "ika-0" - assert self.locale.ordinal_number(1) == "ika-1" - assert self.locale.ordinal_number(2) == "ika-2" - assert self.locale.ordinal_number(3) == "ika-3" - assert self.locale.ordinal_number(10) == "ika-10" - assert self.locale.ordinal_number(23) == "ika-23" - assert self.locale.ordinal_number(100) == "ika-100" - assert self.locale.ordinal_number(103) == "ika-103" - assert self.locale.ordinal_number(114) == "ika-114" - - -@pytest.mark.usefixtures("lang_locale") -class TestEstonianLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "just nüüd" - assert self.locale._format_timeframe("second", 1) == "ühe sekundi" - assert self.locale._format_timeframe("seconds", 3) == "3 sekundi" - assert self.locale._format_timeframe("seconds", 30) == "30 sekundi" - assert self.locale._format_timeframe("minute", 1) == "ühe minuti" - assert self.locale._format_timeframe("minutes", 4) == "4 minuti" - assert self.locale._format_timeframe("minutes", 40) == "40 minuti" - assert self.locale._format_timeframe("hour", 1) == "tunni aja" - assert self.locale._format_timeframe("hours", 5) == "5 tunni" - assert self.locale._format_timeframe("hours", 23) == "23 tunni" - assert self.locale._format_timeframe("day", 1) == "ühe päeva" - assert self.locale._format_timeframe("days", 6) == "6 päeva" - assert self.locale._format_timeframe("days", 12) == "12 päeva" - assert self.locale._format_timeframe("month", 1) == "ühe kuu" - assert self.locale._format_timeframe("months", 7) == "7 kuu" - assert self.locale._format_timeframe("months", 11) == "11 kuu" - assert self.locale._format_timeframe("year", 1) == "ühe aasta" - assert self.locale._format_timeframe("years", 8) == "8 aasta" - assert self.locale._format_timeframe("years", 12) 
== "12 aasta" - - assert self.locale._format_timeframe("now", 0) == "just nüüd" - assert self.locale._format_timeframe("second", -1) == "üks sekund" - assert self.locale._format_timeframe("seconds", -9) == "9 sekundit" - assert self.locale._format_timeframe("seconds", -12) == "12 sekundit" - assert self.locale._format_timeframe("minute", -1) == "üks minut" - assert self.locale._format_timeframe("minutes", -2) == "2 minutit" - assert self.locale._format_timeframe("minutes", -10) == "10 minutit" - assert self.locale._format_timeframe("hour", -1) == "tund aega" - assert self.locale._format_timeframe("hours", -3) == "3 tundi" - assert self.locale._format_timeframe("hours", -11) == "11 tundi" - assert self.locale._format_timeframe("day", -1) == "üks päev" - assert self.locale._format_timeframe("days", -2) == "2 päeva" - assert self.locale._format_timeframe("days", -12) == "12 päeva" - assert self.locale._format_timeframe("month", -1) == "üks kuu" - assert self.locale._format_timeframe("months", -3) == "3 kuud" - assert self.locale._format_timeframe("months", -13) == "13 kuud" - assert self.locale._format_timeframe("year", -1) == "üks aasta" - assert self.locale._format_timeframe("years", -4) == "4 aastat" - assert self.locale._format_timeframe("years", -14) == "14 aastat" - - -@pytest.mark.usefixtures("lang_locale") -class TestPortugueseLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "agora" - assert self.locale._format_timeframe("second", 1) == "um segundo" - assert self.locale._format_timeframe("seconds", 30) == "30 segundos" - assert self.locale._format_timeframe("minute", 1) == "um minuto" - assert self.locale._format_timeframe("minutes", 40) == "40 minutos" - assert self.locale._format_timeframe("hour", 1) == "uma hora" - assert self.locale._format_timeframe("hours", 23) == "23 horas" - assert self.locale._format_timeframe("day", 1) == "um dia" - assert self.locale._format_timeframe("days", 12) == "12 dias" - assert self.locale._format_timeframe("month", 1) == "um mês" - assert self.locale._format_timeframe("months", 11) == "11 meses" - assert self.locale._format_timeframe("year", 1) == "um ano" - assert self.locale._format_timeframe("years", 12) == "12 anos" - - -@pytest.mark.usefixtures("lang_locale") -class TestBrazilianPortugueseLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "agora" - assert self.locale._format_timeframe("second", 1) == "um segundo" - assert self.locale._format_timeframe("seconds", 30) == "30 segundos" - assert self.locale._format_timeframe("minute", 1) == "um minuto" - assert self.locale._format_timeframe("minutes", 40) == "40 minutos" - assert self.locale._format_timeframe("hour", 1) == "uma hora" - assert self.locale._format_timeframe("hours", 23) == "23 horas" - assert self.locale._format_timeframe("day", 1) == "um dia" - assert self.locale._format_timeframe("days", 12) == "12 dias" - assert self.locale._format_timeframe("month", 1) == "um mês" - assert self.locale._format_timeframe("months", 11) == "11 meses" - assert self.locale._format_timeframe("year", 1) == "um ano" - assert self.locale._format_timeframe("years", 12) == "12 anos" - assert self.locale._format_relative("uma hora", "hour", -1) == "faz uma hora" - - -@pytest.mark.usefixtures("lang_locale") -class TestHongKongLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "剛才" - assert self.locale._format_timeframe("second", 1) == "1秒" - assert 
self.locale._format_timeframe("seconds", 30) == "30秒" - assert self.locale._format_timeframe("minute", 1) == "1分鐘" - assert self.locale._format_timeframe("minutes", 40) == "40分鐘" - assert self.locale._format_timeframe("hour", 1) == "1小時" - assert self.locale._format_timeframe("hours", 23) == "23小時" - assert self.locale._format_timeframe("day", 1) == "1天" - assert self.locale._format_timeframe("days", 12) == "12天" - assert self.locale._format_timeframe("week", 1) == "1星期" - assert self.locale._format_timeframe("weeks", 38) == "38星期" - assert self.locale._format_timeframe("month", 1) == "1個月" - assert self.locale._format_timeframe("months", 11) == "11個月" - assert self.locale._format_timeframe("year", 1) == "1年" - assert self.locale._format_timeframe("years", 12) == "12年" - - -@pytest.mark.usefixtures("lang_locale") -class TestChineseTWLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "剛才" - assert self.locale._format_timeframe("second", 1) == "1秒" - assert self.locale._format_timeframe("seconds", 30) == "30秒" - assert self.locale._format_timeframe("minute", 1) == "1分鐘" - assert self.locale._format_timeframe("minutes", 40) == "40分鐘" - assert self.locale._format_timeframe("hour", 1) == "1小時" - assert self.locale._format_timeframe("hours", 23) == "23小時" - assert self.locale._format_timeframe("day", 1) == "1天" - assert self.locale._format_timeframe("days", 12) == "12天" - assert self.locale._format_timeframe("week", 1) == "1週" - assert self.locale._format_timeframe("weeks", 38) == "38週" - assert self.locale._format_timeframe("month", 1) == "1個月" - assert self.locale._format_timeframe("months", 11) == "11個月" - assert self.locale._format_timeframe("year", 1) == "1年" - assert self.locale._format_timeframe("years", 12) == "12年" - - -@pytest.mark.usefixtures("lang_locale") -class TestSwahiliLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "sasa hivi" - assert self.locale._format_timeframe("second", 1) == "sekunde" - assert self.locale._format_timeframe("seconds", 3) == "sekunde 3" - assert self.locale._format_timeframe("seconds", 30) == "sekunde 30" - assert self.locale._format_timeframe("minute", 1) == "dakika moja" - assert self.locale._format_timeframe("minutes", 4) == "dakika 4" - assert self.locale._format_timeframe("minutes", 40) == "dakika 40" - assert self.locale._format_timeframe("hour", 1) == "saa moja" - assert self.locale._format_timeframe("hours", 5) == "saa 5" - assert self.locale._format_timeframe("hours", 23) == "saa 23" - assert self.locale._format_timeframe("day", 1) == "siku moja" - assert self.locale._format_timeframe("days", 6) == "siku 6" - assert self.locale._format_timeframe("days", 12) == "siku 12" - assert self.locale._format_timeframe("month", 1) == "mwezi moja" - assert self.locale._format_timeframe("months", 7) == "miezi 7" - assert self.locale._format_timeframe("week", 1) == "wiki moja" - assert self.locale._format_timeframe("weeks", 2) == "wiki 2" - assert self.locale._format_timeframe("months", 11) == "miezi 11" - assert self.locale._format_timeframe("year", 1) == "mwaka moja" - assert self.locale._format_timeframe("years", 8) == "miaka 8" - assert self.locale._format_timeframe("years", 12) == "miaka 12" - - def test_format_relative_now(self): - result = self.locale._format_relative("sasa hivi", "now", 0) - assert result == "sasa hivi" - - def test_format_relative_past(self): - result = self.locale._format_relative("saa moja", "hour", 1) - assert result == "muda wa saa moja" - - 
-    def test_format_relative_future(self):
-        result = self.locale._format_relative("saa moja", "hour", -1)
-        assert result == "saa moja iliyopita"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestKoreanLocale:
-    def test_format_timeframe(self):
-        assert self.locale._format_timeframe("now", 0) == "지금"
-        assert self.locale._format_timeframe("second", 1) == "1초"
-        assert self.locale._format_timeframe("seconds", 2) == "2초"
-        assert self.locale._format_timeframe("minute", 1) == "1분"
-        assert self.locale._format_timeframe("minutes", 2) == "2분"
-        assert self.locale._format_timeframe("hour", 1) == "한시간"
-        assert self.locale._format_timeframe("hours", 2) == "2시간"
-        assert self.locale._format_timeframe("day", 1) == "하루"
-        assert self.locale._format_timeframe("days", 2) == "2일"
-        assert self.locale._format_timeframe("week", 1) == "1주"
-        assert self.locale._format_timeframe("weeks", 2) == "2주"
-        assert self.locale._format_timeframe("month", 1) == "한달"
-        assert self.locale._format_timeframe("months", 2) == "2개월"
-        assert self.locale._format_timeframe("year", 1) == "1년"
-        assert self.locale._format_timeframe("years", 2) == "2년"
-
-    def test_format_relative(self):
-        assert self.locale._format_relative("지금", "now", 0) == "지금"
-
-        assert self.locale._format_relative("1초", "second", 1) == "1초 후"
-        assert self.locale._format_relative("2초", "seconds", 2) == "2초 후"
-        assert self.locale._format_relative("1분", "minute", 1) == "1분 후"
-        assert self.locale._format_relative("2분", "minutes", 2) == "2분 후"
-        assert self.locale._format_relative("한시간", "hour", 1) == "한시간 후"
-        assert self.locale._format_relative("2시간", "hours", 2) == "2시간 후"
-        assert self.locale._format_relative("하루", "day", 1) == "내일"
-        assert self.locale._format_relative("2일", "days", 2) == "모레"
-        assert self.locale._format_relative("3일", "days", 3) == "글피"
-        assert self.locale._format_relative("4일", "days", 4) == "그글피"
-        assert self.locale._format_relative("5일", "days", 5) == "5일 후"
-        assert self.locale._format_relative("1주", "week", 1) == "1주 후"
-        assert self.locale._format_relative("2주", "weeks", 2) == "2주 후"
-        assert self.locale._format_relative("한달", "month", 1) == "한달 후"
-        assert self.locale._format_relative("2개월", "months", 2) == "2개월 후"
-        assert self.locale._format_relative("1년", "year", 1) == "내년"
-        assert self.locale._format_relative("2년", "years", 2) == "내후년"
-        assert self.locale._format_relative("3년", "years", 3) == "3년 후"
-
-        assert self.locale._format_relative("1초", "second", -1) == "1초 전"
-        assert self.locale._format_relative("2초", "seconds", -2) == "2초 전"
-        assert self.locale._format_relative("1분", "minute", -1) == "1분 전"
-        assert self.locale._format_relative("2분", "minutes", -2) == "2분 전"
-        assert self.locale._format_relative("한시간", "hour", -1) == "한시간 전"
-        assert self.locale._format_relative("2시간", "hours", -2) == "2시간 전"
-        assert self.locale._format_relative("하루", "day", -1) == "어제"
-        assert self.locale._format_relative("2일", "days", -2) == "그제"
-        assert self.locale._format_relative("3일", "days", -3) == "그끄제"
-        assert self.locale._format_relative("4일", "days", -4) == "4일 전"
-        assert self.locale._format_relative("1주", "week", -1) == "1주 전"
-        assert self.locale._format_relative("2주", "weeks", -2) == "2주 전"
-        assert self.locale._format_relative("한달", "month", -1) == "한달 전"
-        assert self.locale._format_relative("2개월", "months", -2) == "2개월 전"
-        assert self.locale._format_relative("1년", "year", -1) == "작년"
-        assert self.locale._format_relative("2년", "years", -2) == "제작년"
-        assert self.locale._format_relative("3년", "years", -3) == "3년 전"
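# Editorial sketch (not part of the vendored file being removed): the private
# _format_timeframe/_format_relative tables asserted above are what arrow's
# public humanize() API consults, so the same Korean strings are reachable
# without touching private methods. Expected outputs are taken from the
# assertions above; the anchor date is arbitrary.
import arrow

base = arrow.utcnow()
print(base.shift(hours=-1).humanize(base, locale="ko"))  # "한시간 전"
print(base.shift(days=2).humanize(base, locale="ko"))    # "모레" (special day word)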
- - def test_ordinal_number(self): - assert self.locale.ordinal_number(0) == "0번째" - assert self.locale.ordinal_number(1) == "첫번째" - assert self.locale.ordinal_number(2) == "두번째" - assert self.locale.ordinal_number(3) == "세번째" - assert self.locale.ordinal_number(4) == "네번째" - assert self.locale.ordinal_number(5) == "다섯번째" - assert self.locale.ordinal_number(6) == "여섯번째" - assert self.locale.ordinal_number(7) == "일곱번째" - assert self.locale.ordinal_number(8) == "여덟번째" - assert self.locale.ordinal_number(9) == "아홉번째" - assert self.locale.ordinal_number(10) == "열번째" - assert self.locale.ordinal_number(11) == "11번째" - assert self.locale.ordinal_number(12) == "12번째" - assert self.locale.ordinal_number(100) == "100번째" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py deleted file mode 100644 index 9fb4e68f3c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py +++ /dev/null @@ -1,1657 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -import calendar -import os -import time -from datetime import datetime - -import pytest -from dateutil import tz - -import arrow -from arrow import formatter, parser -from arrow.constants import MAX_TIMESTAMP_US -from arrow.parser import DateTimeParser, ParserError, ParserMatchError - -from .utils import make_full_tz_list - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParser: - def test_parse_multiformat(self, mocker): - mocker.patch( - "arrow.parser.DateTimeParser.parse", - string="str", - fmt="fmt_a", - side_effect=parser.ParserError, - ) - - with pytest.raises(parser.ParserError): - self.parser._parse_multiformat("str", ["fmt_a"]) - - mock_datetime = mocker.Mock() - mocker.patch( - "arrow.parser.DateTimeParser.parse", - string="str", - fmt="fmt_b", - return_value=mock_datetime, - ) - - result = self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"]) - assert result == mock_datetime - - def test_parse_multiformat_all_fail(self, mocker): - mocker.patch( - "arrow.parser.DateTimeParser.parse", - string="str", - fmt="fmt_a", - side_effect=parser.ParserError, - ) - - mocker.patch( - "arrow.parser.DateTimeParser.parse", - string="str", - fmt="fmt_b", - side_effect=parser.ParserError, - ) - - with pytest.raises(parser.ParserError): - self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"]) - - def test_parse_multiformat_unself_expected_fail(self, mocker): - class UnselfExpectedError(Exception): - pass - - mocker.patch( - "arrow.parser.DateTimeParser.parse", - string="str", - fmt="fmt_a", - side_effect=UnselfExpectedError, - ) - - with pytest.raises(UnselfExpectedError): - self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"]) - - def test_parse_token_nonsense(self): - parts = {} - self.parser._parse_token("NONSENSE", "1900", parts) - assert parts == {} - - def test_parse_token_invalid_meridians(self): - parts = {} - self.parser._parse_token("A", "a..m", parts) - assert parts == {} - self.parser._parse_token("a", "p..m", parts) - assert parts == {} - - def test_parser_no_caching(self, mocker): - - mocked_parser = mocker.patch( - "arrow.parser.DateTimeParser._generate_pattern_re", fmt="fmt_a" - ) - self.parser = parser.DateTimeParser(cache_size=0) - for _ in range(100): - self.parser._generate_pattern_re("fmt_a") - assert mocked_parser.call_count == 100 - - def test_parser_1_line_caching(self, mocker): - mocked_parser = mocker.patch("arrow.parser.DateTimeParser._generate_pattern_re") - self.parser = 
parser.DateTimeParser(cache_size=1) - - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_a") - assert mocked_parser.call_count == 1 - assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a") - - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_b") - assert mocked_parser.call_count == 2 - assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b") - - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_a") - assert mocked_parser.call_count == 3 - assert mocked_parser.call_args_list[2] == mocker.call(fmt="fmt_a") - - def test_parser_multiple_line_caching(self, mocker): - mocked_parser = mocker.patch("arrow.parser.DateTimeParser._generate_pattern_re") - self.parser = parser.DateTimeParser(cache_size=2) - - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_a") - assert mocked_parser.call_count == 1 - assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a") - - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_b") - assert mocked_parser.call_count == 2 - assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b") - - # fmt_a and fmt_b are in the cache, so no new calls should be made - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_a") - for _ in range(100): - self.parser._generate_pattern_re(fmt="fmt_b") - assert mocked_parser.call_count == 2 - assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a") - assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b") - - def test_YY_and_YYYY_format_list(self): - - assert self.parser.parse("15/01/19", ["DD/MM/YY", "DD/MM/YYYY"]) == datetime( - 2019, 1, 15 - ) - - # Regression test for issue #580 - assert self.parser.parse("15/01/2019", ["DD/MM/YY", "DD/MM/YYYY"]) == datetime( - 2019, 1, 15 - ) - - assert ( - self.parser.parse( - "15/01/2019T04:05:06.789120Z", - ["D/M/YYThh:mm:ss.SZ", "D/M/YYYYThh:mm:ss.SZ"], - ) - == datetime(2019, 1, 15, 4, 5, 6, 789120, tzinfo=tz.tzutc()) - ) - - # regression test for issue #447 - def test_timestamp_format_list(self): - # should not match on the "X" token - assert ( - self.parser.parse( - "15 Jul 2000", - ["MM/DD/YYYY", "YYYY-MM-DD", "X", "DD-MMMM-YYYY", "D MMM YYYY"], - ) - == datetime(2000, 7, 15) - ) - - with pytest.raises(ParserError): - self.parser.parse("15 Jul", "X") - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserParse: - def test_parse_list(self, mocker): - - mocker.patch( - "arrow.parser.DateTimeParser._parse_multiformat", - string="str", - formats=["fmt_a", "fmt_b"], - return_value="result", - ) - - result = self.parser.parse("str", ["fmt_a", "fmt_b"]) - assert result == "result" - - def test_parse_unrecognized_token(self, mocker): - - mocker.patch.dict("arrow.parser.DateTimeParser._BASE_INPUT_RE_MAP") - del arrow.parser.DateTimeParser._BASE_INPUT_RE_MAP["YYYY"] - - # need to make another local parser to apply patch changes - _parser = parser.DateTimeParser() - with pytest.raises(parser.ParserError): - _parser.parse("2013-01-01", "YYYY-MM-DD") - - def test_parse_parse_no_match(self): - - with pytest.raises(ParserError): - self.parser.parse("01-01", "YYYY-MM-DD") - - def test_parse_separators(self): - - with pytest.raises(ParserError): - self.parser.parse("1403549231", "YYYY-MM-DD") - - def test_parse_numbers(self): - - self.expected = datetime(2012, 1, 1, 12, 5, 10) - assert ( - self.parser.parse("2012-01-01 12:05:10", "YYYY-MM-DD HH:mm:ss") - == self.expected - ) - - def test_parse_year_two_digit(self): - - self.expected = datetime(1979, 1, 1, 
12, 5, 10) - assert ( - self.parser.parse("79-01-01 12:05:10", "YY-MM-DD HH:mm:ss") == self.expected - ) - - def test_parse_timestamp(self): - - tz_utc = tz.tzutc() - int_timestamp = int(time.time()) - self.expected = datetime.fromtimestamp(int_timestamp, tz=tz_utc) - assert self.parser.parse("{:d}".format(int_timestamp), "X") == self.expected - - float_timestamp = time.time() - self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc) - assert self.parser.parse("{:f}".format(float_timestamp), "X") == self.expected - - # test handling of ns timestamp (arrow will round to 6 digits regardless) - self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc) - assert ( - self.parser.parse("{:f}123".format(float_timestamp), "X") == self.expected - ) - - # test ps timestamp (arrow will round to 6 digits regardless) - self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc) - assert ( - self.parser.parse("{:f}123456".format(float_timestamp), "X") - == self.expected - ) - - # NOTE: negative timestamps cannot be handled by datetime on Window - # Must use timedelta to handle them. ref: https://stackoverflow.com/questions/36179914 - if os.name != "nt": - # regression test for issue #662 - negative_int_timestamp = -int_timestamp - self.expected = datetime.fromtimestamp(negative_int_timestamp, tz=tz_utc) - assert ( - self.parser.parse("{:d}".format(negative_int_timestamp), "X") - == self.expected - ) - - negative_float_timestamp = -float_timestamp - self.expected = datetime.fromtimestamp(negative_float_timestamp, tz=tz_utc) - assert ( - self.parser.parse("{:f}".format(negative_float_timestamp), "X") - == self.expected - ) - - # NOTE: timestamps cannot be parsed from natural language strings (by removing the ^...$) because it will - # break cases like "15 Jul 2000" and a format list (see issue #447) - with pytest.raises(ParserError): - natural_lang_string = "Meet me at {} at the restaurant.".format( - float_timestamp - ) - self.parser.parse(natural_lang_string, "X") - - with pytest.raises(ParserError): - self.parser.parse("1565982019.", "X") - - with pytest.raises(ParserError): - self.parser.parse(".1565982019", "X") - - def test_parse_expanded_timestamp(self): - # test expanded timestamps that include milliseconds - # and microseconds as multiples rather than decimals - # requested in issue #357 - - tz_utc = tz.tzutc() - timestamp = 1569982581.413132 - timestamp_milli = int(round(timestamp * 1000)) - timestamp_micro = int(round(timestamp * 1000000)) - - # "x" token should parse integer timestamps below MAX_TIMESTAMP normally - self.expected = datetime.fromtimestamp(int(timestamp), tz=tz_utc) - assert self.parser.parse("{:d}".format(int(timestamp)), "x") == self.expected - - self.expected = datetime.fromtimestamp(round(timestamp, 3), tz=tz_utc) - assert self.parser.parse("{:d}".format(timestamp_milli), "x") == self.expected - - self.expected = datetime.fromtimestamp(timestamp, tz=tz_utc) - assert self.parser.parse("{:d}".format(timestamp_micro), "x") == self.expected - - # anything above max µs timestamp should fail - with pytest.raises(ValueError): - self.parser.parse("{:d}".format(int(MAX_TIMESTAMP_US) + 1), "x") - - # floats are not allowed with the "x" token - with pytest.raises(ParserMatchError): - self.parser.parse("{:f}".format(timestamp), "x") - - def test_parse_names(self): - - self.expected = datetime(2012, 1, 1) - - assert self.parser.parse("January 1, 2012", "MMMM D, YYYY") == self.expected - assert self.parser.parse("Jan 1, 2012", "MMM D, YYYY") == self.expected - - 
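# Editorial sketch (not part of the vendored file being removed): the parsing
# behaviour asserted above reduces to a handful of format tokens on the public
# arrow.get() entry point -- "X" accepts integer or float seconds, "x" accepts
# only integer "expanded" (milli/microsecond) timestamps, and MMMM/MMM match
# month names. Timestamp values here are illustrative.
import arrow

arrow.get("1569982581", "X")                   # whole seconds
arrow.get("1569982581.413132", "X")            # fractional seconds
arrow.get("1569982581413132", "x")             # microseconds as a plain integer
arrow.get("January 1, 2012", "MMMM D, YYYY")   # month-name tokens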
def test_parse_pm(self): - - self.expected = datetime(1, 1, 1, 13, 0, 0) - assert self.parser.parse("1 pm", "H a") == self.expected - assert self.parser.parse("1 pm", "h a") == self.expected - - self.expected = datetime(1, 1, 1, 1, 0, 0) - assert self.parser.parse("1 am", "H A") == self.expected - assert self.parser.parse("1 am", "h A") == self.expected - - self.expected = datetime(1, 1, 1, 0, 0, 0) - assert self.parser.parse("12 am", "H A") == self.expected - assert self.parser.parse("12 am", "h A") == self.expected - - self.expected = datetime(1, 1, 1, 12, 0, 0) - assert self.parser.parse("12 pm", "H A") == self.expected - assert self.parser.parse("12 pm", "h A") == self.expected - - def test_parse_tz_hours_only(self): - - self.expected = datetime(2025, 10, 17, 5, 30, 10, tzinfo=tz.tzoffset(None, 0)) - parsed = self.parser.parse("2025-10-17 05:30:10+00", "YYYY-MM-DD HH:mm:ssZ") - assert parsed == self.expected - - def test_parse_tz_zz(self): - - self.expected = datetime(2013, 1, 1, tzinfo=tz.tzoffset(None, -7 * 3600)) - assert self.parser.parse("2013-01-01 -07:00", "YYYY-MM-DD ZZ") == self.expected - - @pytest.mark.parametrize("full_tz_name", make_full_tz_list()) - def test_parse_tz_name_zzz(self, full_tz_name): - - self.expected = datetime(2013, 1, 1, tzinfo=tz.gettz(full_tz_name)) - assert ( - self.parser.parse("2013-01-01 {}".format(full_tz_name), "YYYY-MM-DD ZZZ") - == self.expected - ) - - # note that offsets are not timezones - with pytest.raises(ParserError): - self.parser.parse("2013-01-01 12:30:45.9+1000", "YYYY-MM-DDZZZ") - - with pytest.raises(ParserError): - self.parser.parse("2013-01-01 12:30:45.9+10:00", "YYYY-MM-DDZZZ") - - with pytest.raises(ParserError): - self.parser.parse("2013-01-01 12:30:45.9-10", "YYYY-MM-DDZZZ") - - def test_parse_subsecond(self): - self.expected = datetime(2013, 1, 1, 12, 30, 45, 900000) - assert ( - self.parser.parse("2013-01-01 12:30:45.9", "YYYY-MM-DD HH:mm:ss.S") - == self.expected - ) - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 980000) - assert ( - self.parser.parse("2013-01-01 12:30:45.98", "YYYY-MM-DD HH:mm:ss.SS") - == self.expected - ) - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987000) - assert ( - self.parser.parse("2013-01-01 12:30:45.987", "YYYY-MM-DD HH:mm:ss.SSS") - == self.expected - ) - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987600) - assert ( - self.parser.parse("2013-01-01 12:30:45.9876", "YYYY-MM-DD HH:mm:ss.SSSS") - == self.expected - ) - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987650) - assert ( - self.parser.parse("2013-01-01 12:30:45.98765", "YYYY-MM-DD HH:mm:ss.SSSSS") - == self.expected - ) - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654) - assert ( - self.parser.parse( - "2013-01-01 12:30:45.987654", "YYYY-MM-DD HH:mm:ss.SSSSSS" - ) - == self.expected - ) - - def test_parse_subsecond_rounding(self): - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654) - datetime_format = "YYYY-MM-DD HH:mm:ss.S" - - # round up - string = "2013-01-01 12:30:45.9876539" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # round down - string = "2013-01-01 12:30:45.98765432" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # round half-up - string = "2013-01-01 12:30:45.987653521" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # round half-down - 
string = "2013-01-01 12:30:45.9876545210" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # overflow (zero out the subseconds and increment the seconds) - # regression tests for issue #636 - def test_parse_subsecond_rounding_overflow(self): - datetime_format = "YYYY-MM-DD HH:mm:ss.S" - - self.expected = datetime(2013, 1, 1, 12, 30, 46) - string = "2013-01-01 12:30:45.9999995" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - self.expected = datetime(2013, 1, 1, 12, 31, 0) - string = "2013-01-01 12:30:59.9999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - self.expected = datetime(2013, 1, 2, 0, 0, 0) - string = "2013-01-01 23:59:59.9999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # 6 digits should remain unrounded - self.expected = datetime(2013, 1, 1, 12, 30, 45, 999999) - string = "2013-01-01 12:30:45.999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # Regression tests for issue #560 - def test_parse_long_year(self): - with pytest.raises(ParserError): - self.parser.parse("09 January 123456789101112", "DD MMMM YYYY") - - with pytest.raises(ParserError): - self.parser.parse("123456789101112 09 January", "YYYY DD MMMM") - - with pytest.raises(ParserError): - self.parser.parse("68096653015/01/19", "YY/M/DD") - - def test_parse_with_extra_words_at_start_and_end_invalid(self): - input_format_pairs = [ - ("blah2016", "YYYY"), - ("blah2016blah", "YYYY"), - ("2016blah", "YYYY"), - ("2016-05blah", "YYYY-MM"), - ("2016-05-16blah", "YYYY-MM-DD"), - ("2016-05-16T04:05:06.789120blah", "YYYY-MM-DDThh:mm:ss.S"), - ("2016-05-16T04:05:06.789120ZblahZ", "YYYY-MM-DDThh:mm:ss.SZ"), - ("2016-05-16T04:05:06.789120Zblah", "YYYY-MM-DDThh:mm:ss.SZ"), - ("2016-05-16T04:05:06.789120blahZ", "YYYY-MM-DDThh:mm:ss.SZ"), - ] - - for pair in input_format_pairs: - with pytest.raises(ParserError): - self.parser.parse(pair[0], pair[1]) - - def test_parse_with_extra_words_at_start_and_end_valid(self): - # Spaces surrounding the parsable date are ok because we - # allow the parsing of natural language input. Additionally, a single - # character of specific punctuation before or after the date is okay. - # See docs for full list of valid punctuation. 
- - assert self.parser.parse("blah 2016 blah", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("blah 2016", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("2016 blah", "YYYY") == datetime(2016, 1, 1) - - # test one additional space along with space divider - assert self.parser.parse( - "blah 2016-05-16 04:05:06.789120", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - "2016-05-16 04:05:06.789120 blah", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - # test one additional space along with T divider - assert self.parser.parse( - "blah 2016-05-16T04:05:06.789120", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - "2016-05-16T04:05:06.789120 blah", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert ( - self.parser.parse( - "Meet me at 2016-05-16T04:05:06.789120 at the restaurant.", - "YYYY-MM-DDThh:mm:ss.S", - ) - == datetime(2016, 5, 16, 4, 5, 6, 789120) - ) - - assert ( - self.parser.parse( - "Meet me at 2016-05-16 04:05:06.789120 at the restaurant.", - "YYYY-MM-DD hh:mm:ss.S", - ) - == datetime(2016, 5, 16, 4, 5, 6, 789120) - ) - - # regression test for issue #701 - # tests cases of a partial match surrounded by punctuation - # for the list of valid punctuation, see documentation - def test_parse_with_punctuation_fences(self): - assert self.parser.parse( - "Meet me at my house on Halloween (2019-31-10)", "YYYY-DD-MM" - ) == datetime(2019, 10, 31) - - assert self.parser.parse( - "Monday, 9. September 2019, 16:15-20:00", "dddd, D. MMMM YYYY" - ) == datetime(2019, 9, 9) - - assert self.parser.parse("A date is 11.11.2011.", "DD.MM.YYYY") == datetime( - 2011, 11, 11 - ) - - with pytest.raises(ParserMatchError): - self.parser.parse("11.11.2011.1 is not a valid date.", "DD.MM.YYYY") - - with pytest.raises(ParserMatchError): - self.parser.parse( - "This date has too many punctuation marks following it (11.11.2011).", - "DD.MM.YYYY", - ) - - def test_parse_with_leading_and_trailing_whitespace(self): - assert self.parser.parse(" 2016", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("2016 ", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse(" 2016 ", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse( - " 2016-05-16 04:05:06.789120 ", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - " 2016-05-16T04:05:06.789120 ", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - def test_parse_YYYY_DDDD(self): - assert self.parser.parse("1998-136", "YYYY-DDDD") == datetime(1998, 5, 16) - - assert self.parser.parse("1998-006", "YYYY-DDDD") == datetime(1998, 1, 6) - - with pytest.raises(ParserError): - self.parser.parse("1998-456", "YYYY-DDDD") - - def test_parse_YYYY_DDD(self): - assert self.parser.parse("1998-6", "YYYY-DDD") == datetime(1998, 1, 6) - - assert self.parser.parse("1998-136", "YYYY-DDD") == datetime(1998, 5, 16) - - with pytest.raises(ParserError): - self.parser.parse("1998-756", "YYYY-DDD") - - # month cannot be passed with DDD and DDDD tokens - def test_parse_YYYY_MM_DDDD(self): - with pytest.raises(ParserError): - self.parser.parse("2015-01-009", "YYYY-MM-DDDD") - - # year is required with the DDD and DDDD tokens - def test_parse_DDD_only(self): - with pytest.raises(ParserError): - self.parser.parse("5", "DDD") - - def test_parse_DDDD_only(self): - with pytest.raises(ParserError): - self.parser.parse("145", 
"DDDD") - - def test_parse_ddd_and_dddd(self): - fr_parser = parser.DateTimeParser("fr") - - # Day of week should be ignored when a day is passed - # 2019-10-17 is a Thursday, so we know day of week - # is ignored if the same date is outputted - expected = datetime(2019, 10, 17) - assert self.parser.parse("Tue 2019-10-17", "ddd YYYY-MM-DD") == expected - assert fr_parser.parse("mar 2019-10-17", "ddd YYYY-MM-DD") == expected - assert self.parser.parse("Tuesday 2019-10-17", "dddd YYYY-MM-DD") == expected - assert fr_parser.parse("mardi 2019-10-17", "dddd YYYY-MM-DD") == expected - - # Get first Tuesday after epoch - expected = datetime(1970, 1, 6) - assert self.parser.parse("Tue", "ddd") == expected - assert fr_parser.parse("mar", "ddd") == expected - assert self.parser.parse("Tuesday", "dddd") == expected - assert fr_parser.parse("mardi", "dddd") == expected - - # Get first Tuesday in 2020 - expected = datetime(2020, 1, 7) - assert self.parser.parse("Tue 2020", "ddd YYYY") == expected - assert fr_parser.parse("mar 2020", "ddd YYYY") == expected - assert self.parser.parse("Tuesday 2020", "dddd YYYY") == expected - assert fr_parser.parse("mardi 2020", "dddd YYYY") == expected - - # Get first Tuesday in February 2020 - expected = datetime(2020, 2, 4) - assert self.parser.parse("Tue 02 2020", "ddd MM YYYY") == expected - assert fr_parser.parse("mar 02 2020", "ddd MM YYYY") == expected - assert self.parser.parse("Tuesday 02 2020", "dddd MM YYYY") == expected - assert fr_parser.parse("mardi 02 2020", "dddd MM YYYY") == expected - - # Get first Tuesday in February after epoch - expected = datetime(1970, 2, 3) - assert self.parser.parse("Tue 02", "ddd MM") == expected - assert fr_parser.parse("mar 02", "ddd MM") == expected - assert self.parser.parse("Tuesday 02", "dddd MM") == expected - assert fr_parser.parse("mardi 02", "dddd MM") == expected - - # Times remain intact - expected = datetime(2020, 2, 4, 10, 25, 54, 123456, tz.tzoffset(None, -3600)) - assert ( - self.parser.parse( - "Tue 02 2020 10:25:54.123456-01:00", "ddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - fr_parser.parse( - "mar 02 2020 10:25:54.123456-01:00", "ddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - self.parser.parse( - "Tuesday 02 2020 10:25:54.123456-01:00", "dddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - fr_parser.parse( - "mardi 02 2020 10:25:54.123456-01:00", "dddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - - def test_parse_ddd_and_dddd_ignore_case(self): - # Regression test for issue #851 - expected = datetime(2019, 6, 24) - assert ( - self.parser.parse("MONDAY, June 24, 2019", "dddd, MMMM DD, YYYY") - == expected - ) - - def test_parse_ddd_and_dddd_then_format(self): - # Regression test for issue #446 - arw_formatter = formatter.DateTimeFormatter() - assert arw_formatter.format(self.parser.parse("Mon", "ddd"), "ddd") == "Mon" - assert ( - arw_formatter.format(self.parser.parse("Monday", "dddd"), "dddd") - == "Monday" - ) - assert arw_formatter.format(self.parser.parse("Tue", "ddd"), "ddd") == "Tue" - assert ( - arw_formatter.format(self.parser.parse("Tuesday", "dddd"), "dddd") - == "Tuesday" - ) - assert arw_formatter.format(self.parser.parse("Wed", "ddd"), "ddd") == "Wed" - assert ( - arw_formatter.format(self.parser.parse("Wednesday", "dddd"), "dddd") - == "Wednesday" - ) - assert arw_formatter.format(self.parser.parse("Thu", "ddd"), "ddd") == "Thu" - assert ( - arw_formatter.format(self.parser.parse("Thursday", "dddd"), "dddd") - == "Thursday" - ) - assert 
arw_formatter.format(self.parser.parse("Fri", "ddd"), "ddd") == "Fri" - assert ( - arw_formatter.format(self.parser.parse("Friday", "dddd"), "dddd") - == "Friday" - ) - assert arw_formatter.format(self.parser.parse("Sat", "ddd"), "ddd") == "Sat" - assert ( - arw_formatter.format(self.parser.parse("Saturday", "dddd"), "dddd") - == "Saturday" - ) - assert arw_formatter.format(self.parser.parse("Sun", "ddd"), "ddd") == "Sun" - assert ( - arw_formatter.format(self.parser.parse("Sunday", "dddd"), "dddd") - == "Sunday" - ) - - def test_parse_HH_24(self): - assert self.parser.parse( - "2019-10-30T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2019, 10, 31, 0, 0, 0, 0) - assert self.parser.parse("2019-10-30T24:00", "YYYY-MM-DDTHH:mm") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse("2019-10-30T24", "YYYY-MM-DDTHH") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse( - "2019-10-30T24:00:00.0", "YYYY-MM-DDTHH:mm:ss.S" - ) == datetime(2019, 10, 31, 0, 0, 0, 0) - assert self.parser.parse( - "2019-10-31T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2019, 11, 1, 0, 0, 0, 0) - assert self.parser.parse( - "2019-12-31T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2020, 1, 1, 0, 0, 0, 0) - assert self.parser.parse( - "2019-12-31T23:59:59.9999999", "YYYY-MM-DDTHH:mm:ss.S" - ) == datetime(2020, 1, 1, 0, 0, 0, 0) - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:01:00", "YYYY-MM-DDTHH:mm:ss") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:01", "YYYY-MM-DDTHH:mm:ss") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:00.1", "YYYY-MM-DDTHH:mm:ss.S") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:00.999999", "YYYY-MM-DDTHH:mm:ss.S") - - def test_parse_W(self): - - assert self.parser.parse("2011-W05-4", "W") == datetime(2011, 2, 3) - assert self.parser.parse("2011W054", "W") == datetime(2011, 2, 3) - assert self.parser.parse("2011-W05", "W") == datetime(2011, 1, 31) - assert self.parser.parse("2011W05", "W") == datetime(2011, 1, 31) - assert self.parser.parse("2011-W05-4T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - assert self.parser.parse("2011W054T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - assert self.parser.parse("2011-W05T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 1, 31, 14, 17, 1 - ) - assert self.parser.parse("2011W05T141701", "WTHHmmss") == datetime( - 2011, 1, 31, 14, 17, 1 - ) - assert self.parser.parse("2011W054T141701", "WTHHmmss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - - bad_formats = [ - "201W22", - "1995-W1-4", - "2001-W34-90", - "2001--W34", - "2011-W03--3", - "thstrdjtrsrd676776r65", - "2002-W66-1T14:17:01", - "2002-W23-03T14:17:01", - ] - - for fmt in bad_formats: - with pytest.raises(ParserError): - self.parser.parse(fmt, "W") - - def test_parse_normalize_whitespace(self): - assert self.parser.parse( - "Jun 1 2005 1:33PM", "MMM D YYYY H:mmA", normalize_whitespace=True - ) == datetime(2005, 6, 1, 13, 33) - - with pytest.raises(ParserError): - self.parser.parse("Jun 1 2005 1:33PM", "MMM D YYYY H:mmA") - - assert ( - self.parser.parse( - "\t 2013-05-05 T \n 12:30:45\t123456 \t \n", - "YYYY-MM-DD T HH:mm:ss S", - normalize_whitespace=True, - ) - == datetime(2013, 5, 5, 12, 30, 45, 123456) - ) - - with pytest.raises(ParserError): - self.parser.parse( - "\t 2013-05-05 T \n 12:30:45\t123456 \t \n", - "YYYY-MM-DD T HH:mm:ss S", - ) - - assert self.parser.parse( - " \n Jun 1\t 2005\n ", "MMM D YYYY", 
normalize_whitespace=True - ) == datetime(2005, 6, 1) - - with pytest.raises(ParserError): - self.parser.parse(" \n Jun 1\t 2005\n ", "MMM D YYYY") - - -@pytest.mark.usefixtures("dt_parser_regex") -class TestDateTimeParserRegex: - def test_format_year(self): - - assert self.format_regex.findall("YYYY-YY") == ["YYYY", "YY"] - - def test_format_month(self): - - assert self.format_regex.findall("MMMM-MMM-MM-M") == ["MMMM", "MMM", "MM", "M"] - - def test_format_day(self): - - assert self.format_regex.findall("DDDD-DDD-DD-D") == ["DDDD", "DDD", "DD", "D"] - - def test_format_hour(self): - - assert self.format_regex.findall("HH-H-hh-h") == ["HH", "H", "hh", "h"] - - def test_format_minute(self): - - assert self.format_regex.findall("mm-m") == ["mm", "m"] - - def test_format_second(self): - - assert self.format_regex.findall("ss-s") == ["ss", "s"] - - def test_format_subsecond(self): - - assert self.format_regex.findall("SSSSSS-SSSSS-SSSS-SSS-SS-S") == [ - "SSSSSS", - "SSSSS", - "SSSS", - "SSS", - "SS", - "S", - ] - - def test_format_tz(self): - - assert self.format_regex.findall("ZZZ-ZZ-Z") == ["ZZZ", "ZZ", "Z"] - - def test_format_am_pm(self): - - assert self.format_regex.findall("A-a") == ["A", "a"] - - def test_format_timestamp(self): - - assert self.format_regex.findall("X") == ["X"] - - def test_format_timestamp_milli(self): - - assert self.format_regex.findall("x") == ["x"] - - def test_escape(self): - - escape_regex = parser.DateTimeParser._ESCAPE_RE - - assert escape_regex.findall("2018-03-09 8 [h] 40 [hello]") == ["[h]", "[hello]"] - - def test_month_names(self): - p = parser.DateTimeParser("en_us") - - text = "_".join(calendar.month_name[1:]) - - result = p._input_re_map["MMMM"].findall(text) - - assert result == calendar.month_name[1:] - - def test_month_abbreviations(self): - p = parser.DateTimeParser("en_us") - - text = "_".join(calendar.month_abbr[1:]) - - result = p._input_re_map["MMM"].findall(text) - - assert result == calendar.month_abbr[1:] - - def test_digits(self): - - assert parser.DateTimeParser._ONE_OR_TWO_DIGIT_RE.findall("4-56") == ["4", "56"] - assert parser.DateTimeParser._ONE_OR_TWO_OR_THREE_DIGIT_RE.findall( - "4-56-789" - ) == ["4", "56", "789"] - assert parser.DateTimeParser._ONE_OR_MORE_DIGIT_RE.findall( - "4-56-789-1234-12345" - ) == ["4", "56", "789", "1234", "12345"] - assert parser.DateTimeParser._TWO_DIGIT_RE.findall("12-3-45") == ["12", "45"] - assert parser.DateTimeParser._THREE_DIGIT_RE.findall("123-4-56") == ["123"] - assert parser.DateTimeParser._FOUR_DIGIT_RE.findall("1234-56") == ["1234"] - - def test_tz(self): - tz_z_re = parser.DateTimeParser._TZ_Z_RE - assert tz_z_re.findall("-0700") == [("-", "07", "00")] - assert tz_z_re.findall("+07") == [("+", "07", "")] - assert tz_z_re.search("15/01/2019T04:05:06.789120Z") is not None - assert tz_z_re.search("15/01/2019T04:05:06.789120") is None - - tz_zz_re = parser.DateTimeParser._TZ_ZZ_RE - assert tz_zz_re.findall("-07:00") == [("-", "07", "00")] - assert tz_zz_re.findall("+07") == [("+", "07", "")] - assert tz_zz_re.search("15/01/2019T04:05:06.789120Z") is not None - assert tz_zz_re.search("15/01/2019T04:05:06.789120") is None - - tz_name_re = parser.DateTimeParser._TZ_NAME_RE - assert tz_name_re.findall("Europe/Warsaw") == ["Europe/Warsaw"] - assert tz_name_re.findall("GMT") == ["GMT"] - - def test_timestamp(self): - timestamp_re = parser.DateTimeParser._TIMESTAMP_RE - assert timestamp_re.findall("1565707550.452729") == ["1565707550.452729"] - assert timestamp_re.findall("-1565707550.452729") == 
["-1565707550.452729"] - assert timestamp_re.findall("-1565707550") == ["-1565707550"] - assert timestamp_re.findall("1565707550") == ["1565707550"] - assert timestamp_re.findall("1565707550.") == [] - assert timestamp_re.findall(".1565707550") == [] - - def test_timestamp_milli(self): - timestamp_expanded_re = parser.DateTimeParser._TIMESTAMP_EXPANDED_RE - assert timestamp_expanded_re.findall("-1565707550") == ["-1565707550"] - assert timestamp_expanded_re.findall("1565707550") == ["1565707550"] - assert timestamp_expanded_re.findall("1565707550.452729") == [] - assert timestamp_expanded_re.findall("1565707550.") == [] - assert timestamp_expanded_re.findall(".1565707550") == [] - - def test_time(self): - time_re = parser.DateTimeParser._TIME_RE - time_seperators = [":", ""] - - for sep in time_seperators: - assert time_re.findall("12") == [("12", "", "", "", "")] - assert time_re.findall("12{sep}35".format(sep=sep)) == [ - ("12", "35", "", "", "") - ] - assert time_re.findall("12{sep}35{sep}46".format(sep=sep)) == [ - ("12", "35", "46", "", "") - ] - assert time_re.findall("12{sep}35{sep}46.952313".format(sep=sep)) == [ - ("12", "35", "46", ".", "952313") - ] - assert time_re.findall("12{sep}35{sep}46,952313".format(sep=sep)) == [ - ("12", "35", "46", ",", "952313") - ] - - assert time_re.findall("12:") == [] - assert time_re.findall("12:35:46.") == [] - assert time_re.findall("12:35:46,") == [] - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserISO: - def test_YYYY(self): - - assert self.parser.parse_iso("2013") == datetime(2013, 1, 1) - - def test_YYYY_DDDD(self): - assert self.parser.parse_iso("1998-136") == datetime(1998, 5, 16) - - assert self.parser.parse_iso("1998-006") == datetime(1998, 1, 6) - - with pytest.raises(ParserError): - self.parser.parse_iso("1998-456") - - # 2016 is a leap year, so Feb 29 exists (leap day) - assert self.parser.parse_iso("2016-059") == datetime(2016, 2, 28) - assert self.parser.parse_iso("2016-060") == datetime(2016, 2, 29) - assert self.parser.parse_iso("2016-061") == datetime(2016, 3, 1) - - # 2017 is not a leap year, so Feb 29 does not exist - assert self.parser.parse_iso("2017-059") == datetime(2017, 2, 28) - assert self.parser.parse_iso("2017-060") == datetime(2017, 3, 1) - assert self.parser.parse_iso("2017-061") == datetime(2017, 3, 2) - - # Since 2016 is a leap year, the 366th day falls in the same year - assert self.parser.parse_iso("2016-366") == datetime(2016, 12, 31) - - # Since 2017 is not a leap year, the 366th day falls in the next year - assert self.parser.parse_iso("2017-366") == datetime(2018, 1, 1) - - def test_YYYY_DDDD_HH_mm_ssZ(self): - - assert self.parser.parse_iso("2013-036 04:05:06+01:00") == datetime( - 2013, 2, 5, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-036 04:05:06Z") == datetime( - 2013, 2, 5, 4, 5, 6, tzinfo=tz.tzutc() - ) - - def test_YYYY_MM_DDDD(self): - with pytest.raises(ParserError): - self.parser.parse_iso("2014-05-125") - - def test_YYYY_MM(self): - - for separator in DateTimeParser.SEPARATORS: - assert self.parser.parse_iso(separator.join(("2013", "02"))) == datetime( - 2013, 2, 1 - ) - - def test_YYYY_MM_DD(self): - - for separator in DateTimeParser.SEPARATORS: - assert self.parser.parse_iso( - separator.join(("2013", "02", "03")) - ) == datetime(2013, 2, 3) - - def test_YYYY_MM_DDTHH_mmZ(self): - - assert self.parser.parse_iso("2013-02-03T04:05+01:00") == datetime( - 2013, 2, 3, 4, 5, tzinfo=tz.tzoffset(None, 3600) - ) - - def 
test_YYYY_MM_DDTHH_mm(self): - - assert self.parser.parse_iso("2013-02-03T04:05") == datetime(2013, 2, 3, 4, 5) - - def test_YYYY_MM_DDTHH(self): - - assert self.parser.parse_iso("2013-02-03T04") == datetime(2013, 2, 3, 4) - - def test_YYYY_MM_DDTHHZ(self): - - assert self.parser.parse_iso("2013-02-03T04+01:00") == datetime( - 2013, 2, 3, 4, tzinfo=tz.tzoffset(None, 3600) - ) - - def test_YYYY_MM_DDTHH_mm_ssZ(self): - - assert self.parser.parse_iso("2013-02-03T04:05:06+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600) - ) - - def test_YYYY_MM_DDTHH_mm_ss(self): - - assert self.parser.parse_iso("2013-02-03T04:05:06") == datetime( - 2013, 2, 3, 4, 5, 6 - ) - - def test_YYYY_MM_DD_HH_mmZ(self): - - assert self.parser.parse_iso("2013-02-03 04:05+01:00") == datetime( - 2013, 2, 3, 4, 5, tzinfo=tz.tzoffset(None, 3600) - ) - - def test_YYYY_MM_DD_HH_mm(self): - - assert self.parser.parse_iso("2013-02-03 04:05") == datetime(2013, 2, 3, 4, 5) - - def test_YYYY_MM_DD_HH(self): - - assert self.parser.parse_iso("2013-02-03 04") == datetime(2013, 2, 3, 4) - - def test_invalid_time(self): - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03 044") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03 04:05:06.") - - def test_YYYY_MM_DD_HH_mm_ssZ(self): - - assert self.parser.parse_iso("2013-02-03 04:05:06+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600) - ) - - def test_YYYY_MM_DD_HH_mm_ss(self): - - assert self.parser.parse_iso("2013-02-03 04:05:06") == datetime( - 2013, 2, 3, 4, 5, 6 - ) - - def test_YYYY_MM_DDTHH_mm_ss_S(self): - - assert self.parser.parse_iso("2013-02-03T04:05:06.7") == datetime( - 2013, 2, 3, 4, 5, 6, 700000 - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.78") == datetime( - 2013, 2, 3, 4, 5, 6, 780000 - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.789") == datetime( - 2013, 2, 3, 4, 5, 6, 789000 - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.7891") == datetime( - 2013, 2, 3, 4, 5, 6, 789100 - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.78912") == datetime( - 2013, 2, 3, 4, 5, 6, 789120 - ) - - # ISO 8601:2004(E), ISO, 2004-12-01, 4.2.2.4 ... the decimal fraction - # shall be divided from the integer part by the decimal sign specified - # in ISO 31-0, i.e. the comma [,] or full stop [.]. Of these, the comma - # is the preferred sign. 
- assert self.parser.parse_iso("2013-02-03T04:05:06,789123678") == datetime( - 2013, 2, 3, 4, 5, 6, 789124 - ) - - # there is no limit on the number of decimal places - assert self.parser.parse_iso("2013-02-03T04:05:06.789123678") == datetime( - 2013, 2, 3, 4, 5, 6, 789124 - ) - - def test_YYYY_MM_DDTHH_mm_ss_SZ(self): - - assert self.parser.parse_iso("2013-02-03T04:05:06.7+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, 700000, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.78+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, 780000, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.789+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, 789000, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.7891+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, 789100, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-02-03T04:05:06.78912+01:00") == datetime( - 2013, 2, 3, 4, 5, 6, 789120, tzinfo=tz.tzoffset(None, 3600) - ) - - assert self.parser.parse_iso("2013-02-03 04:05:06.78912Z") == datetime( - 2013, 2, 3, 4, 5, 6, 789120, tzinfo=tz.tzutc() - ) - - def test_W(self): - - assert self.parser.parse_iso("2011-W05-4") == datetime(2011, 2, 3) - - assert self.parser.parse_iso("2011-W05-4T14:17:01") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - - assert self.parser.parse_iso("2011W054") == datetime(2011, 2, 3) - - assert self.parser.parse_iso("2011W054T141701") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - - def test_invalid_Z(self): - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912z") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912zz") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912Zz") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912ZZ") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912+Z") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912-Z") - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-02-03T04:05:06.78912 Z") - - def test_parse_subsecond(self): - self.expected = datetime(2013, 1, 1, 12, 30, 45, 900000) - assert self.parser.parse_iso("2013-01-01 12:30:45.9") == self.expected - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 980000) - assert self.parser.parse_iso("2013-01-01 12:30:45.98") == self.expected - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987000) - assert self.parser.parse_iso("2013-01-01 12:30:45.987") == self.expected - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987600) - assert self.parser.parse_iso("2013-01-01 12:30:45.9876") == self.expected - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987650) - assert self.parser.parse_iso("2013-01-01 12:30:45.98765") == self.expected - - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654) - assert self.parser.parse_iso("2013-01-01 12:30:45.987654") == self.expected - - # use comma as subsecond separator - self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654) - assert self.parser.parse_iso("2013-01-01 12:30:45,987654") == self.expected - - def test_gnu_date(self): - """Regression tests for parsing output from GNU date.""" - # date -Ins - assert self.parser.parse_iso("2016-11-16T09:46:30,895636557-0800") == datetime( - 2016, 11, 16, 9, 46, 30, 895636, tzinfo=tz.tzoffset(None, -3600 * 8) - ) - - # date --rfc-3339=ns - assert 
self.parser.parse_iso("2016-11-16 09:51:14.682141526-08:00") == datetime( - 2016, 11, 16, 9, 51, 14, 682142, tzinfo=tz.tzoffset(None, -3600 * 8) - ) - - def test_isoformat(self): - - dt = datetime.utcnow() - - assert self.parser.parse_iso(dt.isoformat()) == dt - - def test_parse_iso_normalize_whitespace(self): - assert self.parser.parse_iso( - "2013-036 \t 04:05:06Z", normalize_whitespace=True - ) == datetime(2013, 2, 5, 4, 5, 6, tzinfo=tz.tzutc()) - - with pytest.raises(ParserError): - self.parser.parse_iso("2013-036 \t 04:05:06Z") - - assert self.parser.parse_iso( - "\t 2013-05-05T12:30:45.123456 \t \n", normalize_whitespace=True - ) == datetime(2013, 5, 5, 12, 30, 45, 123456) - - with pytest.raises(ParserError): - self.parser.parse_iso("\t 2013-05-05T12:30:45.123456 \t \n") - - def test_parse_iso_with_leading_and_trailing_whitespace(self): - datetime_string = " 2016-11-15T06:37:19.123456" - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - datetime_string = " 2016-11-15T06:37:19.123456 " - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - datetime_string = "2016-11-15T06:37:19.123456 " - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - datetime_string = "2016-11-15T 06:37:19.123456" - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - # leading whitespace - datetime_string = " 2016-11-15 06:37:19.123456" - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - # trailing whitespace - datetime_string = "2016-11-15 06:37:19.123456 " - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - datetime_string = " 2016-11-15 06:37:19.123456 " - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - # two dividing spaces - datetime_string = "2016-11-15 06:37:19.123456" - with pytest.raises(ParserError): - self.parser.parse_iso(datetime_string) - - def test_parse_iso_with_extra_words_at_start_and_end_invalid(self): - test_inputs = [ - "blah2016", - "blah2016blah", - "blah 2016 blah", - "blah 2016", - "2016 blah", - "blah 2016-05-16 04:05:06.789120", - "2016-05-16 04:05:06.789120 blah", - "blah 2016-05-16T04:05:06.789120", - "2016-05-16T04:05:06.789120 blah", - "2016blah", - "2016-05blah", - "2016-05-16blah", - "2016-05-16T04:05:06.789120blah", - "2016-05-16T04:05:06.789120ZblahZ", - "2016-05-16T04:05:06.789120Zblah", - "2016-05-16T04:05:06.789120blahZ", - "Meet me at 2016-05-16T04:05:06.789120 at the restaurant.", - "Meet me at 2016-05-16 04:05:06.789120 at the restaurant.", - ] - - for ti in test_inputs: - with pytest.raises(ParserError): - self.parser.parse_iso(ti) - - def test_iso8601_basic_format(self): - assert self.parser.parse_iso("20180517") == datetime(2018, 5, 17) - - assert self.parser.parse_iso("20180517T10") == datetime(2018, 5, 17, 10) - - assert self.parser.parse_iso("20180517T105513.843456") == datetime( - 2018, 5, 17, 10, 55, 13, 843456 - ) - - assert self.parser.parse_iso("20180517T105513Z") == datetime( - 2018, 5, 17, 10, 55, 13, tzinfo=tz.tzutc() - ) - - assert self.parser.parse_iso("20180517T105513.843456-0700") == datetime( - 2018, 5, 17, 10, 55, 13, 843456, tzinfo=tz.tzoffset(None, -25200) - ) - - assert self.parser.parse_iso("20180517T105513-0700") == datetime( - 2018, 5, 17, 10, 55, 13, tzinfo=tz.tzoffset(None, -25200) - ) - - assert self.parser.parse_iso("20180517T105513-07") == datetime( - 2018, 5, 17, 10, 55, 13, tzinfo=tz.tzoffset(None, -25200) - ) - - # ordinal in basic format: YYYYDDDD - 
assert self.parser.parse_iso("1998136") == datetime(1998, 5, 16) - - # timezone requires +- seperator - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T1055130700") - - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T10551307") - - # too many digits in date - with pytest.raises(ParserError): - self.parser.parse_iso("201860517T105513Z") - - # too many digits in time - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T1055213Z") - - def test_midnight_end_day(self): - assert self.parser.parse_iso("2019-10-30T24:00:00") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-30T24:00") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-30T24:00:00.0") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-31T24:00:00") == datetime( - 2019, 11, 1, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-12-31T24:00:00") == datetime( - 2020, 1, 1, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-12-31T23:59:59.9999999") == datetime( - 2020, 1, 1, 0, 0, 0, 0 - ) - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:01:00") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:01") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:00.1") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:00.999999") - - -@pytest.mark.usefixtures("tzinfo_parser") -class TestTzinfoParser: - def test_parse_local(self): - - assert self.parser.parse("local") == tz.tzlocal() - - def test_parse_utc(self): - - assert self.parser.parse("utc") == tz.tzutc() - assert self.parser.parse("UTC") == tz.tzutc() - - def test_parse_iso(self): - - assert self.parser.parse("01:00") == tz.tzoffset(None, 3600) - assert self.parser.parse("11:35") == tz.tzoffset(None, 11 * 3600 + 2100) - assert self.parser.parse("+01:00") == tz.tzoffset(None, 3600) - assert self.parser.parse("-01:00") == tz.tzoffset(None, -3600) - - assert self.parser.parse("0100") == tz.tzoffset(None, 3600) - assert self.parser.parse("+0100") == tz.tzoffset(None, 3600) - assert self.parser.parse("-0100") == tz.tzoffset(None, -3600) - - assert self.parser.parse("01") == tz.tzoffset(None, 3600) - assert self.parser.parse("+01") == tz.tzoffset(None, 3600) - assert self.parser.parse("-01") == tz.tzoffset(None, -3600) - - def test_parse_str(self): - - assert self.parser.parse("US/Pacific") == tz.gettz("US/Pacific") - - def test_parse_fails(self): - - with pytest.raises(parser.ParserError): - self.parser.parse("fail") - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMonthName: - def test_shortmonth_capitalized(self): - - assert self.parser.parse("2013-Jan-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_shortmonth_allupper(self): - - assert self.parser.parse("2013-JAN-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_shortmonth_alllower(self): - - assert self.parser.parse("2013-jan-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_month_capitalized(self): - - assert self.parser.parse("2013-January-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_month_allupper(self): - - assert self.parser.parse("2013-JANUARY-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_month_alllower(self): - - assert self.parser.parse("2013-january-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_localized_month_name(self): - parser_ = parser.DateTimeParser("fr_fr") - - assert 
parser_.parse("2013-Janvier-01", "YYYY-MMMM-DD") == datetime(2013, 1, 1) - - def test_localized_month_abbreviation(self): - parser_ = parser.DateTimeParser("it_it") - - assert parser_.parse("2013-Gen-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMeridians: - def test_meridians_lowercase(self): - assert self.parser.parse("2013-01-01 5am", "YYYY-MM-DD ha") == datetime( - 2013, 1, 1, 5 - ) - - assert self.parser.parse("2013-01-01 5pm", "YYYY-MM-DD ha") == datetime( - 2013, 1, 1, 17 - ) - - def test_meridians_capitalized(self): - assert self.parser.parse("2013-01-01 5AM", "YYYY-MM-DD hA") == datetime( - 2013, 1, 1, 5 - ) - - assert self.parser.parse("2013-01-01 5PM", "YYYY-MM-DD hA") == datetime( - 2013, 1, 1, 17 - ) - - def test_localized_meridians_lowercase(self): - parser_ = parser.DateTimeParser("hu_hu") - assert parser_.parse("2013-01-01 5 de", "YYYY-MM-DD h a") == datetime( - 2013, 1, 1, 5 - ) - - assert parser_.parse("2013-01-01 5 du", "YYYY-MM-DD h a") == datetime( - 2013, 1, 1, 17 - ) - - def test_localized_meridians_capitalized(self): - parser_ = parser.DateTimeParser("hu_hu") - assert parser_.parse("2013-01-01 5 DE", "YYYY-MM-DD h A") == datetime( - 2013, 1, 1, 5 - ) - - assert parser_.parse("2013-01-01 5 DU", "YYYY-MM-DD h A") == datetime( - 2013, 1, 1, 17 - ) - - # regression test for issue #607 - def test_es_meridians(self): - parser_ = parser.DateTimeParser("es") - - assert parser_.parse( - "Junio 30, 2019 - 08:00 pm", "MMMM DD, YYYY - hh:mm a" - ) == datetime(2019, 6, 30, 20, 0) - - with pytest.raises(ParserError): - parser_.parse( - "Junio 30, 2019 - 08:00 pasdfasdfm", "MMMM DD, YYYY - hh:mm a" - ) - - def test_fr_meridians(self): - parser_ = parser.DateTimeParser("fr") - - # the French locale always uses a 24 hour clock, so it does not support meridians - with pytest.raises(ParserError): - parser_.parse("Janvier 30, 2019 - 08:00 pm", "MMMM DD, YYYY - hh:mm a") - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMonthOrdinalDay: - def test_english(self): - parser_ = parser.DateTimeParser("en_us") - - assert parser_.parse("January 1st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - assert parser_.parse("January 2nd, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 2 - ) - assert parser_.parse("January 3rd, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 3 - ) - assert parser_.parse("January 4th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 4 - ) - assert parser_.parse("January 11th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 11 - ) - assert parser_.parse("January 12th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 12 - ) - assert parser_.parse("January 13th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 13 - ) - assert parser_.parse("January 21st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 21 - ) - assert parser_.parse("January 31st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 31 - ) - - with pytest.raises(ParserError): - parser_.parse("January 1th, 2013", "MMMM Do, YYYY") - - with pytest.raises(ParserError): - parser_.parse("January 11st, 2013", "MMMM Do, YYYY") - - def test_italian(self): - parser_ = parser.DateTimeParser("it_it") - - assert parser_.parse("Gennaio 1º, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - - def test_spanish(self): - parser_ = parser.DateTimeParser("es_es") - - assert parser_.parse("Enero 1º, 2013", "MMMM Do, YYYY") == datetime(2013, 1, 1) - - def test_french(self): - parser_ = parser.DateTimeParser("fr_fr") - - assert 
parser_.parse("Janvier 1er, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - - assert parser_.parse("Janvier 2e, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 2 - ) - - assert parser_.parse("Janvier 11e, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 11 - ) - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserSearchDate: - def test_parse_search(self): - - assert self.parser.parse( - "Today is 25 of September of 2003", "DD of MMMM of YYYY" - ) == datetime(2003, 9, 25) - - def test_parse_search_with_numbers(self): - - assert self.parser.parse( - "2000 people met the 2012-01-01 12:05:10", "YYYY-MM-DD HH:mm:ss" - ) == datetime(2012, 1, 1, 12, 5, 10) - - assert self.parser.parse( - "Call 01-02-03 on 79-01-01 12:05:10", "YY-MM-DD HH:mm:ss" - ) == datetime(1979, 1, 1, 12, 5, 10) - - def test_parse_search_with_names(self): - - assert self.parser.parse("June was born in May 1980", "MMMM YYYY") == datetime( - 1980, 5, 1 - ) - - def test_parse_search_locale_with_names(self): - p = parser.DateTimeParser("sv_se") - - assert p.parse("Jan föddes den 31 Dec 1980", "DD MMM YYYY") == datetime( - 1980, 12, 31 - ) - - assert p.parse("Jag föddes den 25 Augusti 1975", "DD MMMM YYYY") == datetime( - 1975, 8, 25 - ) - - def test_parse_search_fails(self): - - with pytest.raises(parser.ParserError): - self.parser.parse("Jag föddes den 25 Augusti 1975", "DD MMMM YYYY") - - def test_escape(self): - - format = "MMMM D, YYYY [at] h:mma" - assert self.parser.parse( - "Thursday, December 10, 2015 at 5:09pm", format - ) == datetime(2015, 12, 10, 17, 9) - - format = "[MMMM] M D, YYYY [at] h:mma" - assert self.parser.parse("MMMM 12 10, 2015 at 5:09pm", format) == datetime( - 2015, 12, 10, 17, 9 - ) - - format = "[It happened on] MMMM Do [in the year] YYYY [a long time ago]" - assert self.parser.parse( - "It happened on November 25th in the year 1990 a long time ago", format - ) == datetime(1990, 11, 25) - - format = "[It happened on] MMMM Do [in the][ year] YYYY [a long time ago]" - assert self.parser.parse( - "It happened on November 25th in the year 1990 a long time ago", format - ) == datetime(1990, 11, 25) - - format = "[I'm][ entirely][ escaped,][ weee!]" - assert self.parser.parse("I'm entirely escaped, weee!", format) == datetime( - 1, 1, 1 - ) - - # Special RegEx characters - format = "MMM DD, YYYY |^${}().*+?<>-& h:mm A" - assert self.parser.parse( - "Dec 31, 2017 |^${}().*+?<>-& 2:00 AM", format - ) == datetime(2017, 12, 31, 2, 0) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py deleted file mode 100644 index e48b4de066..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py +++ /dev/null @@ -1,81 +0,0 @@ -# -*- coding: utf-8 -*- -import time -from datetime import datetime - -import pytest - -from arrow import util - - -class TestUtil: - def test_next_weekday(self): - # Get first Monday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 0) == datetime(1970, 1, 5) - - # Get first Tuesday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 1) == datetime(1970, 1, 6) - - # Get first Wednesday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 2) == datetime(1970, 1, 7) - - # Get first Thursday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 3) == datetime(1970, 1, 1) - - # Get first Friday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 4) == datetime(1970, 1, 2) - - # Get first Saturday after epoch - assert 
util.next_weekday(datetime(1970, 1, 1), 5) == datetime(1970, 1, 3) - - # Get first Sunday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 6) == datetime(1970, 1, 4) - - # Weekdays are 0-indexed - with pytest.raises(ValueError): - util.next_weekday(datetime(1970, 1, 1), 7) - - with pytest.raises(ValueError): - util.next_weekday(datetime(1970, 1, 1), -1) - - def test_total_seconds(self): - td = datetime(2019, 1, 1) - datetime(2018, 1, 1) - assert util.total_seconds(td) == td.total_seconds() - - def test_is_timestamp(self): - timestamp_float = time.time() - timestamp_int = int(timestamp_float) - - assert util.is_timestamp(timestamp_int) - assert util.is_timestamp(timestamp_float) - assert util.is_timestamp(str(timestamp_int)) - assert util.is_timestamp(str(timestamp_float)) - - assert not util.is_timestamp(True) - assert not util.is_timestamp(False) - - class InvalidTimestamp: - pass - - assert not util.is_timestamp(InvalidTimestamp()) - - full_datetime = "2019-06-23T13:12:42" - assert not util.is_timestamp(full_datetime) - - def test_normalize_timestamp(self): - timestamp = 1591161115.194556 - millisecond_timestamp = 1591161115194 - microsecond_timestamp = 1591161115194556 - - assert util.normalize_timestamp(timestamp) == timestamp - assert util.normalize_timestamp(millisecond_timestamp) == 1591161115.194 - assert util.normalize_timestamp(microsecond_timestamp) == 1591161115.194556 - - with pytest.raises(ValueError): - util.normalize_timestamp(3e17) - - def test_iso_gregorian(self): - with pytest.raises(ValueError): - util.iso_to_gregorian(2013, 0, 5) - - with pytest.raises(ValueError): - util.iso_to_gregorian(2013, 8, 0) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py deleted file mode 100644 index 2a048feb3f..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py +++ /dev/null @@ -1,16 +0,0 @@ -# -*- coding: utf-8 -*- -import pytz -from dateutil.zoneinfo import get_zonefile_instance - -from arrow import util - - -def make_full_tz_list(): - dateutil_zones = set(get_zonefile_instance().zones) - pytz_zones = set(pytz.all_timezones) - return dateutil_zones.union(pytz_zones) - - -def assert_datetime_equality(dt1, dt2, within=10): - assert dt1.tzinfo == dt2.tzinfo - assert abs(util.total_seconds(dt1 - dt2)) < within diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tox.ini b/openpype/modules/ftrack/python2_vendor/arrow/tox.ini deleted file mode 100644 index 46576b12e3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tox.ini +++ /dev/null @@ -1,53 +0,0 @@ -[tox] -minversion = 3.18.0 -envlist = py{py3,27,35,36,37,38,39},lint,docs -skip_missing_interpreters = true - -[gh-actions] -python = - pypy3: pypy3 - 2.7: py27 - 3.5: py35 - 3.6: py36 - 3.7: py37 - 3.8: py38 - 3.9: py39 - -[testenv] -deps = -rrequirements.txt -allowlist_externals = pytest -commands = pytest - -[testenv:lint] -basepython = python3 -skip_install = true -deps = pre-commit -commands = - pre-commit install - pre-commit run --all-files --show-diff-on-failure - -[testenv:docs] -basepython = python3 -skip_install = true -changedir = docs -deps = - doc8 - sphinx - python-dateutil -allowlist_externals = make -commands = - doc8 index.rst ../README.rst --extension .rst --ignore D001 - make html SPHINXOPTS="-W --keep-going" - -[pytest] -addopts = -v --cov-branch --cov=arrow --cov-fail-under=100 --cov-report=term-missing --cov-report=xml -testpaths = tests - -[isort] -line_length = 88 
-multi_line_output = 3
-include_trailing_comma = true
-
-[flake8]
-per-file-ignores = arrow/__init__.py:F401
-ignore = E203,E501,W503
diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
index 6e5dd056f3..b66e1f01e0 100644
--- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
+++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
@@ -121,7 +121,7 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin):
         publish_comment = self.format_publish_comment(instance)

         if not publish_comment:
-            self.log.info("Comment is not set.")
+            self.log.debug("Comment is not set.")
         else:
             self.log.debug("Comment is `{}`".format(publish_comment))

diff --git a/openpype/modules/launcher_action.py b/openpype/modules/launcher_action.py
index c4331b6094..5e14f25f76 100644
--- a/openpype/modules/launcher_action.py
+++ b/openpype/modules/launcher_action.py
@@ -1,3 +1,6 @@
+import os
+
+from openpype import PLUGINS_DIR, AYON_SERVER_ENABLED
 from openpype.modules import (
     OpenPypeModule,
     ITrayAction,
@@ -13,36 +16,66 @@ class LauncherAction(OpenPypeModule, ITrayAction):
         self.enabled = True

         # Tray attributes
-        self.window = None
+        self._window = None

     def tray_init(self):
-        self.create_window()
+        self._create_window()

-        self.add_doubleclick_callback(self.show_launcher)
+        self.add_doubleclick_callback(self._show_launcher)

     def tray_start(self):
         return

     def connect_with_modules(self, enabled_modules):
         # Register actions
-        if self.tray_initialized:
-            from openpype.tools.launcher import actions
-            actions.register_config_actions()
-            actions_paths = self.manager.collect_plugin_paths()["actions"]
-            actions.register_actions_from_paths(actions_paths)
-            actions.register_environment_actions()
-
-    def create_window(self):
-        if self.window:
+        if not self.tray_initialized:
             return
-        from openpype.tools.launcher import LauncherWindow
-        self.window = LauncherWindow()
+
+        from openpype.pipeline.actions import register_launcher_action_path
+
+        actions_dir = os.path.join(PLUGINS_DIR, "actions")
+        if os.path.exists(actions_dir):
+            register_launcher_action_path(actions_dir)
+
+        actions_paths = self.manager.collect_plugin_paths()["actions"]
+        for path in actions_paths:
+            if path and os.path.exists(path):
+                register_launcher_action_path(path)
+
+        paths_str = os.environ.get("AVALON_ACTIONS") or ""
+        if paths_str:
+            self.log.warning(
+                "WARNING: 'AVALON_ACTIONS' is deprecated. Support of this"
+                " environment variable will be removed in future versions."
+                " Please consider using 'OpenPypeModule' to define custom"
+                " action paths. Planned version to drop the support"
+                " is 3.17.2 or 3.18.0."
+            )
+
+        for path in paths_str.split(os.pathsep):
+            if path and os.path.exists(path):
+                register_launcher_action_path(path)

     def on_action_trigger(self):
-        self.show_launcher()
+        """Implementation for ITrayAction interface.

-    def show_launcher(self):
-        if self.window:
-            self.window.show()
-            self.window.raise_()
-            self.window.activateWindow()
+        Show launcher tool on action trigger.
+ """ + + self._show_launcher() + + def _create_window(self): + if self._window: + return + if AYON_SERVER_ENABLED: + from openpype.tools.ayon_launcher.ui import LauncherWindow + else: + from openpype.tools.launcher import LauncherWindow + self._window = LauncherWindow() + + def _show_launcher(self): + if self._window is None: + return + self._window.show() + self._window.raise_() + self._window.activateWindow() diff --git a/openpype/modules/log_viewer/log_view_module.py b/openpype/modules/log_viewer/log_view_module.py index e9dba2041c..1cafbe4fbd 100644 --- a/openpype/modules/log_viewer/log_view_module.py +++ b/openpype/modules/log_viewer/log_view_module.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayModule @@ -7,6 +8,8 @@ class LogViewModule(OpenPypeModule, ITrayModule): def initialize(self, modules_settings): logging_settings = modules_settings[self.name] self.enabled = logging_settings["enabled"] + if AYON_SERVER_ENABLED: + self.enabled = False # Tray attributes self.window = None diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/modules/muster/plugins/publish/submit_maya_muster.py similarity index 96% rename from openpype/hosts/maya/plugins/publish/submit_maya_muster.py rename to openpype/modules/muster/plugins/publish/submit_maya_muster.py index 1a6463fb9d..5c95744876 100644 --- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py +++ b/openpype/modules/muster/plugins/publish/submit_maya_muster.py @@ -25,6 +25,7 @@ def _get_template_id(renderer): :rtype: int """ + # TODO: Use settings from context? templates = get_system_settings()["modules"]["muster"]["templates_mapping"] if not templates: raise RuntimeError(("Muster template mapping missing in " @@ -215,9 +216,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): :rtype: int :raises: Exception if template ID isn't found """ - self.log.info("Trying to find template for [{}]".format(renderer)) + self.log.debug("Trying to find template for [{}]".format(renderer)) mapped = _get_template_id(renderer) - self.log.info("got id [{}]".format(mapped)) + self.log.debug("got id [{}]".format(mapped)) return self._templates.get(mapped) def _submit(self, payload): @@ -249,7 +250,6 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): Authenticate with Muster, collect all data, prepare path for post render publish job and submit job to farm. 
""" - instance.data["toBeRenderedOn"] = "muster" # setup muster environment self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") @@ -265,6 +265,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): context = instance.context workspace = context.data["workspaceDir"] + project_name = context.data["projectName"] + asset_name = context.data["asset"] filepath = None @@ -288,7 +290,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): comment = context.data.get("comment", "") scene = os.path.splitext(filename)[0] dirname = os.path.join(workspace, "renders") - renderlayer = instance.data['setMembers'] # rs_beauty + renderlayer = instance.data['renderlayer'] # rs_beauty renderlayer_name = instance.data['subset'] # beauty renderglobals = instance.data["renderGlobals"] # legacy_layers = renderlayer_globals["UseLegacyRenderLayers"] @@ -371,8 +373,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): "jobId": -1, "startOn": 0, "parentId": -1, - "project": os.environ.get('AVALON_PROJECT') or scene, - "shot": os.environ.get('AVALON_ASSET') or scene, + "project": project_name or scene, + "shot": asset_name or scene, "camera": instance.data.get("cameras")[0], "dependMode": 0, "packetSize": 4, @@ -452,8 +454,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): self.preflight_check(instance) - self.log.info("Submitting ...") - self.log.info(json.dumps(payload, indent=4, sort_keys=True)) + self.log.debug("Submitting ...") + self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) response = self._submit(payload) # response = requests.post(url, json=payload) @@ -546,3 +548,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin): "%f=%d was rounded off to nearest integer" % (value, int(value)) ) + + +# TODO: Remove hack to avoid this plug-in in new publisher +# This plug-in should actually be in dedicated module +if not os.environ.get("MUSTER_REST_URL"): + del MayaSubmitMuster diff --git a/openpype/hosts/maya/plugins/publish/validate_muster_connection.py b/openpype/modules/muster/plugins/publish/validate_muster_connection.py similarity index 100% rename from openpype/hosts/maya/plugins/publish/validate_muster_connection.py rename to openpype/modules/muster/plugins/publish/validate_muster_connection.py diff --git a/openpype/modules/project_manager_action.py b/openpype/modules/project_manager_action.py index 5f74dd9ee5..bf55e1544d 100644 --- a/openpype/modules/project_manager_action.py +++ b/openpype/modules/project_manager_action.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayAction @@ -11,6 +12,9 @@ class ProjectManagerAction(OpenPypeModule, ITrayAction): module_settings = modules_settings.get(self.name) if module_settings: enabled = module_settings.get("enabled", enabled) + + if AYON_SERVER_ENABLED: + enabled = False self.enabled = enabled # Tray attributes diff --git a/openpype/modules/royalrender/api.py b/openpype/modules/royalrender/api.py index de1dba8724..e610a0c8a8 100644 --- a/openpype/modules/royalrender/api.py +++ b/openpype/modules/royalrender/api.py @@ -3,10 +3,10 @@ import sys import os -from openpype.settings import get_project_settings from openpype.lib.local_settings import OpenPypeSettingsRegistry from openpype.lib import Logger, run_subprocess from .rr_job import RRJob, SubmitFile, SubmitterParameter +from openpype.lib.vendor_bin_utils import find_tool_in_custom_paths class Api: @@ -15,69 +15,57 @@ class Api: RR_SUBMIT_CONSOLE = 1 RR_SUBMIT_API = 2 - def __init__(self, settings, project=None): + def 
__init__(self, rr_path=None): self.log = Logger.get_logger("RoyalRender") - self._settings = settings - self._initialize_rr(project) + self._rr_path = rr_path + os.environ["RR_ROOT"] = rr_path - def _initialize_rr(self, project=None): - # type: (str) -> None - """Initialize RR Path. + @staticmethod + def get_rr_bin_path(rr_root, tool_name=None): + # type: (str, str) -> str + """Get path to RR bin folder. Args: - project (str, Optional): Project name to set RR api in - context. + tool_name (str): Name of RR executable you want. + rr_root (str): Custom RR root if needed. + + Returns: + str: Path to the tool based on current platform. """ - if project: - project_settings = get_project_settings(project) - rr_path = ( - project_settings - ["royalrender"] - ["rr_paths"] - ) - else: - rr_path = ( - self._settings - ["modules"] - ["royalrender"] - ["rr_path"] - ["default"] - ) - os.environ["RR_ROOT"] = rr_path - self._rr_path = rr_path - - def _get_rr_bin_path(self, rr_root=None): - # type: (str) -> str - """Get path to RR bin folder.""" - rr_root = rr_root or self._rr_path is_64bit_python = sys.maxsize > 2 ** 32 - rr_bin_path = "" + rr_bin_parts = [rr_root, "bin"] if sys.platform.lower() == "win32": - rr_bin_path = "/bin/win64" - if not is_64bit_python: - # we are using 64bit python - rr_bin_path = "/bin/win" - rr_bin_path = rr_bin_path.replace( - "/", os.path.sep - ) + rr_bin_parts.append("win") if sys.platform.lower() == "darwin": - rr_bin_path = "/bin/mac64" - if not is_64bit_python: - rr_bin_path = "/bin/mac" + rr_bin_parts.append("mac") - if sys.platform.lower() == "linux": - rr_bin_path = "/bin/lx64" + if sys.platform.lower().startswith("linux"): + rr_bin_parts.append("lx") - return os.path.join(rr_root, rr_bin_path) + rr_bin_path = os.sep.join(rr_bin_parts) + + paths_to_check = [] + # if we use 64bit python, append 64bit specific path first + if is_64bit_python: + if not tool_name: + return rr_bin_path + "64" + paths_to_check.append(rr_bin_path + "64") + + # otherwise use 32bit + if not tool_name: + return rr_bin_path + paths_to_check.append(rr_bin_path) + + return find_tool_in_custom_paths(paths_to_check, tool_name) def _initialize_module_path(self): # type: () -> None """Set RR modules for Python.""" # default for linux - rr_bin = self._get_rr_bin_path() + rr_bin = self.get_rr_bin_path(self._rr_path) rr_module_path = os.path.join(rr_bin, "lx64/lib") if sys.platform.lower() == "win32": @@ -91,51 +79,46 @@ class Api: sys.path.append(os.path.join(self._rr_path, rr_module_path)) - def create_submission(self, jobs, submitter_attributes, file_name=None): - # type: (list[RRJob], list[SubmitterParameter], str) -> SubmitFile + @staticmethod + def create_submission(jobs, submitter_attributes): + # type: (list[RRJob], list[SubmitterParameter]) -> SubmitFile """Create jobs submission file. Args: jobs (list): List of :class:`RRJob` submitter_attributes (list): List of submitter attributes :class:`SubmitterParameter` for whole submission batch. - file_name (str), optional): File path to write data to. Returns: str: XML data of job submission files. 
""" - raise NotImplementedError + return SubmitFile(SubmitterParameters=submitter_attributes, Jobs=jobs) def submit_file(self, file, mode=RR_SUBMIT_CONSOLE): # type: (SubmitFile, int) -> None if mode == self.RR_SUBMIT_CONSOLE: self._submit_using_console(file) + return - # RR v7 supports only Python 2.7 so we bail out in fear + # RR v7 supports only Python 2.7, so we bail out in fear # until there is support for Python 3 😰 raise NotImplementedError( "Submission via RoyalRender API is not supported yet") # self._submit_using_api(file) - def _submit_using_console(self, file): - # type: (SubmitFile) -> bool - rr_console = os.path.join( - self._get_rr_bin_path(), - "rrSubmitterconsole" - ) + def _submit_using_console(self, job_file): + # type: (SubmitFile) -> None + rr_start_local = self.get_rr_bin_path( + self._rr_path, "rrStartLocal") - if sys.platform.lower() == "darwin": - if "/bin/mac64" in rr_console: - rr_console = rr_console.replace("/bin/mac64", "/bin/mac") + self.log.info("rr_console: {}".format(rr_start_local)) - if sys.platform.lower() == "win32": - if "/bin/win64" in rr_console: - rr_console = rr_console.replace("/bin/win64", "/bin/win") - rr_console += ".exe" - - args = [rr_console, file] - run_subprocess(" ".join(args), logger=self.log) + args = [rr_start_local, "rrSubmitterconsole", job_file] + self.log.info("Executing: {}".format(" ".join(args))) + env = os.environ + env["RR_ROOT"] = self._rr_path + run_subprocess(args, logger=self.log, env=env) def _submit_using_api(self, file): # type: (SubmitFile) -> None diff --git a/openpype/modules/royalrender/lib.py b/openpype/modules/royalrender/lib.py new file mode 100644 index 0000000000..4708d25eed --- /dev/null +++ b/openpype/modules/royalrender/lib.py @@ -0,0 +1,304 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import os +import re +import platform +from datetime import datetime + +import pyblish.api +from openpype.tests.lib import is_in_tests +from openpype.pipeline.publish.lib import get_published_workfile_instance +from openpype.pipeline.publish import KnownPublishError +from openpype.modules.royalrender.api import Api as rrApi +from openpype.modules.royalrender.rr_job import ( + RRJob, CustomAttribute, get_rr_platform) +from openpype.lib import ( + is_running_from_build, + BoolDef, + NumberDef, +) +from openpype.pipeline import OpenPypePyblishPluginMixin + + +class BaseCreateRoyalRenderJob(pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin): + """Creates separate rendering job for Royal Render""" + label = "Create Nuke Render job in RR" + order = pyblish.api.IntegratorOrder + 0.1 + hosts = ["nuke"] + families = ["render", "prerender"] + targets = ["local"] + optional = True + + priority = 50 + chunk_size = 1 + concurrent_tasks = 1 + use_gpu = True + use_published = True + + @classmethod + def get_attribute_defs(cls): + return [ + NumberDef( + "priority", + label="Priority", + default=cls.priority, + decimals=0 + ), + NumberDef( + "chunk", + label="Frames Per Task", + default=cls.chunk_size, + decimals=0, + minimum=1, + maximum=1000 + ), + NumberDef( + "concurrency", + label="Concurrency", + default=cls.concurrent_tasks, + decimals=0, + minimum=1, + maximum=10 + ), + BoolDef( + "use_gpu", + default=cls.use_gpu, + label="Use GPU" + ), + BoolDef( + "suspend_publish", + default=False, + label="Suspend publish" + ), + BoolDef( + "use_published", + default=cls.use_published, + label="Use published workfile" + ) + ] + + def __init__(self, *args, **kwargs): + self._rr_root = None + self.scene_path = 
None + self.job = None + self.submission_parameters = None + self.rr_api = None + + def process(self, instance): + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + + instance.data["attributeValues"] = self.get_attr_values_from_data( + instance.data) + + # add suspend_publish attributeValue to instance data + instance.data["suspend_publish"] = instance.data["attributeValues"][ + "suspend_publish"] + + context = instance.context + + self._rr_root = self._resolve_rr_path(context, instance.data.get( + "rrPathName")) # noqa + self.log.debug(self._rr_root) + if not self._rr_root: + raise KnownPublishError( + ("Missing RoyalRender root. " + "You need to configure RoyalRender module.")) + + self.rr_api = rrApi(self._rr_root) + + self.scene_path = context.data["currentFile"] + if self.use_published: + published_workfile = get_published_workfile_instance(context) + + # fallback if nothing was set + if published_workfile is None: + self.log.warning("Falling back to workfile") + file_path = context.data["currentFile"] + else: + workfile_repre = published_workfile.data["representations"][0] + file_path = workfile_repre["published_path"] + + self.scene_path = file_path + self.log.info( + "Using published scene for render {}".format(self.scene_path) + ) + + if not instance.data.get("expectedFiles"): + instance.data["expectedFiles"] = [] + + if not instance.data.get("rrJobs"): + instance.data["rrJobs"] = [] + + def get_job(self, instance, script_path, render_path, node_name): + """Get RR job based on current instance. + + Args: + script_path (str): Path to Nuke script. + render_path (str): Output path. + node_name (str): Name of the render node. + + Returns: + RRJob: RoyalRender Job instance. + + """ + start_frame = int(instance.data["frameStartHandle"]) + end_frame = int(instance.data["frameEndHandle"]) + + batch_name = os.path.basename(script_path) + jobname = "%s - %s" % (batch_name, instance.name) + if is_in_tests(): + batch_name += datetime.now().strftime("%d%m%Y%H%M%S") + + render_dir = os.path.normpath(os.path.dirname(render_path)) + output_filename_0 = self.pad_file_name(render_path, str(start_frame)) + file_name, file_ext = os.path.splitext( + os.path.basename(output_filename_0)) + + custom_attributes = [] + if is_running_from_build(): + custom_attributes = [ + CustomAttribute( + name="OpenPypeVersion", + value=os.environ.get("OPENPYPE_VERSION")) + ] + + # this will append expected files to instance as needed. 
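Aside: the comment above refers to the `expected_files()` helper defined later in this file; at its core it expands a `#`-padded output path across the frame range. A self-contained sketch of that expansion (hypothetical paths, slate handling omitted):

```python
import os


def expand_expected_files(path, start_frame, end_frame):
    """Expand a '#'-padded output path into one path per frame."""
    dir_name, file_name = os.path.split(path)

    # '#' padding -> printf-style pattern, e.g. '####' -> '%04d'
    if "#" in file_name:
        parts = file_name.split("#")
        file_name = parts[0] + "%0{}d".format(len(parts) - 1) + parts[-1]

    if "%" not in file_name:
        return [path]  # single file, not a sequence

    return [
        os.path.join(dir_name, file_name % frame).replace("\\", "/")
        for frame in range(start_frame, end_frame + 1)
    ]


print(expand_expected_files("/renders/beauty.####.exr", 1001, 1003))
# ['/renders/beauty.1001.exr', '/renders/beauty.1002.exr',
#  '/renders/beauty.1003.exr']
```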
+ expected_files = self.expected_files( + instance, render_path, start_frame, end_frame) + instance.data["expectedFiles"].extend(expected_files) + + job = RRJob( + Software="", + Renderer="", + SeqStart=int(start_frame), + SeqEnd=int(end_frame), + SeqStep=int(instance.data.get("byFrameStep", 1)), + SeqFileOffset=0, + Version=0, + SceneName=script_path, + IsActive=True, + ImageDir=render_dir.replace("\\", "/"), + ImageFilename=file_name, + ImageExtension=file_ext, + ImagePreNumberLetter="", + ImageSingleOutputFile=False, + SceneOS=get_rr_platform(), + Layer=node_name, + SceneDatabaseDir=script_path, + CustomSHotName=jobname, + CompanyProjectName=instance.context.data["projectName"], + ImageWidth=instance.data["resolutionWidth"], + ImageHeight=instance.data["resolutionHeight"], + CustomAttributes=custom_attributes + ) + + return job + + def update_job_with_host_specific(self, instance, job): + """Host specific mapping for RRJob""" + raise NotImplementedError + + @staticmethod + def _resolve_rr_path(context, rr_path_name): + # type: (pyblish.api.Context, str) -> str + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + try: + default_servers = rr_settings["rr_paths"] + project_servers = ( + context.data + ["project_settings"] + ["royalrender"] + ["rr_paths"] + ) + rr_servers = { + k: default_servers[k] + for k in project_servers + if k in default_servers + } + + except (AttributeError, KeyError): + # Handle situation were we had only one url for royal render. + return context.data["defaultRRPath"][platform.system().lower()] + + return rr_servers[rr_path_name][platform.system().lower()] + + def expected_files(self, instance, path, start_frame, end_frame): + """Get expected files. + + This function generate expected files from provided + path and start/end frames. + + It was taken from Deadline module, but this should be + probably handled better in collector to support more + flexible scenarios. + + Args: + instance (Instance) + path (str): Output path. + start_frame (int): Start frame. + end_frame (int): End frame. + + Returns: + list: List of expected files. + + """ + dir_name = os.path.dirname(path) + file = os.path.basename(path) + + expected_files = [] + + if "#" in file: + pparts = file.split("#") + padding = "%0{}d".format(len(pparts) - 1) + file = pparts[0] + padding + pparts[-1] + + if "%" not in file: + expected_files.append(path) + return expected_files + + if instance.data.get("slate"): + start_frame -= 1 + + expected_files.extend( + os.path.join(dir_name, (file % i)).replace("\\", "/") + for i in range(start_frame, (end_frame + 1)) + ) + return expected_files + + def pad_file_name(self, path, first_frame): + """Return output file path with #### for padding. + + RR requires the path to be formatted with # in place of numbers. 
+        For example `/path/to/render.####.png`
+
+        Args:
+            path (str): path to rendered image
+            first_frame (str): first frame from the representation, used to
+                cleanly replace the frame number with `#` padding
+
+        Returns:
+            str
+
+        """
+        self.log.debug("pad_file_name path: `{}`".format(path))
+        if "%" in path:
+            search_results = re.search(r"(%0)(\d)(d.)", path).groups()
+            self.log.debug("_ search_results: `{}`".format(search_results))
+            padding = int(search_results[1])
+            return re.sub(r"%0\dd", "#" * padding, path)
+        if "#" in path:
+            self.log.debug("already padded: `{}`".format(path))
+            return path
+
+        if first_frame:
+            padding = len(first_frame)
+            path = path.replace(first_frame, "#" * padding)
+
+        return path
diff --git a/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py b/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py
deleted file mode 100644
index 3ce95e0c50..0000000000
--- a/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Collect default Deadline server."""
-import pyblish.api
-
-
-class CollectDefaultRRPath(pyblish.api.ContextPlugin):
-    """Collect default Royal Render path."""
-
-    order = pyblish.api.CollectorOrder
-    label = "Default Royal Render Path"
-
-    def process(self, context):
-        try:
-            rr_module = context.data.get(
-                "openPypeModules")["royalrender"]
-        except AttributeError:
-            msg = "Cannot get OpenPype Royal Render module."
-            self.log.error(msg)
-            raise AssertionError(msg)
-
-        # get default deadline webservice url from deadline module
-        self.log.debug(rr_module.rr_paths)
-        context.data["defaultRRPath"] = rr_module.rr_paths["default"]  # noqa: E501
diff --git a/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py b/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py
index 6a3dc276f3..e978ce5bed 100644
--- a/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py
+++ b/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py
@@ -5,29 +5,31 @@ import pyblish.api
 class CollectRRPathFromInstance(pyblish.api.InstancePlugin):
     """Collect RR Path from instance."""

-    order = pyblish.api.CollectorOrder + 0.01
-    label = "Royal Render Path from the Instance"
-    families = ["rendering"]
+    order = pyblish.api.CollectorOrder
+    label = "Collect Royal Render path name from the Instance"
+    families = ["render", "prerender", "renderlayer"]

     def process(self, instance):
-        instance.data["rrPath"] = self._collect_rr_path(instance)
+        instance.data["rrPathName"] = self._collect_rr_path_name(instance)
         self.log.info(
-            "Using {} for submission.".format(instance.data["rrPath"]))
+            "Using '{}' for submission.".format(instance.data["rrPathName"]))

     @staticmethod
-    def _collect_rr_path(render_instance):
+    def _collect_rr_path_name(instance):
         # type: (pyblish.api.Instance) -> str
-        """Get Royal Render path from render instance."""
+        """Get Royal Render path name from render instance."""
         rr_settings = (
-            render_instance.context.data
+            instance.context.data
             ["system_settings"]
             ["modules"]
             ["royalrender"]
         )
+        if not instance.data.get("rrPaths"):
+            return "default"
         try:
             default_servers = rr_settings["rr_paths"]
             project_servers = (
-                render_instance.context.data
+                instance.context.data
                 ["project_settings"]
                 ["royalrender"]
                 ["rr_paths"]
@@ -40,10 +42,6 @@ class CollectRRPathFromInstance(pyblish.api.InstancePlugin):
         except (AttributeError, KeyError):
             # Handle situation where we had only one url for royal render.
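Aside: to make the `pad_file_name` fix above concrete, here is the same conversion as a free function with a couple of checks (hypothetical file names; the real method also logs via `self.log`):

```python
import re


def to_rr_padding(path, first_frame=None):
    """Sketch of the conversion pad_file_name performs."""
    # printf-style padding -> '#' padding, e.g. '%04d' -> '####'
    match = re.search(r"%0(\d)d", path)
    if match:
        return re.sub(r"%0\dd", "#" * int(match.group(1)), path)
    if "#" in path:
        return path  # already padded
    if first_frame:
        # explicit frame number -> '#' padding of the same width
        return path.replace(first_frame, "#" * len(first_frame))
    return path


assert to_rr_padding("beauty.%04d.exr") == "beauty.####.exr"
assert to_rr_padding("beauty.1001.exr", first_frame="1001") == "beauty.####.exr"
assert to_rr_padding("beauty.####.exr") == "beauty.####.exr"
```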
- return render_instance.context.data["defaultRRPath"] + return rr_settings["rr_paths"]["default"] - return rr_servers[ - list(rr_servers.keys())[ - int(render_instance.data.get("rrPaths")) - ] - ] + return list(rr_servers.keys())[int(instance.data.get("rrPaths"))] diff --git a/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py b/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py index 65af90e8a6..1bfee19e3d 100644 --- a/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py +++ b/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py @@ -71,7 +71,7 @@ class CollectSequencesFromJob(pyblish.api.ContextPlugin): """Gather file sequences from job directory. When "OPENPYPE_PUBLISH_DATA" environment variable is set these paths - (folders or .json files) are parsed for image sequences. Otherwise the + (folders or .json files) are parsed for image sequences. Otherwise, the current working directory is searched for file sequences. """ @@ -189,7 +189,7 @@ class CollectSequencesFromJob(pyblish.api.ContextPlugin): "families": list(families), "subset": subset, "asset": data.get( - "asset", legacy_io.Session["AVALON_ASSET"] + "asset", context.data["asset"] ), "stagingDir": root, "frameStart": start, diff --git a/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py new file mode 100644 index 0000000000..22d910b7cd --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py @@ -0,0 +1,42 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import os + +from maya.OpenMaya import MGlobal + +from openpype.modules.royalrender import lib +from openpype.pipeline.farm.tools import iter_expected_files + + +class CreateMayaRoyalRenderJob(lib.BaseCreateRoyalRenderJob): + label = "Create Maya Render job in RR" + hosts = ["maya"] + families = ["renderlayer"] + + def update_job_with_host_specific(self, instance, job): + job.Software = "Maya" + job.Version = "{0:.2f}".format(MGlobal.apiVersion() / 10000) + if instance.data.get("cameras"): + job.Camera = instance.data["cameras"][0].replace("'", '"') + workspace = instance.context.data["workspaceDir"] + job.SceneDatabaseDir = workspace + + return job + + def process(self, instance): + """Plugin entry point.""" + super(CreateMayaRoyalRenderJob, self).process(instance) + + expected_files = instance.data["expectedFiles"] + first_file_path = next(iter_expected_files(expected_files)) + output_dir = os.path.dirname(first_file_path) + instance.data["outputDir"] = output_dir + + layer = instance.data["setMembers"] # type: str + layer_name = layer.removeprefix("rs_") + + job = self.get_job(instance, self.scene_path, first_file_path, + layer_name) + job = self.update_job_with_host_specific(instance, job) + + instance.data["rrJobs"].append(job) diff --git a/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py new file mode 100644 index 0000000000..71daa6edf8 --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py @@ -0,0 +1,69 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import re + +from openpype.modules.royalrender import lib + + +class CreateNukeRoyalRenderJob(lib.BaseCreateRoyalRenderJob): + """Creates separate rendering job for Royal Render""" + label = "Create Nuke 
Render job in RR" + hosts = ["nuke"] + families = ["render", "prerender"] + + def process(self, instance): + super(CreateNukeRoyalRenderJob, self).process(instance) + + # redefinition of families + if "render" in instance.data["family"]: + instance.data["family"] = "write" + instance.data["families"].insert(0, "render2d") + elif "prerender" in instance.data["family"]: + instance.data["family"] = "write" + instance.data["families"].insert(0, "prerender") + + jobs = self.create_jobs(instance) + for job in jobs: + job = self.update_job_with_host_specific(instance, job) + + instance.data["rrJobs"].append(job) + + def update_job_with_host_specific(self, instance, job): + nuke_version = re.search( + r"\d+\.\d+", instance.context.data.get("hostVersion")) + + job.Software = "Nuke" + job.Version = nuke_version.group() + + return job + + def create_jobs(self, instance): + """Nuke creates multiple RR jobs - for baking etc.""" + # get output path + render_path = instance.data['path'] + script_path = self.scene_path + node = instance.data["transientData"]["node"] + + # main job + jobs = [ + self.get_job( + instance, + script_path, + render_path, + node.name() + ) + ] + + for baking_script in instance.data.get("bakingNukeScripts", []): + render_path = baking_script["bakeRenderPath"] + script_path = baking_script["bakeScriptPath"] + exe_node_name = baking_script["bakeWriteNodeName"] + + jobs.append(self.get_job( + instance, + script_path, + render_path, + exe_node_name + )) + + return jobs diff --git a/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py new file mode 100644 index 0000000000..3eb49a39ee --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py @@ -0,0 +1,286 @@ +# -*- coding: utf-8 -*- +"""Create publishing job on RoyalRender.""" +import os +import attr +import json +import re + +import pyblish.api + +from openpype.modules.royalrender.rr_job import ( + RRJob, + RREnvList, + get_rr_platform +) +from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline import ( + legacy_io, +) +from openpype.pipeline.farm.pyblish_functions import ( + create_skeleton_instance, + create_instances_for_aov, + attach_instances_to_subset, + prepare_representations, + create_metadata_path +) +from openpype.pipeline import publish + + +class CreatePublishRoyalRenderJob(pyblish.api.InstancePlugin, + publish.ColormanagedPyblishPluginMixin): + """Creates job which publishes rendered files to publish area. + + Job waits until all rendering jobs are finished, triggers `publish` command + where it reads from prepared .json file with metadata about what should + be published, renames prepared images and publishes them. + + When triggered it produces .log file next to .json file in work area. 
+ """ + label = "Create publish job in RR" + order = pyblish.api.IntegratorOrder + 0.2 + icon = "tractor" + targets = ["local"] + hosts = ["fusion", "maya", "nuke", "celaction", "aftereffects", "harmony"] + families = ["render.farm", "prerender.farm", + "renderlayer", "imagesequence", "vrayscene"] + aov_filter = {"maya": [r".*([Bb]eauty).*"], + "aftereffects": [r".*"], # for everything from AE + "harmony": [r".*"], # for everything from AE + "celaction": [r".*"]} + + skip_integration_repre_list = [] + + # mapping of instance properties to be transferred to new instance + # for every specified family + instance_transfer = { + "slate": ["slateFrames", "slate"], + "review": ["lutPath"], + "render2d": ["bakingNukeScripts", "version"], + "renderlayer": ["convertToScanline"] + } + + # list of family names to transfer to new family if present + families_transfer = ["render3d", "render2d", "ftrack", "slate"] + + environ_job_filter = [ + "OPENPYPE_METADATA_FILE" + ] + + environ_keys = [ + "FTRACK_API_USER", + "FTRACK_API_KEY", + "FTRACK_SERVER", + "AVALON_APP_NAME", + "OPENPYPE_USERNAME", + "OPENPYPE_SG_USER", + "OPENPYPE_MONGO" + ] + priority = 50 + + def process(self, instance): + context = instance.context + self.context = context + self.anatomy = instance.context.data["anatomy"] + + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + + instance_skeleton_data = create_skeleton_instance( + instance, + families_transfer=self.families_transfer, + instance_transfer=self.instance_transfer) + + do_not_add_review = False + if instance.data.get("review") is False: + self.log.debug("Instance has review explicitly disabled.") + do_not_add_review = True + + if isinstance(instance.data.get("expectedFiles")[0], dict): + instances = create_instances_for_aov( + instance, instance_skeleton_data, + self.aov_filter, self.skip_integration_repre_list, + do_not_add_review) + + else: + representations = prepare_representations( + instance_skeleton_data, + instance.data.get("expectedFiles"), + self.anatomy, + self.aov_filter, + self.skip_integration_repre_list, + do_not_add_review, + instance.context, + self + ) + + if "representations" not in instance_skeleton_data.keys(): + instance_skeleton_data["representations"] = [] + + # add representation + instance_skeleton_data["representations"] += representations + instances = [instance_skeleton_data] + + # attach instances to subset + if instance.data.get("attachTo"): + instances = attach_instances_to_subset( + instance.data.get("attachTo"), instances + ) + + self.log.info("Creating RoyalRender Publish job ...") + + if not instance.data.get("rrJobs"): + self.log.error(("There is no prior RoyalRender " + "job on the instance.")) + raise KnownPublishError( + "Can't create publish job without prior rendering jobs first") + + rr_job = self.get_job(instance, instances) + instance.data["rrJobs"].append(rr_job) + + # publish job file + publish_job = { + "asset": instance_skeleton_data["asset"], + "frameStart": instance_skeleton_data["frameStart"], + "frameEnd": instance_skeleton_data["frameEnd"], + "fps": instance_skeleton_data["fps"], + "source": instance_skeleton_data["source"], + "user": instance.context.data["user"], + "version": instance.context.data["version"], # workfile version + "intent": instance.context.data.get("intent"), + "comment": instance.context.data.get("comment"), + "job": attr.asdict(rr_job), + "session": legacy_io.Session.copy(), + "instances": instances + } + + metadata_path, rootless_metadata_path = \ + 
create_metadata_path(instance, self.anatomy)
+
+        self.log.info("Writing json file: {}".format(metadata_path))
+        with open(metadata_path, "w") as f:
+            json.dump(publish_job, f, indent=4, sort_keys=True)
+
+    def get_job(self, instance, instances):
+        """Create RR publishing job.
+
+        Based on provided original instance and additional instances,
+        create publishing job and return it to be submitted to farm.
+
+        Args:
+            instance (Instance): Original instance.
+            instances (list of Instance): List of instances to
+                be published on farm.
+
+        Returns:
+            RRJob: RoyalRender publish job.
+
+        """
+        data = instance.data.copy()
+        subset = data["subset"]
+        jobname = "Publish - {subset}".format(subset=subset)
+
+        # Transfer the environment from the original job to this dependent
+        # job, so they use the same environment
+        metadata_path, rootless_metadata_path = \
+            create_metadata_path(instance, self.anatomy)
+
+        anatomy_data = instance.context.data["anatomyData"]
+
+        environment = RREnvList({
+            "AVALON_PROJECT": anatomy_data["project"]["name"],
+            "AVALON_ASSET": anatomy_data["asset"],
+            "AVALON_TASK": anatomy_data["task"]["name"],
+            "OPENPYPE_USERNAME": anatomy_data["user"]
+        })
+
+        # add environments from self.environ_keys
+        for env_key in self.environ_keys:
+            if os.getenv(env_key):
+                environment[env_key] = os.environ[env_key]
+
+        # pass environment keys from self.environ_job_filter
+        # and collect all pre_ids to wait for
+        job_environ = {}
+        jobs_pre_ids = []
+        for job in instance.data["rrJobs"]:  # type: RRJob
+            if job.rrEnvList:
+                job_environ.update(
+                    dict(RREnvList.parse(job.rrEnvList))
+                )
+            jobs_pre_ids.append(job.PreID)
+
+        for env_j_key in self.environ_job_filter:
+            if job_environ.get(env_j_key):
+                environment[env_j_key] = job_environ[env_j_key]
+
+        priority = self.priority or instance.data.get("priority", 50)
+
+        # RR requires an absolute path, or all jobs won't show up in rrControl
+        abs_metadata_path = self.anatomy.fill_root(rootless_metadata_path)
+
+        # the command line is set in E01__OpenPype__PublishJob.cfg; here we
+        # only add logging
+        args = [
+            ">", os.path.join(os.path.dirname(abs_metadata_path),
+                              "rr_out.log"),
+            "2>&1"
+        ]
+
+        job = RRJob(
+            Software="OpenPype",
+            Renderer="Once",
+            SeqStart=1,
+            SeqEnd=1,
+            SeqStep=1,
+            SeqFileOffset=0,
+            Version=self._sanitize_version(os.environ.get("OPENPYPE_VERSION")),
+            SceneName=abs_metadata_path,
+            # command line arguments
+            CustomAddCmdFlags=" ".join(args),
+            IsActive=True,
+            ImageFilename="execOnce.file",
+            ImageDir="",
+            ImageExtension="",
+            ImagePreNumberLetter="",
+            SceneOS=get_rr_platform(),
+            rrEnvList=environment.serialize(),
+            Priority=priority,
+            CustomSHotName=jobname,
+            CompanyProjectName=instance.context.data["projectName"]
+        )
+
+        # add assembly jobs as dependencies
+        if instance.data.get("tileRendering"):
+            self.log.info("Adding tile assembly jobs as dependencies...")
+            job.WaitForPreIDs += instance.data.get("assemblySubmissionJobs")
+        elif instance.data.get("bakingSubmissionJobs"):
+            self.log.info("Adding baking submission jobs as dependencies...")
+            job.WaitForPreIDs += instance.data["bakingSubmissionJobs"]
+        else:
+            job.WaitForPreIDs += jobs_pre_ids
+
+        return job
+
+    def _sanitize_version(self, version):
+        """Returns version in format MAJOR.MINORPATCH
+
+        3.15.7-nightly.2 >> 3.157
+        """
+        VERSION_REGEX = re.compile(
+            r"(?P<major>0|[1-9]\d*)"
+            r"\.(?P<minor>0|[1-9]\d*)"
+            r"\.(?P<patch>0|[1-9]\d*)"
+            r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
+            r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
+ ) + + valid_parts = VERSION_REGEX.findall(version) + if len(valid_parts) != 1: + # Return invalid version with filled 'origin' attribute + return version + + # Unpack found version + major, minor, patch, pre, post = valid_parts[0] + + return "{}.{}{}".format(major, minor, patch) diff --git a/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py b/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py new file mode 100644 index 0000000000..8fc8604b83 --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py @@ -0,0 +1,131 @@ +# -*- coding: utf-8 -*- +"""Submit jobs to RoyalRender.""" +import tempfile +import platform + +import pyblish.api +from openpype.modules.royalrender.api import ( + RRJob, + Api as rrApi, + SubmitterParameter +) +from openpype.pipeline.publish import KnownPublishError + + +class SubmitJobsToRoyalRender(pyblish.api.ContextPlugin): + """Find all jobs, create submission XML and submit it to RoyalRender.""" + label = "Submit jobs to RoyalRender" + order = pyblish.api.IntegratorOrder + 0.3 + targets = ["local"] + + def __init__(self): + super(SubmitJobsToRoyalRender, self).__init__() + self._rr_root = None + self._rr_api = None + self._submission_parameters = [] + + def process(self, context): + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + + if rr_settings["enabled"] is not True: + self.log.warning("RoyalRender modules is disabled.") + return + + # iterate over all instances and try to find RRJobs + jobs = [] + instance_rr_path = None + for instance in context: + if isinstance(instance.data.get("rrJob"), RRJob): + jobs.append(instance.data.get("rrJob")) + if instance.data.get("rrJobs"): + if all( + isinstance(job, RRJob) + for job in instance.data.get("rrJobs")): + jobs += instance.data.get("rrJobs") + if instance.data.get("rrPathName"): + instance_rr_path = instance.data["rrPathName"] + + if jobs: + self._rr_root = self._resolve_rr_path(context, instance_rr_path) + if not self._rr_root: + raise KnownPublishError( + ("Missing RoyalRender root. 
" + "You need to configure RoyalRender module.")) + self._rr_api = rrApi(self._rr_root) + self._submission_parameters = self.get_submission_parameters() + self.process_submission(jobs) + return + + self.log.info("No RoyalRender jobs found") + + def process_submission(self, jobs): + # type: ([RRJob]) -> None + + idx_pre_id = 0 + for job in jobs: + job.PreID = idx_pre_id + if idx_pre_id > 0: + job.WaitForPreIDs.append(idx_pre_id - 1) + idx_pre_id += 1 + + submission = rrApi.create_submission( + jobs, + self._submission_parameters) + + xml = tempfile.NamedTemporaryFile(suffix=".xml", delete=False) + with open(xml.name, "w") as f: + f.write(submission.serialize()) + + self.log.info("submitting job(s) file: {}".format(xml.name)) + self._rr_api.submit_file(file=xml.name) + + def create_file(self, name, ext, contents=None): + temp = tempfile.NamedTemporaryFile( + dir=self.tempdir, + suffix=ext, + prefix=name + '.', + delete=False, + ) + + if contents: + with open(temp.name, 'w') as f: + f.write(contents) + + return temp.name + + def get_submission_parameters(self): + return [SubmitterParameter("RequiredMemory", "0")] + + @staticmethod + def _resolve_rr_path(context, rr_path_name): + # type: (pyblish.api.Context, str) -> str + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + try: + default_servers = rr_settings["rr_paths"] + project_servers = ( + context.data + ["project_settings"] + ["royalrender"] + ["rr_paths"] + ) + rr_servers = { + k: default_servers[k] + for k in project_servers + if k in default_servers + } + + except (AttributeError, KeyError): + # Handle situation were we had only one url for royal render. + return context.data["defaultRRPath"][platform.system().lower()] + + return rr_servers[rr_path_name][platform.system().lower()] diff --git a/openpype/modules/royalrender/rr_job.py b/openpype/modules/royalrender/rr_job.py index c660eceac7..b85ac592f8 100644 --- a/openpype/modules/royalrender/rr_job.py +++ b/openpype/modules/royalrender/rr_job.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Python wrapper for RoyalRender XML job file.""" +import sys from xml.dom import minidom as md import attr from collections import namedtuple, OrderedDict @@ -8,8 +9,36 @@ from collections import namedtuple, OrderedDict CustomAttribute = namedtuple("CustomAttribute", ["name", "value"]) +def get_rr_platform(): + # type: () -> str + """Returns name of platform used in rr jobs.""" + if sys.platform.lower() in ["win32", "win64"]: + return "windows" + elif sys.platform.lower() == "darwin": + return "mac" + else: + return "linux" + + +class RREnvList(dict): + def serialize(self): + # VariableA=ValueA~~~VariableB=ValueB + return "~~~".join( + ["{}={}".format(k, v) for k, v in sorted(self.items())]) + + @staticmethod + def parse(data): + # type: (str) -> RREnvList + """Parse rrEnvList string and return it as RREnvList object.""" + out = RREnvList() + for var in data.split("~~~"): + k, v = var.split("=") + out[k] = v + return out + + @attr.s -class RRJob: +class RRJob(object): """Mapping of Royal Render job file to a data class.""" # Required @@ -35,7 +64,7 @@ class RRJob: # Is the job enabled for submission? # enabled by default - IsActive = attr.ib() # type: str + IsActive = attr.ib() # type: bool # Sequence settings of this job SeqStart = attr.ib() # type: int @@ -60,7 +89,7 @@ class RRJob: # If you render a single file, e.g. Quicktime or Avi, then you have to # set this value. Videos have to be rendered at once on one client. 
- ImageSingleOutputFile = attr.ib(default="false") # type: str + ImageSingleOutputFile = attr.ib(default=False) # type: bool # Semi-Required (required for some render applications) # ----------------------------------------------------- @@ -87,7 +116,7 @@ class RRJob: # Frame Padding of the frame number in the rendered filename. # Some render config files are setting the padding at render time. - ImageFramePadding = attr.ib(default=None) # type: str + ImageFramePadding = attr.ib(default=None) # type: int # Some render applications support overriding the image format at # the render commandline. @@ -108,7 +137,7 @@ class RRJob: # jobs send from this machine. If a job with the PreID was found, then # this jobs waits for the other job. Note: This flag can be used multiple # times to wait for multiple jobs. - WaitForPreID = attr.ib(default=None) # type: int + WaitForPreIDs = attr.ib(factory=list) # type: list # List of submitter options per job # list item must be of `SubmitterParameter` type @@ -120,6 +149,9 @@ class RRJob: # list item must be of `CustomAttribute` named tuple CustomAttributes = attr.ib(factory=list) # type: list + # This is used to hold command line arguments for Execute job + CustomAddCmdFlags = attr.ib(default=None) # type: str + # Additional information for subsequent publish script and # for better display in rrControl UserName = attr.ib(default=None) # type: str @@ -129,6 +161,7 @@ class RRJob: CustomUserInfo = attr.ib(default=None) # type: str SubmitMachine = attr.ib(default=None) # type: str Color_ID = attr.ib(default=2) # type: int + CompanyProjectName = attr.ib(default=None) # type: str RequiredLicenses = attr.ib(default=None) # type: str @@ -137,6 +170,10 @@ class RRJob: TotalFrames = attr.ib(default=None) # type: int Tiled = attr.ib(default=None) # type: str + # Environment + # only used in RR 8.3 and newer + rrEnvList = attr.ib(default=None) # type: str + class SubmitterParameter: """Wrapper for Submitter Parameters.""" @@ -160,7 +197,7 @@ class SubmitterParameter: @attr.s -class SubmitFile: +class SubmitFile(object): """Class wrapping Royal Render submission XML file.""" # Syntax version of the submission file. @@ -169,11 +206,11 @@ class SubmitFile: # Delete submission file after processing DeleteXML = attr.ib(default=1) # type: int - # List of submitter options per job + # List of the submitter options per job. # list item must be of `SubmitterParameter` type SubmitterParameters = attr.ib(factory=list) # type: list - # List of job is submission batch. + # List of the jobs in submission batch. # list item must be of type `RRJob` Jobs = attr.ib(factory=list) # type: list @@ -225,7 +262,7 @@ class SubmitFile: # foo=bar~baz~goo self._process_submitter_parameters( self.SubmitterParameters, root, job_file) - + root.appendChild(job_file) for job in self.Jobs: # type: RRJob if not isinstance(job, RRJob): raise AttributeError( @@ -241,16 +278,28 @@ class SubmitFile: job, dict_factory=OrderedDict, filter=filter_data) serialized_job.pop("CustomAttributes") serialized_job.pop("SubmitterParameters") + # we are handling `WaitForPreIDs` separately. 
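Aside on two serialization details in `rr_job.py` above. First, `RREnvList` defines a `~~~`-separated wire format for the `rrEnvList` attribute; a round-trip makes it concrete (values are illustrative, and note that the simple `split("=")` in `parse` assumes values contain no `=`):

```python
from openpype.modules.royalrender.rr_job import RREnvList

env = RREnvList({
    "AVALON_PROJECT": "demo_project",
    "OPENPYPE_USERNAME": "artist",
})

serialized = env.serialize()
# Pairs are sorted by key and joined with "~~~":
assert serialized == "AVALON_PROJECT=demo_project~~~OPENPYPE_USERNAME=artist"

# parse() is the exact inverse for simple values
assert RREnvList.parse(serialized) == env
```

Second, the reason `WaitForPreIDs` is popped out of the generic attribute loop in `serialize()`: as the attribute docs above note, RR expects one `WaitForPreID` element per dependency rather than a single list-valued element. A minimal `minidom` sketch of that emission (the `Job` tag name is a stand-in, not necessarily the real element name):

```python
from xml.dom import minidom as md

root = md.Document()
xml_job = root.createElement("Job")  # stand-in for the real job element

for pre_id in [0, 1]:  # PreIDs of jobs this one waits for
    el = root.createElement("WaitForPreID")
    el.appendChild(root.createTextNode(str(pre_id)))
    xml_job.appendChild(el)

root.appendChild(xml_job)
print(root.toprettyxml(indent="\t"))
# prints (after the XML declaration) roughly:
# <Job>
#     <WaitForPreID>0</WaitForPreID>
#     <WaitForPreID>1</WaitForPreID>
# </Job>
```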
+ wait_pre_ids = serialized_job.pop("WaitForPreIDs", []) for custom_attr in job_custom_attributes: # type: CustomAttribute serialized_job["Custom{}".format( custom_attr.name)] = custom_attr.value for item, value in serialized_job.items(): - xml_attr = root.create(item) + xml_attr = root.createElement(item) xml_attr.appendChild( - root.createTextNode(value) + root.createTextNode(str(value)) ) xml_job.appendChild(xml_attr) + # WaitForPreID - can be used multiple times + for pre_id in wait_pre_ids: + xml_attr = root.createElement("WaitForPreID") + xml_attr.appendChild( + root.createTextNode(str(pre_id)) + ) + xml_job.appendChild(xml_attr) + + job_file.appendChild(xml_job) + return root.toprettyxml(indent="\t") diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png new file mode 100644 index 0000000000..68c5aec117 Binary files /dev/null and b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png differ diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg new file mode 100644 index 0000000000..864eeaf15a --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg @@ -0,0 +1,71 @@ +IconApp= E01__OpenPype.png +Name= OpenPype +rendererName= Once +Version= 1 +Version_Minor= 0 +Type=Execute +TYPEv9=Execute +ExecuteJobType=Once + + +################################# [Windows] [Linux] [Osx] ################################## + + +CommandLine=> + +CommandLine= + + +::win CommandLine= set "CUDA_VISIBLE_DEVICES=" +::lx CommandLine= setenv CUDA_VISIBLE_DEVICES +::osx CommandLine= setenv CUDA_VISIBLE_DEVICES + + +CommandLine= + + +CommandLine= + + +CommandLine= + + +CommandLine= "" --headless publish + --targets royalrender + --targets farm + + + +CommandLine= + + + + +################################## Render Settings ################################## + + + +################################## Submitter Settings ################################## +StartMultipleInstances= 0~0 +SceneFileExtension= *.json +AllowImageNameChange= 0 +AllowImageDirChange= 0 +SequenceDivide= 0~1 +PPSequenceCheck=0~0 +PPCreateSmallVideo=0~0 +PPCreateFullVideo=0~0 +AllowLocalRenderOut= 0~0 + + +################################## Client Settings ################################## + +IconApp=E01__OpenPype.png + +licenseFailLine= + +errorSearchLine= + +permanentErrorSearchLine = + +Frozen_MinCoreUsage=0.3 +Frozen_Minutes=30 diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc new file mode 100644 index 0000000000..ba38337340 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc @@ -0,0 +1,2 @@ +IconApp= E01__OpenPype.png +Name= OpenPype diff --git a/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg b/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg new file mode 100644 index 0000000000..07f7547d29 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg @@ -0,0 +1,12 @@ +[Windows] +Executable= openpype_console.exe +Path= OS; \OpenPype\*\openpype_console.exe +Path= 32; \OpenPype\*\openpype_console.exe + +[Linux] +Executable= openpype_console +Path= OS; 
/opt/openpype/*/openpype_console + +[Mac] +Executable= openpype_console +Path= OS; /Applications/OpenPype*/Content/MacOS/openpype_console diff --git a/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg new file mode 100644 index 0000000000..70f0bc2e24 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg @@ -0,0 +1,11 @@ +PrePostType= pre +CommandLine= + +CommandLine= rrPythonconsole" > "render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py" + +CommandLine= + + +CommandLine= "" +CommandLine= + diff --git a/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py new file mode 100644 index 0000000000..891de9594c --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py @@ -0,0 +1,4 @@ +# -*- coding: utf-8 -*- +import os + +os.environ["OPENYPYPE_TESTVAR"] = "OpenPype was here" diff --git a/openpype/modules/settings_action.py b/openpype/modules/settings_action.py index 90092a133d..5950fbd910 100644 --- a/openpype/modules/settings_action.py +++ b/openpype/modules/settings_action.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayAction @@ -10,6 +11,8 @@ class SettingsAction(OpenPypeModule, ITrayAction): def initialize(self, _modules_settings): # This action is always enabled self.enabled = True + if AYON_SERVER_ENABLED: + self.enabled = False # User role # TODO should be changeable @@ -80,6 +83,8 @@ class LocalSettingsAction(OpenPypeModule, ITrayAction): def initialize(self, _modules_settings): # This action is always enabled self.enabled = True + if AYON_SERVER_ENABLED: + self.enabled = False # Tray attributes self.settings_window = None diff --git a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py index 0b03ac2e5d..db2e4eadc5 100644 --- a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py +++ b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py @@ -1,7 +1,5 @@ -import os - import pyblish.api -from openpype.lib.mongo import OpenPypeMongoConnection +from openpype.client.mongo import OpenPypeMongoConnection class CollectShotgridEntities(pyblish.api.ContextPlugin): @@ -14,7 +12,7 @@ class CollectShotgridEntities(pyblish.api.ContextPlugin): avalon_project = context.data.get("projectEntity") avalon_asset = context.data.get("assetEntity") - avalon_task_name = os.getenv("AVALON_TASK") + avalon_task_name = context.data.get("task") self.log.info(avalon_project) self.log.info(avalon_asset) diff --git a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py index 0f4bc22a34..891c92bb7a 100644 --- a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py +++ b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook from openpype_modules.slack import SLACK_MODULE_DIR @@ -8,6 +8,7 @@ class PrePython2Support(PreLaunchHook): Path to vendor modules is added to the beginning of PYTHONPATH. 
""" + launch_types = set() def execute(self): if not self.application.use_python_2: diff --git a/openpype/modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py index 86c97586d2..4c5a39318a 100644 --- a/openpype/modules/slack/plugins/publish/integrate_slack_api.py +++ b/openpype/modules/slack/plugins/publish/integrate_slack_api.py @@ -350,6 +350,10 @@ class SlackPython3Operations(AbstractSlackOperations): self.log.warning("Cannot pull user info, " "mentions won't work", exc_info=True) return [], [] + except Exception: + self.log.warning("Cannot pull user info, " + "mentions won't work", exc_info=True) + return [], [] return users, groups @@ -377,8 +381,12 @@ class SlackPython3Operations(AbstractSlackOperations): return response.data["ts"], file_ids except SlackApiError as e: # # You will get a SlackApiError if "ok" is False - error_str = self._enrich_error(str(e.response["error"]), channel) - self.log.warning("Error happened {}".format(error_str)) + if e.response.get("error"): + error_str = self._enrich_error(str(e.response["error"]), channel) + else: + error_str = self._enrich_error(str(e), channel) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) except Exception as e: error_str = self._enrich_error(str(e), channel) self.log.warning("Not SlackAPI error", exc_info=True) @@ -448,12 +456,14 @@ class SlackPython2Operations(AbstractSlackOperations): if response.get("error"): error_str = self._enrich_error(str(response.get("error")), channel) - self.log.warning("Error happened: {}".format(error_str)) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) else: return response["ts"], file_ids except Exception as e: # You will get a SlackApiError if "ok" is False error_str = self._enrich_error(str(e), channel) - self.log.warning("Error happened: {}".format(error_str)) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) return None, [] diff --git a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py index bbc220945c..bdb4b109a1 100644 --- a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py +++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py @@ -1,12 +1,9 @@ import os import shutil +import filecmp -from openpype.client.entities import ( - get_representations, - get_project -) - -from openpype.lib import PreLaunchHook +from openpype.client.entities import get_representations +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.lib.profiles_filtering import filter_profiles from openpype.modules.sync_server.sync_server import ( download_last_published_workfile, @@ -32,6 +29,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "maya", "harmony", "celaction", "flame", "fusion", "houdini", "tvpaint"] + launch_types = {LaunchTypes.local} def execute(self): """Check if local workfile doesn't exist, else copy it. 
@@ -119,6 +117,18 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "task": {"name": task_name, "type": task_type} } + # Add version filter + workfile_version = self.launch_context.data.get("workfile_version", -1) + if workfile_version > 0 and workfile_version not in {None, "last"}: + context_filters["version"] = self.launch_context.data[ + "workfile_version" + ] + + # Only one version will be matched + version_index = 0 + else: + version_index = workfile_version + workfile_representations = list(get_representations( project_name, context_filters=context_filters @@ -136,9 +146,10 @@ class CopyLastPublishedWorkfile(PreLaunchHook): lambda r: r["context"].get("version") is not None, workfile_representations ) - workfile_representation = max( + # Get workfile version + workfile_representation = sorted( filtered_repres, key=lambda r: r["context"]["version"] - ) + )[version_index] # Copy file and substitute path last_published_workfile_path = download_last_published_workfile( @@ -184,3 +195,69 @@ class CopyLastPublishedWorkfile(PreLaunchHook): self.data["last_workfile_path"] = local_workfile_path # Keep source filepath for further path conformation self.data["source_filepath"] = last_published_workfile_path + + # Get resources directory + resources_dir = os.path.join( + os.path.dirname(local_workfile_path), 'resources' + ) + # Make resource directory if it doesn't exist + if not os.path.exists(resources_dir): + os.mkdir(resources_dir) + + # Copy resources to the local resources directory + for file in workfile_representation['files']: + # Get resource main path + resource_main_path = anatomy.fill_root(file["path"]) + + # Get resource file basename + resource_basename = os.path.basename(resource_main_path) + + # Only copy if the resource file exists, and it's not the workfile + if ( + not os.path.exists(resource_main_path) + or resource_basename == os.path.basename( + last_published_workfile_path + ) + ): + continue + + # Get resource path in workfile folder + resource_work_path = os.path.join( + resources_dir, resource_basename + ) + + # Check if the resource file already exists in the resources folder + if os.path.exists(resource_work_path): + # Check if both files are the same + if filecmp.cmp(resource_main_path, resource_work_path): + self.log.warning( + 'Resource "{}" already exists.' + .format(resource_basename) + ) + continue + else: + # Add `.old` to existing resource path + resource_path_old = resource_work_path + '.old' + if os.path.exists(resource_work_path + '.old'): + for i in range(1, 100): + p = resource_path_old + '%02d' % i + if not os.path.exists(p): + # Rename existing resource file to + # `resource_name.old` + 2 digits + shutil.move(resource_work_path, p) + break + else: + self.log.warning( + 'There are a hundred old files for ' + 'resource "{}". 
' + 'Perhaps is it time to clean up your ' + 'resources folder' + .format(resource_basename) + ) + continue + else: + # Rename existing resource file to `resource_name.old` + shutil.move(resource_work_path, resource_path_old) + + # Copy resource file to workfile resources folder + shutil.copy(resource_main_path, resources_dir) diff --git a/openpype/plugins/load/add_site.py b/openpype/modules/sync_server/plugins/load/add_site.py similarity index 100% rename from openpype/plugins/load/add_site.py rename to openpype/modules/sync_server/plugins/load/add_site.py diff --git a/openpype/plugins/load/remove_site.py b/openpype/modules/sync_server/plugins/load/remove_site.py similarity index 100% rename from openpype/plugins/load/remove_site.py rename to openpype/modules/sync_server/plugins/load/remove_site.py diff --git a/openpype/modules/sync_server/sync_server.py b/openpype/modules/sync_server/sync_server.py index 98065b68a0..1b7b2dc3a6 100644 --- a/openpype/modules/sync_server/sync_server.py +++ b/openpype/modules/sync_server/sync_server.py @@ -536,8 +536,8 @@ class SyncServerThread(threading.Thread): _site_is_working(self.module, project_name, remote_site, remote_site_config)]): self.log.debug( - "Some of the sites {} - {} is not working properly".format( - local_site, remote_site + "Some of the sites {} - {} in {} is not working properly".format( # noqa + local_site, remote_site, project_name ) ) diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index b85b045bd9..8a92697920 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -15,7 +15,7 @@ from openpype.client import ( get_representations, get_representation_by_id, ) -from openpype.modules import OpenPypeModule, ITrayModule +from openpype.modules import OpenPypeModule, ITrayModule, IPluginPaths from openpype.settings import ( get_project_settings, get_system_settings, @@ -34,12 +34,17 @@ from openpype.settings.constants import ( from .providers.local_drive import LocalDriveHandler from .providers import lib -from .utils import time_function, SyncStatus, SiteAlreadyPresentError +from .utils import ( + time_function, + SyncStatus, + SiteAlreadyPresentError, + SYNC_SERVER_ROOT, +) log = Logger.get_logger("SyncServer") -class SyncServerModule(OpenPypeModule, ITrayModule): +class SyncServerModule(OpenPypeModule, ITrayModule, IPluginPaths): """ Synchronization server that is syncing published files from local to any of implemented providers (like GDrive, S3 etc.) @@ -136,6 +141,27 @@ class SyncServerModule(OpenPypeModule, ITrayModule): # projects that long tasks are running on self.projects_processed = set() + def get_plugin_paths(self): + """Deadline plugin paths.""" + return { + "load": [os.path.join(SYNC_SERVER_ROOT, "plugins", "load")] + } + + def get_site_icons(self): + """Icons for sites. + + Returns: + dict[str, str]: Path to icon by site. 
+ """ + + resource_path = os.path.join( + SYNC_SERVER_ROOT, "providers", "resources" + ) + return { + provider: "{}/{}.png".format(resource_path, provider) + for provider in ["studio", "local_drive", "gdrive"] + } + """ Start of Public API """ def add_site(self, project_name, representation_id, site_name=None, force=False, priority=None, reset_timer=False): @@ -204,6 +230,58 @@ class SyncServerModule(OpenPypeModule, ITrayModule): if remove_local_files: self._remove_local_file(project_name, representation_id, site_name) + def get_progress_for_repre(self, doc, active_site, remote_site): + """ + Calculates average progress for representation. + If site has created_dt >> fully available >> progress == 1 + Could be calculated in aggregate if it would be too slow + Args: + doc(dict): representation dict + Returns: + (dict) with active and remote sites progress + {'studio': 1.0, 'gdrive': -1} - gdrive site is not present + -1 is used to highlight the site should be added + {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not + uploaded yet + """ + progress = {active_site: -1, + remote_site: -1} + if not doc: + return progress + + files = {active_site: 0, remote_site: 0} + doc_files = doc.get("files") or [] + for doc_file in doc_files: + if not isinstance(doc_file, dict): + continue + + sites = doc_file.get("sites") or [] + for site in sites: + if ( + # Pype 2 compatibility + not isinstance(site, dict) + # Check if site name is one of progress sites + or site["name"] not in progress + ): + continue + + files[site["name"]] += 1 + norm_progress = max(progress[site["name"]], 0) + if site.get("created_dt"): + progress[site["name"]] = norm_progress + 1 + elif site.get("progress"): + progress[site["name"]] = norm_progress + site["progress"] + else: # site exists, might be failed, do not add again + progress[site["name"]] = 0 + + # for example 13 fully avail. files out of 26 >> 13/26 = 0.5 + avg_progress = {} + avg_progress[active_site] = \ + progress[active_site] / max(files[active_site], 1) + avg_progress[remote_site] = \ + progress[remote_site] / max(files[remote_site], 1) + return avg_progress + def compute_resource_sync_sites(self, project_name): """Get available resource sync sites state for publish process. 
@@ -845,10 +923,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule): (str): full absolut path to directory with hooks for the module """ - return os.path.join( - os.path.dirname(os.path.abspath(__file__)), - "launch_hooks" - ) + return os.path.join(SYNC_SERVER_ROOT, "launch_hooks") # Needs to be refactored after Settings are updated # # Methods for Settings to get appriate values to fill forms diff --git a/openpype/modules/sync_server/utils.py b/openpype/modules/sync_server/utils.py index 4caa01e9d7..b2f855539f 100644 --- a/openpype/modules/sync_server/utils.py +++ b/openpype/modules/sync_server/utils.py @@ -1,9 +1,12 @@ +import os import time from openpype.lib import Logger log = Logger.get_logger("SyncServer") +SYNC_SERVER_ROOT = os.path.dirname(os.path.abspath(__file__)) + class ResumableError(Exception): """Error which could be temporary, skip current loop, try next time""" diff --git a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py index d6ae013403..76c3cca33e 100644 --- a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py +++ b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py @@ -1,4 +1,4 @@ -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostStartTimerHook(PostLaunchHook): @@ -7,6 +7,7 @@ class PostStartTimerHook(PostLaunchHook): This module requires enabled TimerManager module. """ order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index d656d58adc..8f370d389b 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -13,6 +13,7 @@ from .create import ( BaseCreator, Creator, AutoCreator, + HiddenCreator, CreatedInstance, CreatorError, @@ -88,11 +89,12 @@ from .context_tools import ( deregister_host, get_process_id, + get_global_context, get_current_context, get_current_host_name, get_current_project_name, get_current_asset_name, - get_current_task_name, + get_current_task_name ) install = install_host uninstall = uninstall_host @@ -113,6 +115,7 @@ __all__ = ( "BaseCreator", "Creator", "AutoCreator", + "HiddenCreator", "CreatedInstance", "CreatorError", @@ -186,6 +189,7 @@ __all__ = ( "deregister_host", "get_process_id", + "get_global_context", "get_current_context", "get_current_host_name", "get_current_project_name", diff --git a/openpype/pipeline/actions.py b/openpype/pipeline/actions.py index b488fe3e1f..feb1bd05d2 100644 --- a/openpype/pipeline/actions.py +++ b/openpype/pipeline/actions.py @@ -20,7 +20,13 @@ class LauncherAction(object): log.propagate = True def is_compatible(self, session): - """Return whether the class is compatible with the Session.""" + """Return whether the class is compatible with the Session. + + Args: + session (dict[str, Union[str, None]]): Session data with + AVALON_PROJECT, AVALON_ASSET and AVALON_TASK. 
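+
+        Example (illustrative session values; the base implementation
+        accepts any session):
+            >>> session = {
+            ...     "AVALON_PROJECT": "demo_project",
+            ...     "AVALON_ASSET": "chr_hero",
+            ...     "AVALON_TASK": "modeling",
+            ... }
+            >>> action.is_compatible(session)  # any LauncherAction instance
+            True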
+ """ + return True def process(self, session, **kwargs): diff --git a/openpype/pipeline/anatomy.py b/openpype/pipeline/anatomy.py index 30748206a3..029b5cc1ff 100644 --- a/openpype/pipeline/anatomy.py +++ b/openpype/pipeline/anatomy.py @@ -5,17 +5,19 @@ import platform import collections import numbers +import ayon_api import six import time +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import ( get_local_settings, ) from openpype.settings.constants import ( DEFAULT_PROJECT_KEY ) - from openpype.client import get_project +from openpype.lib import Logger, get_local_site_id from openpype.lib.path_templates import ( TemplateUnsolved, TemplateResult, @@ -23,7 +25,6 @@ from openpype.lib.path_templates import ( TemplatesDict, FormatObject, ) -from openpype.lib.log import Logger from openpype.modules import ModulesManager log = Logger.get_logger(__name__) @@ -475,6 +476,13 @@ class Anatomy(BaseAnatomy): Union[Dict[str, str], None]): Local root overrides. """ + if AYON_SERVER_ENABLED: + if not project_name: + return + return ayon_api.get_project_roots_for_site( + project_name, get_local_site_id() + ) + if local_settings is None: local_settings = get_local_settings() diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 3f2d4891c1..2800050496 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -2,9 +2,12 @@ from copy import deepcopy import re import os import json -import platform import contextlib +import functools +import platform import tempfile +import warnings + from openpype import PACKAGE_DIR from openpype.settings import get_project_settings from openpype.lib import ( @@ -13,12 +16,65 @@ from openpype.lib import ( Logger ) from openpype.pipeline import Anatomy +from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS + log = Logger.get_logger(__name__) -class CashedData: +class CachedData: remapping = None + has_compatible_ocio_package = None + config_version_data = {} + ocio_config_colorspaces = {} + allowed_exts = { + ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS) + } + + +class DeprecatedWarning(DeprecationWarning): + pass + + +def deprecated(new_destination): + """Mark functions as deprecated. + + It will result in a warning being emitted when the function is used. + """ + + func = None + if callable(new_destination): + func = new_destination + new_destination = None + + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement." 
+ ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + warnings.simplefilter("always", DeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(decorated_func.__name__, warning_message), + category=DeprecatedWarning, + stacklevel=4 + ) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) @contextlib.contextmanager @@ -64,124 +120,264 @@ def get_ocio_config_script_path(): ) -def get_imageio_colorspace_from_filepath( - path, host_name, project_name, +def get_colorspace_name_from_filepath( + filepath, host_name, project_name, config_data=None, file_rules=None, project_settings=None, validate=True ): """Get colorspace name from filepath - ImageIO Settings file rules are tested for matching rule. - Args: - path (str): path string, file rule pattern is tested on it + filepath (str): path string, file rule pattern is tested on it host_name (str): host name project_name (str): project name - config_data (dict, optional): config path and template in dict. + config_data (Optional[dict]): config path and template in dict. Defaults to None. - file_rules (dict, optional): file rule data from settings. + file_rules (Optional[dict]): file rule data from settings. Defaults to None. - project_settings (dict, optional): project settings. Defaults to None. - validate (bool, optional): should resulting colorspace be validated - with config file? Defaults to True. + project_settings (Optional[dict]): project settings. Defaults to None. + validate (Optional[bool]): should resulting colorspace be validated + with config file? Defaults to True. 
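+
+    Example (illustrative path and result; the returned name depends on
+    the project's ImageIO file rules and config):
+        >>> get_colorspace_name_from_filepath(
+        ...     "/renders/sh010/beauty.1001.exr", "nuke", "demo_project"
+        ... )
+        'ACES - ACEScg'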
Returns: str: name of colorspace """ - if not any([config_data, file_rules]): - project_settings = project_settings or get_project_settings( - project_name - ) - config_data = get_imageio_config( - project_name, host_name, project_settings) + project_settings, config_data, file_rules = _get_context_settings( + host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) - # in case host color management is not enabled - if not config_data: - return None + if not config_data: + # in case global or host color management is not enabled + return None - file_rules = get_imageio_file_rules( - project_name, host_name, project_settings) + # use ImageIO file rules + colorspace_name = get_imageio_file_rules_colorspace_from_filepath( + filepath, host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) - # match file rule from path - colorspace_name = None - for _frule_name, file_rule in file_rules.items(): - pattern = file_rule["pattern"] - extension = file_rule["ext"] - ext_match = re.match( - r".*(?=.{})".format(extension), path - ) - file_match = re.search( - pattern, path - ) + # try to get colorspace from OCIO v2 file rules + if ( + not colorspace_name + and compatibility_check_config_version(config_data["path"], major=2) + ): + colorspace_name = get_config_file_rules_colorspace_from_filepath( + config_data["path"], filepath) - if ext_match and file_match: - colorspace_name = file_rule["colorspace"] + # use parse colorspace from filepath as fallback + colorspace_name = colorspace_name or parse_colorspace_from_filepath( + filepath, config_path=config_data["path"] + ) if not colorspace_name: log.info("No imageio file rule matched input path: '{}'".format( - path + filepath )) return None # validate matching colorspace with config - if validate and config_data: + if validate: validate_imageio_colorspace_in_config( config_data["path"], colorspace_name) return colorspace_name -def parse_colorspace_from_filepath( - path, host_name, project_name, - config_data=None, +# TODO: remove this in future - backward compatibility +@deprecated("get_imageio_file_rules_colorspace_from_filepath") +def get_imageio_colorspace_from_filepath(*args, **kwargs): + return get_imageio_file_rules_colorspace_from_filepath(*args, **kwargs) + +# TODO: remove this in future - backward compatibility +@deprecated("get_imageio_file_rules_colorspace_from_filepath") +def get_colorspace_from_filepath(*args, **kwargs): + return get_imageio_file_rules_colorspace_from_filepath(*args, **kwargs) + + +def _get_context_settings( + host_name, project_name, + config_data=None, file_rules=None, project_settings=None +): + project_settings = project_settings or get_project_settings( + project_name + ) + + config_data = config_data or get_imageio_config( + project_name, host_name, project_settings) + + # in case host color management is not enabled + if not config_data: + return (None, None, None) + + file_rules = file_rules or get_imageio_file_rules( + project_name, host_name, project_settings) + + return project_settings, config_data, file_rules + + +def get_imageio_file_rules_colorspace_from_filepath( + filepath, host_name, project_name, + config_data=None, file_rules=None, + project_settings=None +): + """Get colorspace name from filepath + + ImageIO Settings file rules are tested for matching rule. 
+ + Args: + filepath (str): path string, file rule pattern is tested on it + host_name (str): host name + project_name (str): project name + config_data (Optional[dict]): config path and template in dict. + Defaults to None. + file_rules (Optional[dict]): file rule data from settings. + Defaults to None. + project_settings (Optional[dict]): project settings. Defaults to None. + + Returns: + str: name of colorspace + """ + project_settings, config_data, file_rules = _get_context_settings( + host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) + + if not config_data: + # in case global or host color management is not enabled + return None + + # match file rule from path + colorspace_name = None + for file_rule in file_rules.values(): + pattern = file_rule["pattern"] + extension = file_rule["ext"] + ext_match = re.match( + r".*(?=.{})".format(extension), filepath + ) + file_match = re.search( + pattern, filepath + ) + + if ext_match and file_match: + colorspace_name = file_rule["colorspace"] + + return colorspace_name + + +def get_config_file_rules_colorspace_from_filepath(config_path, filepath): + """Get colorspace from file path wrapper. + + Wrapper function for getting colorspace from file path + with use of OCIO v2 file-rules. + + Args: + config_path (str): path leading to config.ocio file + filepath (str): path leading to a file + + Returns: + Any[str, None]: matching colorspace name + """ + if not compatibility_check(): + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + result_data = _get_wrapped_with_subprocess( + "colorspace", "get_config_file_rules_colorspace_from_filepath", + config_path=config_path, + filepath=filepath + ) + if result_data: + return result_data[0] + + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_config_file_rules_colorspace_from_filepath # noqa: E501 + + result_data = _get_config_file_rules_colorspace_from_filepath( + config_path, filepath) + + if result_data: + return result_data[0] + + +def parse_colorspace_from_filepath( + filepath, colorspaces=None, config_path=None ): """Parse colorspace name from filepath An input path can have colorspace name used as part of name or as folder name. + Example: + >>> config_path = "path/to/config.ocio" + >>> colorspaces = get_ocio_config_colorspaces(config_path) + >>> colorspace = parse_colorspace_from_filepath( + "path/to/file/acescg/file.exr", + colorspaces=colorspaces + ) + >>> print(colorspace) + acescg + Args: - path (str): path string - host_name (str): host name - project_name (str): project name - config_data (dict, optional): config path and template in dict. - Defaults to None. - project_settings (dict, optional): project settings. Defaults to None. 
+ filepath (str): path string + colorspaces (Optional[dict[str]]): list of colorspaces + config_path (Optional[str]): path to config.ocio file Returns: str: name of colorspace """ - if not config_data: - project_settings = project_settings or get_project_settings( - project_name + def _get_colorspace_match_regex(colorspaces): + """Return a regex pattern + + Allows to search a colorspace match in a filename + + Args: + colorspaces (list): List of colorspace names + + Returns: + re.Pattern: regex pattern + """ + pattern = "|".join( + # Allow to match spaces also as underscores because the + # integrator replaces spaces with underscores in filenames + re.escape(colorspace) for colorspace in + # Sort by longest first so the regex matches longer matches + # over smaller matches, e.g. matching 'Output - sRGB' over 'sRGB' + sorted(colorspaces, key=len, reverse=True) ) - config_data = get_imageio_config( - project_name, host_name, project_settings) + return re.compile(pattern) - config_path = config_data["path"] + if not colorspaces and not config_path: + raise ValueError( + "Must provide `config_path` if `colorspaces` is not provided." + ) - # match file rule from path - colorspace_name = None - colorspaces = get_ocio_config_colorspaces(config_path) - for colorspace_key in colorspaces: - # check underscored variant of colorspace name - # since we are reformatting it in integrate.py - if colorspace_key.replace(" ", "_") in path: - colorspace_name = colorspace_key - break - if colorspace_key in path: - colorspace_name = colorspace_key - break + colorspaces = colorspaces or get_ocio_config_colorspaces(config_path) + underscored_colorspaces = { + key.replace(" ", "_"): key for key in colorspaces + if " " in key + } - if not colorspace_name: - log.info("No matching colorspace in config '{}' for path: '{}'".format( - config_path, path - )) - return None + # match colorspace from filepath + regex_pattern = _get_colorspace_match_regex( + list(colorspaces) + list(underscored_colorspaces)) + match = regex_pattern.search(filepath) + colorspace = match.group(0) if match else None - return colorspace_name + if colorspace in underscored_colorspaces: + return underscored_colorspaces[colorspace] + + if colorspace: + return colorspace + + log.info("No matching colorspace in config '{}' for path: '{}'".format( + config_path, filepath + )) + return None def validate_imageio_colorspace_in_config(config_path, colorspace_name): @@ -206,42 +402,101 @@ def validate_imageio_colorspace_in_config(config_path, colorspace_name): return True +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_data_subprocess(config_path, data_type): - """Get data via subprocess + """[Deprecated] Get data via subprocess Wrapper for Python 2 hosts. Args: config_path (str): path leading to config.ocio file """ + return _get_wrapped_with_subprocess( + "config", data_type, in_path=config_path, + ) + + +def _get_wrapped_with_subprocess(command_group, command, **kwargs): + """Get data via subprocess + + Wrapper for Python 2 hosts. 
+ + Args: + command_group (str): command group name + command (str): command name + **kwargs: command arguments + + Returns: + Any[dict, None]: data + """ with _make_temp_json_file() as tmp_json_path: # Prepare subprocess arguments args = [ "run", get_ocio_config_script_path(), - "config", data_type, - "--in_path", config_path, - "--out_path", tmp_json_path - + command_group, command ] + + for key_, value_ in kwargs.items(): + args.extend(("--{}".format(key_), value_)) + + args.append("--out_path") + args.append(tmp_json_path) + log.info("Executing: {}".format(" ".join(args))) - process_kwargs = { - "logger": log - } - - run_openpype_process(*args, **process_kwargs) + run_openpype_process(*args, logger=log) # return all colorspaces - return_json_data = open(tmp_json_path).read() - return json.loads(return_json_data) + with open(tmp_json_path, "r") as f_: + return json.load(f_) +# TODO: this should be part of ocio_wrapper.py def compatibility_check(): """Making sure PyOpenColorIO is importable""" + if CachedData.has_compatible_ocio_package is not None: + return CachedData.has_compatible_ocio_package + try: import PyOpenColorIO # noqa: F401 + CachedData.has_compatible_ocio_package = True except ImportError: + CachedData.has_compatible_ocio_package = False + + # compatible + return CachedData.has_compatible_ocio_package + + +# TODO: this should be part of ocio_wrapper.py +def compatibility_check_config_version(config_path, major=1, minor=None): + """Making sure PyOpenColorIO config version is compatible""" + + if not CachedData.config_version_data.get(config_path): + if compatibility_check(): + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_version_data + + CachedData.config_version_data[config_path] = \ + _get_version_data(config_path) + + else: + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + CachedData.config_version_data[config_path] = \ + _get_wrapped_with_subprocess( + "config", "get_version", config_path=config_path + ) + + # check major version + if CachedData.config_version_data[config_path]["major"] != major: return False + + # check minor version + if minor and CachedData.config_version_data[config_path]["minor"] != minor: + return False + + # compatible return True @@ -257,18 +512,28 @@ def get_ocio_config_colorspaces(config_path): Returns: dict: colorspace and family in couple """ - if not compatibility_check(): - # python environment is not compatible with PyOpenColorIO - # needs to be run in subprocess - return get_colorspace_data_subprocess(config_path) + if not CachedData.ocio_config_colorspaces.get(config_path): + if not compatibility_check(): + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + CachedData.ocio_config_colorspaces[config_path] = \ + _get_wrapped_with_subprocess( + "config", "get_colorspace", in_path=config_path + ) + else: + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_colorspace_data - from openpype.scripts.ocio_wrapper import _get_colorspace_data + CachedData.ocio_config_colorspaces[config_path] = \ + _get_colorspace_data(config_path) - return _get_colorspace_data(config_path) + return CachedData.ocio_config_colorspaces[config_path] +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_colorspace_data_subprocess(config_path): - """Get colorspace data via subprocess + 
"""[Deprecated] Get colorspace data via subprocess Wrapper for Python 2 hosts. @@ -278,7 +543,9 @@ def get_colorspace_data_subprocess(config_path): Returns: dict: colorspace and family in couple """ - return get_data_subprocess(config_path, "get_colorspace") + return _get_wrapped_with_subprocess( + "config", "get_colorspace", in_path=config_path + ) def get_ocio_config_views(config_path): @@ -296,15 +563,20 @@ def get_ocio_config_views(config_path): if not compatibility_check(): # python environment is not compatible with PyOpenColorIO # needs to be run in subprocess - return get_views_data_subprocess(config_path) + return _get_wrapped_with_subprocess( + "config", "get_views", in_path=config_path + ) + # TODO: refactor this so it is not imported but part of this file from openpype.scripts.ocio_wrapper import _get_views_data return _get_views_data(config_path) +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_views_data_subprocess(config_path): - """Get viewers data via subprocess + """[Deprecated] Get viewers data via subprocess Wrapper for Python 2 hosts. @@ -314,7 +586,9 @@ def get_views_data_subprocess(config_path): Returns: dict: `display/viewer` and viewer data """ - return get_data_subprocess(config_path, "get_views") + return _get_wrapped_with_subprocess( + "config", "get_views", in_path=config_path + ) def get_imageio_config( @@ -322,7 +596,8 @@ def get_imageio_config( host_name, project_settings=None, anatomy_data=None, - anatomy=None + anatomy=None, + env=None ): """Returns config data from settings @@ -335,6 +610,7 @@ def get_imageio_config( project_settings (Optional[dict]): Project settings. anatomy_data (Optional[dict]): anatomy formatting data. anatomy (Optional[Anatomy]): Anatomy object. + env (Optional[dict]): Environment variables. Returns: dict: config path data or empty dict @@ -407,13 +683,13 @@ def get_imageio_config( if override_global_config: config_data = _get_config_data( - host_ocio_config["filepath"], formatting_data + host_ocio_config["filepath"], formatting_data, env ) else: # get config path from global config_global = imageio_global["ocio_config"] config_data = _get_config_data( - config_global["filepath"], formatting_data + config_global["filepath"], formatting_data, env ) if not config_data: @@ -425,7 +701,7 @@ def get_imageio_config( return config_data -def _get_config_data(path_list, anatomy_data): +def _get_config_data(path_list, anatomy_data, env=None): """Return first existing path in path list. If template is used in path inputs, @@ -435,14 +711,17 @@ def _get_config_data(path_list, anatomy_data): Args: path_list (list[str]): list of abs paths anatomy_data (dict): formatting data + env (Optional[dict]): Environment variables. 
Returns: dict: config data """ formatting_data = deepcopy(anatomy_data) + environment_vars = env or dict(**os.environ) + # format the path for potential env vars - formatting_data.update(dict(**os.environ)) + formatting_data.update(environment_vars) # first try host config paths for path_ in path_list: @@ -534,15 +813,15 @@ def get_remapped_colorspace_to_native( Union[str, None]: native colorspace name defined in remapping or None """ - CashedData.remapping.setdefault(host_name, {}) - if CashedData.remapping[host_name].get("to_native") is None: + CachedData.remapping.setdefault(host_name, {}) + if CachedData.remapping[host_name].get("to_native") is None: remapping_rules = imageio_host_settings["remapping"]["rules"] - CashedData.remapping[host_name]["to_native"] = { + CachedData.remapping[host_name]["to_native"] = { rule["ocio_name"]: rule["host_native_name"] for rule in remapping_rules } - return CashedData.remapping[host_name]["to_native"].get( + return CachedData.remapping[host_name]["to_native"].get( ocio_colorspace_name) @@ -560,15 +839,15 @@ def get_remapped_colorspace_from_native( Union[str, None]: Ocio colorspace name defined in remapping or None. """ - CashedData.remapping.setdefault(host_name, {}) - if CashedData.remapping[host_name].get("from_native") is None: + CachedData.remapping.setdefault(host_name, {}) + if CachedData.remapping[host_name].get("from_native") is None: remapping_rules = imageio_host_settings["remapping"]["rules"] - CashedData.remapping[host_name]["from_native"] = { + CachedData.remapping[host_name]["from_native"] = { rule["host_native_name"]: rule["ocio_name"] for rule in remapping_rules } - return CashedData.remapping[host_name]["from_native"].get( + return CachedData.remapping[host_name]["from_native"].get( host_native_colorspace_name) @@ -589,3 +868,173 @@ def _get_imageio_settings(project_settings, host_name): imageio_host = project_settings.get(host_name, {}).get("imageio", {}) return imageio_global, imageio_host + + +def get_colorspace_settings_from_publish_context(context_data): + """Returns solved settings for the host context. + + Args: + context_data (publish.Context.data): publishing context data + + Returns: + tuple | bool: config, file rules or None + """ + if "imageioSettings" in context_data and context_data["imageioSettings"]: + return context_data["imageioSettings"] + + project_name = context_data["projectName"] + host_name = context_data["hostName"] + anatomy_data = context_data["anatomyData"] + project_settings_ = context_data["project_settings"] + + config_data = get_imageio_config( + project_name, host_name, + project_settings=project_settings_, + anatomy_data=anatomy_data + ) + + # caching invalid state, so it's not recalculated all the time + file_rules = None + if config_data: + file_rules = get_imageio_file_rules( + project_name, host_name, + project_settings=project_settings_ + ) + + # caching settings for future instance processing + context_data["imageioSettings"] = (config_data, file_rules) + + return config_data, file_rules + + +def set_colorspace_data_to_representation( + representation, context_data, + colorspace=None, + log=None +): + """Sets colorspace data to representation. + + Args: + representation (dict): publishing representation + context_data (publish.Context.data): publishing context data + colorspace (str, optional): colorspace name. Defaults to None. + log (logging.Logger, optional): logger instance. Defaults to None. 
+ + Example: + ``` + { + # for other publish plugins and loaders + "colorspace": "linear", + "config": { + # for future references in case need + "path": "/abs/path/to/config.ocio", + # for other plugins within remote publish cases + "template": "{project[root]}/path/to/config.ocio" + } + } + ``` + + """ + log = log or Logger.get_logger(__name__) + + file_ext = representation["ext"] + + # check if `file_ext` in lower case is in CachedData.allowed_exts + if file_ext.lstrip(".").lower() not in CachedData.allowed_exts: + log.debug( + "Extension '{}' is not in allowed extensions.".format(file_ext) + ) + return + + # get colorspace settings + config_data, file_rules = get_colorspace_settings_from_publish_context( + context_data) + + # in case host color management is not enabled + if not config_data: + log.warning("Host's colorspace management is disabled.") + return + + log.debug("Config data is: `{}`".format(config_data)) + + project_name = context_data["projectName"] + host_name = context_data["hostName"] + project_settings = context_data["project_settings"] + + # get one filename + filename = representation["files"] + if isinstance(filename, list): + filename = filename[0] + + # get matching colorspace from rules + colorspace = colorspace or get_imageio_colorspace_from_filepath( + filename, host_name, project_name, + config_data=config_data, + file_rules=file_rules, + project_settings=project_settings + ) + + # infuse data to representation + if colorspace: + colorspace_data = { + "colorspace": colorspace, + "config": config_data + } + + # update data key + representation["colorspaceData"] = colorspace_data + + +def get_display_view_colorspace_name(config_path, display, view): + """Returns the colorspace attribute of the (display, view) pair. + + Args: + config_path (str): path string leading to config.ocio + display (str): display name e.g. "ACES" + view (str): view name e.g. "sRGB" + + Returns: + view color space name (str) e.g. "Output - sRGB" + """ + + if not compatibility_check(): + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + return get_display_view_colorspace_subprocess(config_path, + display, view) + + from openpype.scripts.ocio_wrapper import _get_display_view_colorspace_name # noqa + + return _get_display_view_colorspace_name(config_path, display, view) + + +def get_display_view_colorspace_subprocess(config_path, display, view): + """Returns the colorspace attribute of the (display, view) pair + via subprocess. + + Args: + config_path (str): path string leading to config.ocio + display (str): display name e.g. "ACES" + view (str): view name e.g. "sRGB" + + Returns: + view color space name (str) e.g. 
"Output - sRGB" + """ + + with _make_temp_json_file() as tmp_json_path: + # Prepare subprocess arguments + args = [ + "run", get_ocio_config_script_path(), + "config", "get_display_view_colorspace_name", + "--in_path", config_path, + "--out_path", tmp_json_path, + "--display", display, + "--view", view + ] + log.debug("Executing: {}".format(" ".join(args))) + + run_openpype_process(*args, logger=log) + + # return default view colorspace name + with open(tmp_json_path, "r") as f: + return json.load(f) diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index 97a5c1ba69..13630ae7ca 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -21,10 +21,14 @@ from openpype.client import ( from openpype.lib.events import emit_event from openpype.modules import load_modules, ModulesManager from openpype.settings import get_project_settings +from openpype.tests.lib import is_in_tests from .publish.lib import filter_pyblish_plugins from .anatomy import Anatomy -from .template_data import get_template_data_with_names +from .template_data import ( + get_template_data_with_names, + get_template_data +) from .workfile import ( get_workfile_template_key, get_custom_workfile_template_by_string_context, @@ -35,7 +39,7 @@ from . import ( register_inventory_action_path, register_creator_plugin_path, deregister_loader_plugin_path, - deregister_inventory_action_path, + deregister_inventory_action_path ) @@ -142,6 +146,10 @@ def install_host(host): else: pyblish.api.register_target("local") + if is_in_tests(): + print("Registering pyblish target: automated") + pyblish.api.register_target("automated") + project_name = os.environ.get("AVALON_PROJECT") host_name = os.environ.get("AVALON_APP") @@ -320,7 +328,7 @@ def get_current_host_name(): """Current host name. Function is based on currently registered host integration or environment - variant 'AVALON_APP'. + variable 'AVALON_APP'. Returns: Union[str, None]: Name of host integration in current process or None. @@ -333,6 +341,26 @@ def get_current_host_name(): def get_global_context(): + """Global context defined in environment variables. + + Values here may not reflect current context of host integration. The + function can be used on startup before a host is registered. + + Use 'get_current_context' to make sure you'll get current host integration + context info. + + Example: + { + "project_name": "Commercial", + "asset_name": "Bunny", + "task_name": "Animation", + } + + Returns: + dict[str, Union[str, None]]: Context defined with environment + variables. + """ + return { "project_name": os.environ.get("AVALON_PROJECT"), "asset_name": os.environ.get("AVALON_ASSET"), @@ -633,3 +661,70 @@ def get_process_id(): if _process_id is None: _process_id = str(uuid.uuid4()) return _process_id + + +def get_current_context_template_data(): + """Template data for template fill from current context + + Returns: + Dict[str, Any] of the following tokens and their values + Supported Tokens: + - Regular Tokens + - app + - user + - asset + - parent + - hierarchy + - folder[name] + - root[work, ...] 
+ - studio[code, name] + - project[code, name] + - task[type, name, short] + + - Context Specific Tokens + - assetData[frameStart] + - assetData[frameEnd] + - assetData[handleStart] + - assetData[handleEnd] + - assetData[frameStartHandle] + - assetData[frameEndHandle] + - assetData[resolutionHeight] + - assetData[resolutionWidth] + + """ + + # pre-prepare get_template_data args + current_context = get_current_context() + project_name = current_context["project_name"] + asset_name = current_context["asset_name"] + anatomy = Anatomy(project_name) + + # prepare get_template_data args + project_doc = get_project(project_name) + asset_doc = get_asset_by_name(project_name, asset_name) + task_name = current_context["task_name"] + host_name = get_current_host_name() + + # get regular template data + template_data = get_template_data( + project_doc, asset_doc, task_name, host_name + ) + + template_data["root"] = anatomy.roots + + # get context specific vars + asset_data = asset_doc["data"].copy() + + # compute `frameStartHandle` and `frameEndHandle` + if "frameStart" in asset_data and "handleStart" in asset_data: + asset_data["frameStartHandle"] = \ + asset_data["frameStart"] - asset_data["handleStart"] + + if "frameEnd" in asset_data and "handleEnd" in asset_data: + asset_data["frameEndHandle"] = \ + asset_data["frameEnd"] + asset_data["handleEnd"] + + # add assetData + template_data["assetData"] = asset_data + + return template_data diff --git a/openpype/pipeline/create/__init__.py b/openpype/pipeline/create/__init__.py index 5eee18df0f..94d575a776 100644 --- a/openpype/pipeline/create/__init__.py +++ b/openpype/pipeline/create/__init__.py @@ -2,6 +2,7 @@ from .constants import ( SUBSET_NAME_ALLOWED_SYMBOLS, DEFAULT_SUBSET_TEMPLATE, PRE_CREATE_THUMBNAIL_KEY, + DEFAULT_VARIANT_VALUE, ) from .utils import ( @@ -50,6 +51,7 @@ __all__ = ( "SUBSET_NAME_ALLOWED_SYMBOLS", "DEFAULT_SUBSET_TEMPLATE", "PRE_CREATE_THUMBNAIL_KEY", + "DEFAULT_VARIANT_VALUE", "get_last_versions_for_instances", "get_next_versions_for_instances", @@ -74,6 +76,8 @@ __all__ = ( "register_creator_plugin_path", "deregister_creator_plugin_path", + "cache_and_get_instances", + "CreatedInstance", "CreateContext", diff --git a/openpype/pipeline/create/constants.py b/openpype/pipeline/create/constants.py index 375cfc4a12..7d1d0154e9 100644 --- a/openpype/pipeline/create/constants.py +++ b/openpype/pipeline/create/constants.py @@ -1,10 +1,12 @@ SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_." DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}" PRE_CREATE_THUMBNAIL_KEY = "thumbnail_source" +DEFAULT_VARIANT_VALUE = "Main" __all__ = ( "SUBSET_NAME_ALLOWED_SYMBOLS", "DEFAULT_SUBSET_TEMPLATE", "PRE_CREATE_THUMBNAIL_KEY", + "DEFAULT_VARIANT_VALUE", ) diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 98fcee5fe5..f9e3f86652 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -1165,8 +1165,8 @@ class CreatedInstance: Args: instance_data (Dict[str, Any]): Data in a structure ready for 'CreatedInstance' object. - creator (Creator): Creator plugin which is creating the instance - of for which the instance belong. + creator (BaseCreator): Creator plugin which is creating the + instance of for which the instance belong. 
""" instance_data = copy.deepcopy(instance_data) @@ -1774,7 +1774,7 @@ class CreateContext: self.creator_discover_result = report for creator_class in report.plugins: if inspect.isabstract(creator_class): - self.log.info( + self.log.debug( "Skipping abstract Creator {}".format(str(creator_class)) ) continue @@ -1804,6 +1804,7 @@ class CreateContext: self, self.headless ) + if not creator.enabled: disabled_creators[creator_identifier] = creator continue @@ -1979,7 +1980,11 @@ class CreateContext: if pre_create_data is None: pre_create_data = {} - precreate_attr_defs = creator.get_pre_create_attr_defs() or [] + precreate_attr_defs = [] + # Hidden creators do not have or need the pre-create attributes. + if isinstance(creator, Creator): + precreate_attr_defs = creator.get_pre_create_attr_defs() + # Create default values of precreate data _pre_create_data = get_default_values(precreate_attr_defs) # Update passed precreate data to default values @@ -2121,7 +2126,7 @@ class CreateContext: def reset_instances(self): """Reload instances""" - self._instances_by_id = {} + self._instances_by_id = collections.OrderedDict() # Collect instances error_message = "Collection of instances for creator {} failed. {}" diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py index 947a90ef08..6aa08cae70 100644 --- a/openpype/pipeline/create/creator_plugins.py +++ b/openpype/pipeline/create/creator_plugins.py @@ -1,17 +1,12 @@ -import os import copy import collections -from abc import ( - ABCMeta, - abstractmethod, - abstractproperty -) +from abc import ABCMeta, abstractmethod import six from openpype.settings import get_system_settings, get_project_settings -from openpype.lib import Logger +from openpype.lib import Logger, is_func_signature_supported from openpype.pipeline.plugin_discover import ( discover, register_plugin, @@ -20,6 +15,7 @@ from openpype.pipeline.plugin_discover import ( deregister_plugin_path ) +from .constants import DEFAULT_VARIANT_VALUE from .subset_name import get_subset_name from .utils import get_next_versions_for_instances from .legacy_create import LegacyCreator @@ -84,7 +80,8 @@ class SubsetConvertorPlugin(object): def host(self): return self._create_context.host - @abstractproperty + @property + @abstractmethod def identifier(self): """Converted identifier. @@ -161,7 +158,6 @@ class BaseCreator: Args: project_settings (Dict[str, Any]): Project settings. - system_settings (Dict[str, Any]): System settings. create_context (CreateContext): Context which initialized creator. headless (bool): Running in headless mode. """ @@ -208,10 +204,41 @@ class BaseCreator: # - we may use UI inside processing this attribute should be checked self.headless = headless - self.apply_settings(project_settings, system_settings) + expect_system_settings = False + if is_func_signature_supported( + self.apply_settings, project_settings + ): + self.apply_settings(project_settings) + else: + expect_system_settings = True + # Backwards compatibility for system settings + self.apply_settings(project_settings, system_settings) - def apply_settings(self, project_settings, system_settings): - """Method called on initialization of plugin to apply settings.""" + init_use_base = any( + self.__class__.__init__ is cls.__init__ + for cls in { + BaseCreator, + Creator, + HiddenCreator, + AutoCreator, + } + ) + if not init_use_base or expect_system_settings: + self.log.warning(( + "WARNING: Source - Create plugin {}." 
+ " System settings argument will not be passed to" + " '__init__' and 'apply_settings' methods in future versions" + " of OpenPype. Planned version to drop the support" + " is 3.16.6 or 3.17.0. Please contact Ynput core team if you" + " need to keep system settings." + ).format(self.__class__.__name__)) + + def apply_settings(self, project_settings): + """Method called on initialization of plugin to apply settings. + + Args: + project_settings (dict[str, Any]): Project settings. + """ pass @@ -224,7 +251,8 @@ class BaseCreator: return self.family - @abstractproperty + @property + @abstractmethod def family(self): """Family that plugin represents.""" @@ -517,7 +545,7 @@ class Creator(BaseCreator): default_variants = [] # Default variant used in 'get_default_variant' - default_variant = None + _default_variant = None # Short description of family # - may not be used if `get_description` is overriden @@ -543,6 +571,21 @@ class Creator(BaseCreator): # - similar to instance attribute definitions pre_create_attr_defs = [] + def __init__(self, *args, **kwargs): + cls = self.__class__ + + # Fix backwards compatibility for plugins which override + # 'default_variant' attribute directly + if not isinstance(cls.default_variant, property): + # Move value from 'default_variant' to '_default_variant' + self._default_variant = self.default_variant + # Create property 'default_variant' on the class + cls.default_variant = property( + cls._get_default_variant_wrap, + cls._set_default_variant_wrap + ) + super(Creator, self).__init__(*args, **kwargs) + @property def show_order(self): """Order in which is creator shown in UI. @@ -595,10 +638,10 @@ class Creator(BaseCreator): def get_default_variants(self): """Default variant values for UI tooltips. - Replacement of `defatults` attribute. Using method gives ability to - have some "logic" other than attribute values. + Replacement of `default_variants` attribute. Using method gives + ability to have some "logic" other than attribute values. - By default returns `default_variants` value. + By default, returns `default_variants` value. Returns: List[str]: Whisper variants for user input. @@ -606,17 +649,63 @@ class Creator(BaseCreator): return copy.deepcopy(self.default_variants) - def get_default_variant(self): + def get_default_variant(self, only_explicit=False): """Default variant value that will be used to prefill variant input. This is for user input and value may not be content of result from `get_default_variants`. - Can return `None`. In that case first element from - `get_default_variants` should be used. + Note: + This method does not allow to have empty string as + default variant. + + Args: + only_explicit (Optional[bool]): If True, only explicit default + variant from '_default_variant' will be returned. + + Returns: + str: Variant value. """ - return self.default_variant + if only_explicit or self._default_variant: + return self._default_variant + + for variant in self.get_default_variants(): + return variant + return DEFAULT_VARIANT_VALUE + + def _get_default_variant_wrap(self): + """Default variant value that will be used to prefill variant input. + + Wrapper for 'get_default_variant'. + + Notes: + This method is wrapper for 'get_default_variant' + for 'default_variant' property, so creator can override + the method. + + Returns: + str: Variant value. + """ + + return self.get_default_variant() + + def _set_default_variant_wrap(self, variant): + """Set default variant value. 
+ + This method is needed for automated settings overrides which are + changing attributes based on keys in settings. + + Args: + variant (str): New default variant value. + """ + + self._default_variant = variant + + default_variant = property( + _get_default_variant_wrap, + _set_default_variant_wrap + ) def get_pre_create_attr_defs(self): """Plugin attribute definitions needed for creation. @@ -660,12 +749,12 @@ def discover_convertor_plugins(*args, **kwargs): def discover_legacy_creator_plugins(): - from openpype.lib import Logger + from openpype.pipeline import get_current_project_name log = Logger.get_logger("CreatorDiscover") plugins = discover(LegacyCreator) - project_name = os.environ.get("AVALON_PROJECT") + project_name = get_current_project_name() system_settings = get_system_settings() project_settings = get_project_settings(project_name) for plugin in plugins: diff --git a/openpype/pipeline/create/subset_name.py b/openpype/pipeline/create/subset_name.py index 3f0692b46a..00025b19b8 100644 --- a/openpype/pipeline/create/subset_name.py +++ b/openpype/pipeline/create/subset_name.py @@ -14,6 +14,13 @@ class TaskNotSetError(KeyError): super(TaskNotSetError, self).__init__(msg) +class TemplateFillError(Exception): + def __init__(self, msg=None): + if not msg: + msg = "Creator's subset name template is missing key value." + super(TemplateFillError, self).__init__(msg) + + def get_subset_name_template( project_name, family, @@ -112,6 +119,10 @@ def get_subset_name( for project. Settings are queried if not passed. family_filter (Optional[str]): Use different family for subset template filtering. Value of 'family' is used when not passed. + + Raises: + TemplateFillError: If filled template contains placeholder key which is not + collected. """ if not family: @@ -154,4 +165,10 @@ def get_subset_name( for key, value in dynamic_data.items(): fill_pairs[key] = value - return template.format(**prepare_template_data(fill_pairs)) + try: + return template.format(**prepare_template_data(fill_pairs)) + except KeyError as exp: + raise TemplateFillError( + "Value for {} key is missing in template '{}'." + " Available values are {}".format(str(exp), template, fill_pairs) + ) diff --git a/openpype/pipeline/delivery.py b/openpype/pipeline/delivery.py index 500f54040a..bbd01f7a4e 100644 --- a/openpype/pipeline/delivery.py +++ b/openpype/pipeline/delivery.py @@ -157,6 +157,8 @@ def deliver_single_file( delivery_path = delivery_path.replace("..", ".") # Make sure path is valid for all platforms delivery_path = os.path.normpath(delivery_path.replace("\\", "/")) + # Remove newlines from the end of the string to avoid OSError during copy + delivery_path = delivery_path.rstrip() delivery_folder = os.path.dirname(delivery_path) if not os.path.exists(delivery_folder): @@ -176,7 +178,9 @@ def deliver_sequence( anatomy_data, format_dict, report_items, - log + log, + has_renumbered_frame=False, + new_frame_start=0 ): """ For Pype2(mainly - works in 3 too) where representation might not contain files. 
@@ -292,17 +296,30 @@ def deliver_sequence(
     src_head = src_collection.head
     src_tail = src_collection.tail
     uploaded = 0
+    first_frame = min(src_collection.indexes)
     for index in src_collection.indexes:
         src_padding = src_collection.format("{padding}") % index
         src_file_name = "{}{}{}".format(src_head, src_padding, src_tail)
         src = os.path.normpath(
             os.path.join(dir_path, src_file_name)
         )
-
-        dst_padding = dst_collection.format("{padding}") % index
+        dst_index = index
+        if has_renumbered_frame:
+            # Calculate offset between the new first frame and the
+            # original first frame ('0' means no renumbering)
+            offset = new_frame_start - first_frame
+            # Shift the source index by the offset
+            dst_index = index + offset
+            if dst_index < 0:
+                msg = "Renumber frame has a smaller number than original frame"  # noqa
+                report_items[msg].append(src_file_name)
+                log.warning("{} <{}>".format(msg, context))
+                return report_items, 0
+        dst_padding = dst_collection.format("{padding}") % dst_index
         dst = "{}{}{}".format(dst_head, dst_padding, dst_tail)
         log.debug("Copying single: {} -> {}".format(src, dst))
         _copy_file(src, dst)
+        uploaded += 1
 
     return report_items, uploaded
diff --git a/openpype/pipeline/farm/pyblish_functions.py b/openpype/pipeline/farm/pyblish_functions.py
new file mode 100644
index 0000000000..7ef3439dbd
--- /dev/null
+++ b/openpype/pipeline/farm/pyblish_functions.py
@@ -0,0 +1,890 @@
+import copy
+import attr
+import pyblish.api
+import os
+import clique
+from copy import deepcopy
+import re
+import warnings
+
+from openpype.pipeline import (
+    get_current_project_name,
+    get_representation_path,
+    Anatomy,
+)
+from openpype.client import (
+    get_last_version_by_subset_name,
+    get_representations
+)
+from openpype.lib import Logger
+from openpype.pipeline.publish import KnownPublishError
+from openpype.pipeline.farm.patterning import match_aov_pattern
+
+
+@attr.s
+class TimeData(object):
+    """Structure used to handle time related data."""
+    start = attr.ib(type=int)
+    end = attr.ib(type=int)
+    fps = attr.ib()
+    step = attr.ib(default=1, type=int)
+    handle_start = attr.ib(default=0, type=int)
+    handle_end = attr.ib(default=0, type=int)
+
+
+def remap_source(path, anatomy):
+    """Try to remap path to rootless path.
+
+    Args:
+        path (str): Path to be remapped to rootless.
+        anatomy (Anatomy): Anatomy object to handle remapping
+            itself.
+
+    Returns:
+        str: Remapped path.
+
+    Raises:
+        ValueError: If the root cannot be found.
+
+    """
+    success, rootless_path = (
+        anatomy.find_root_template_from_path(path)
+    )
+    if success:
+        source = rootless_path
+    else:
+        raise ValueError(
+            "Root from template path cannot be found: {}".format(path))
+    return source
+
+
+def extend_frames(asset, subset, start, end):
+    """Get latest version of asset and update frame range.
+
+    Based on minimum and maximum values.
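+
+    If a previous version of the subset exists, the returned range is
+    the union of the requested range and the published one.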
+ + Arguments: + asset (str): asset name + subset (str): subset name + start (int): start frame + end (int): end frame + + Returns: + (int, int): update frame start/end + + """ + # Frame comparison + prev_start = None + prev_end = None + + project_name = get_current_project_name() + version = get_last_version_by_subset_name( + project_name, + subset, + asset_name=asset + ) + + # Set prev start / end frames for comparison + if not prev_start and not prev_end: + prev_start = version["data"]["frameStart"] + prev_end = version["data"]["frameEnd"] + + updated_start = min(start, prev_start) + updated_end = max(end, prev_end) + + return updated_start, updated_end + + +def get_time_data_from_instance_or_context(instance): + """Get time data from instance (or context). + + If time data is not found on instance, data from context will be used. + + Args: + instance (pyblish.api.Instance): Source instance. + + Returns: + TimeData: dataclass holding time information. + + """ + context = instance.context + return TimeData( + start=instance.data.get("frameStart", context.data.get("frameStart")), + end=instance.data.get("frameEnd", context.data.get("frameEnd")), + fps=instance.data.get("fps", context.data.get("fps")), + step=instance.data.get("byFrameStep", instance.data.get("step", 1)), + handle_start=instance.data.get( + "handleStart", context.data.get("handleStart") + ), + handle_end=instance.data.get( + "handleEnd", context.data.get("handleEnd") + ) + ) + + +def get_transferable_representations(instance): + """Transfer representations from original instance. + + This will get all representations on the original instance that + are flagged with `publish_on_farm` and return them to be included + on skeleton instance if needed. + + Args: + instance (pyblish.api.Instance): Original instance to be processed. + + Return: + list of dicts: List of transferable representations. + + """ + anatomy = instance.context.data["anatomy"] # type: Anatomy + to_transfer = [] + + for representation in instance.data.get("representations", []): + if "publish_on_farm" not in representation.get("tags", []): + continue + + trans_rep = representation.copy() + + staging_dir = trans_rep.get("stagingDir") + + if staging_dir: + try: + trans_rep["stagingDir"] = remap_source(staging_dir, anatomy) + except ValueError: + log = Logger.get_logger("farm_publishing") + log.warning( + ("Could not find root path for remapping \"{}\". " + "This may cause issues on farm.").format(staging_dir)) + + to_transfer.append(trans_rep) + return to_transfer + + +def create_skeleton_instance( + instance, families_transfer=None, instance_transfer=None): + # type: (pyblish.api.Instance, list, dict) -> dict + """Create skeleton instance from original instance data. + + This will create dictionary containing skeleton + - common - data used for publishing rendered instances. + This skeleton instance is then extended with additional data + and serialized to be processed by farm job. + + Args: + instance (pyblish.api.Instance): Original instance to + be used as a source of data. + families_transfer (list): List of family names to transfer + from the original instance to the skeleton. + instance_transfer (dict): Dict with keys as families and + values as a list of property names to transfer to the + new skeleton. + + Returns: + dict: Dictionary with skeleton instance data. 
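+
+    Example:
+        >>> # Hypothetical usage - 'instance' is a pyblish.api.Instance
+        >>> # and the transfer keys are illustrative only
+        >>> skeleton = create_skeleton_instance(
+        ...     instance,
+        ...     families_transfer=["slate"],
+        ...     instance_transfer={"review": ["lutPath"]})
+        >>> skeleton["family"]
+        'render'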
+
+    """
+    # list of family names to transfer to new family if present
+
+    context = instance.context
+    data = instance.data.copy()
+    anatomy = instance.context.data["anatomy"]  # type: Anatomy
+
+    # get time related data from instance (or context)
+    time_data = get_time_data_from_instance_or_context(instance)
+
+    if data.get("extendFrames", False):
+        time_data.start, time_data.end = extend_frames(
+            data["asset"],
+            data["subset"],
+            time_data.start,
+            time_data.end,
+        )
+
+    source = data.get("source") or context.data.get("currentFile")
+    success, rootless_path = (
+        anatomy.find_root_template_from_path(source)
+    )
+    if success:
+        source = rootless_path
+    else:
+        # `rootless_path` is not set to `source` if none of roots match
+        log = Logger.get_logger("farm_publishing")
+        log.warning(("Could not find root path for remapping \"{}\". "
+                     "This may cause issues.").format(source))
+
+    family = ("render"
+              if "prerender.farm" not in instance.data["families"]
+              else "prerender")
+    families = [family]
+
+    # pass review to families if marked as review
+    if data.get("review"):
+        families.append("review")
+
+    instance_skeleton_data = {
+        "family": family,
+        "subset": data["subset"],
+        "families": families,
+        "asset": data["asset"],
+        "frameStart": time_data.start,
+        "frameEnd": time_data.end,
+        "handleStart": time_data.handle_start,
+        "handleEnd": time_data.handle_end,
+        "frameStartHandle": time_data.start - time_data.handle_start,
+        "frameEndHandle": time_data.end + time_data.handle_end,
+        "comment": data.get("comment"),
+        "fps": time_data.fps,
+        "source": source,
+        "extendFrames": data.get("extendFrames"),
+        "overrideExistingFrame": data.get("overrideExistingFrame"),
+        "pixelAspect": data.get("pixelAspect", 1),
+        "resolutionWidth": data.get("resolutionWidth", 1920),
+        "resolutionHeight": data.get("resolutionHeight", 1080),
+        "multipartExr": data.get("multipartExr", False),
+        "jobBatchName": data.get("jobBatchName", ""),
+        "useSequenceForReview": data.get("useSequenceForReview", True),
+        # map inputVersions `ObjectId` -> `str` so json supports it
+        "inputVersions": list(map(str, data.get("inputVersions", []))),
+        "colorspace": data.get("colorspace")
+    }
+
+    # skip locking version if we are creating v01
+    instance_version = data.get("version")  # take this if exists
+    if instance_version != 1:
+        instance_skeleton_data["version"] = instance_version
+
+    # transfer specific families from original instance to new render
+    # - guard against the 'None' default of 'families_transfer'
+    for item in families_transfer or []:
+        if item in instance.data.get("families", []):
+            instance_skeleton_data["families"] += [item]
+
+    # transfer specific properties from original instance based on
+    # mapping dictionary `instance_transfer`
+    for key, values in (instance_transfer or {}).items():
+        if key in instance.data.get("families", []):
+            for v in values:
+                instance_skeleton_data[v] = instance.data.get(v)
+
+    representations = get_transferable_representations(instance)
+    instance_skeleton_data["representations"] = representations
+
+    persistent = instance.data.get("stagingDir_persistent") is True
+    instance_skeleton_data["stagingDir_persistent"] = persistent
+
+    return instance_skeleton_data
+
+
+def _add_review_families(families):
+    """Adds review flag to families.
+
+    Handles situation when new instances are created which should have review
+    in families. In that case they should have 'ftrack' too.
+
+    TODO: This is ugly and needs to be refactored. Ftrack family should be
+        added in different way (based on if the module is enabled?)
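+
+    Example:
+        >>> # Assuming 'FTRACK_SERVER' is set in the environment
+        >>> _add_review_families(["render"])
+        ['render', 'ftrack', 'review']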
+ + """ + # if we have one representation with preview tag + # flag whole instance for review and for ftrack + if "ftrack" not in families and os.environ.get("FTRACK_SERVER"): + families.append("ftrack") + if "review" not in families: + families.append("review") + return families + + +def prepare_representations(skeleton_data, exp_files, anatomy, aov_filter, + skip_integration_repre_list, + do_not_add_review, + context, + color_managed_plugin): + """Create representations for file sequences. + + This will return representations of expected files if they are not + in hierarchy of aovs. There should be only one sequence of files for + most cases, but if not - we create representation from each of them. + + Arguments: + skeleton_data (dict): instance data for which we are + setting representations + exp_files (list): list of expected files + anatomy (Anatomy): + aov_filter (dict): add review for specific aov names + skip_integration_repre_list (list): exclude specific extensions, + do_not_add_review (bool): explicitly skip review + color_managed_plugin (publish.ColormanagedPyblishPluginMixin) + Returns: + list of representations + + """ + representations = [] + host_name = os.environ.get("AVALON_APP", "") + collections, remainders = clique.assemble(exp_files) + + log = Logger.get_logger("farm_publishing") + + # create representation for every collected sequence + for collection in collections: + ext = collection.tail.lstrip(".") + preview = False + # TODO 'useSequenceForReview' is temporary solution which does + # not work for 100% of cases. We must be able to tell what + # expected files contains more explicitly and from what + # should be review made. + # - "review" tag is never added when is set to 'False' + if skeleton_data["useSequenceForReview"]: + # toggle preview on if multipart is on + if skeleton_data.get("multipartExr", False): + log.debug( + "Adding preview tag because its multipartExr" + ) + preview = True + else: + render_file_name = list(collection)[0] + # if filtered aov name is found in filename, toggle it for + # preview video rendering + preview = match_aov_pattern( + host_name, aov_filter, render_file_name + ) + + staging = os.path.dirname(list(collection)[0]) + success, rootless_staging_dir = ( + anatomy.find_root_template_from_path(staging) + ) + if success: + staging = rootless_staging_dir + else: + log.warning(( + "Could not find root path for remapping \"{}\"." + " This may cause issues on farm." 
+            ).format(staging))
+
+        frame_start = int(skeleton_data.get("frameStartHandle"))
+        if skeleton_data.get("slate"):
+            frame_start -= 1
+
+        # explicitly disable review by user
+        preview = preview and not do_not_add_review
+        rep = {
+            "name": ext,
+            "ext": ext,
+            "files": [os.path.basename(f) for f in list(collection)],
+            "frameStart": frame_start,
+            "frameEnd": int(skeleton_data.get("frameEndHandle")),
+            # If expectedFile are absolute, we need only filenames
+            "stagingDir": staging,
+            "fps": skeleton_data.get("fps"),
+            "tags": ["review"] if preview else [],
+        }
+
+        # poor man's exclusion
+        if ext in skip_integration_repre_list:
+            rep["tags"].append("delete")
+
+        if skeleton_data.get("multipartExr", False):
+            rep["tags"].append("multipartExr")
+
+        # support conversion from tiled to scanline
+        if skeleton_data.get("convertToScanline"):
+            log.info("Adding scanline conversion.")
+            rep["tags"].append("toScanline")
+
+        representations.append(rep)
+
+        if preview:
+            skeleton_data["families"] = _add_review_families(
+                skeleton_data["families"])
+
+    # add remainders as representations
+    for remainder in remainders:
+        ext = remainder.split(".")[-1]
+
+        staging = os.path.dirname(remainder)
+        success, rootless_staging_dir = (
+            anatomy.find_root_template_from_path(staging)
+        )
+        if success:
+            staging = rootless_staging_dir
+        else:
+            log.warning((
+                "Could not find root path for remapping \"{}\"."
+                " This may cause issues on farm."
+            ).format(staging))
+
+        rep = {
+            "name": ext,
+            "ext": ext,
+            "files": os.path.basename(remainder),
+            "stagingDir": staging,
+        }
+
+        preview = match_aov_pattern(
+            host_name, aov_filter, remainder
+        )
+        preview = preview and not do_not_add_review
+        if preview:
+            rep.update({
+                "fps": skeleton_data.get("fps"),
+                "tags": ["review"]
+            })
+            skeleton_data["families"] = \
+                _add_review_families(skeleton_data["families"])
+
+        already_there = False
+        for repre in skeleton_data.get("representations", []):
+            # might be added explicitly before by publish_on_farm
+            already_there = repre.get("files") == rep["files"]
+            if already_there:
+                log.debug("repre {} already_there".format(repre))
+                break
+
+        if not already_there:
+            representations.append(rep)
+
+    for rep in representations:
+        # inject colorspace data
+        color_managed_plugin.set_representation_colorspace(
+            rep, context,
+            colorspace=skeleton_data["colorspace"]
+        )
+
+    return representations
+
+
+def create_instances_for_aov(instance, skeleton, aov_filter,
+                             skip_integration_repre_list,
+                             do_not_add_review):
+    """Create instances from AOVs.
+
+    This will create new pyblish.api.Instances by going over expected
+    files defined on original instance.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance.
+        skeleton (dict): Skeleton instance data.
+        aov_filter (dict): AOV patterns to mark for review.
+        skip_integration_repre_list (list): Extensions to skip during
+            integration.
+        do_not_add_review (bool): Explicitly disable review.
+
+    Returns:
+        list of pyblish.api.Instance: Instances created from
+            expected files.
+
+    """
+    # we cannot attach AOVs to other subsets as we consider every
+    # AOV subset of its own.
+
+    log = Logger.get_logger("farm_publishing")
+    additional_color_data = {
+        "renderProducts": instance.data["renderProducts"],
+        "colorspaceConfig": instance.data["colorspaceConfig"],
+        "display": instance.data["colorspaceDisplay"],
+        "view": instance.data["colorspaceView"]
+    }
+
+    # Get templated path from absolute config path.
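+    # The config path is absolute for the submitting machine; remapping
+    # it to a rootless template keeps it resolvable on farm workers,
+    # with the absolute path kept as a fallback when no root matches.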
+    anatomy = instance.context.data["anatomy"]
+    colorspace_template = instance.data["colorspaceConfig"]
+    try:
+        additional_color_data["colorspaceTemplate"] = remap_source(
+            colorspace_template, anatomy)
+    except ValueError as e:
+        log.warning(e)
+        additional_color_data["colorspaceTemplate"] = colorspace_template
+
+    # if there are subsets to attach to and more than one AOV,
+    # we cannot proceed.
+    if (
+        len(instance.data.get("attachTo", [])) > 0
+        and len(instance.data.get("expectedFiles")[0].keys()) != 1
+    ):
+        raise KnownPublishError(
+            "attaching multiple AOVs or renderable cameras to "
+            "subset is not supported yet.")
+
+    # create instances for every AOV we found in expected files.
+    # NOTE: this is done for every AOV and every render camera (if
+    #       there are multiple renderable cameras in scene)
+    return _create_instances_for_aov(
+        instance,
+        skeleton,
+        aov_filter,
+        additional_color_data,
+        skip_integration_repre_list,
+        do_not_add_review
+    )
+
+
+def _create_instances_for_aov(instance, skeleton, aov_filter, additional_data,
+                              skip_integration_repre_list, do_not_add_review):
+    """Create instance for each AOV found.
+
+    This will create new instance for every AOV it can detect in expected
+    files list.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance.
+        skeleton (dict): Skeleton data for instance (those needed later
+            by collector).
+        additional_data (dict): Additional data for colorspace handling.
+        skip_integration_repre_list (list): list of extensions that shouldn't
+            be published
+        do_not_add_review (bool): explicitly disable review
+
+
+    Returns:
+        list of instances
+
+    Raises:
+        ValueError: When expected files contain unrelated file
+            collections.
+
+    """
+    # TODO: this needs to be taking the task from context or instance
+    task = os.environ["AVALON_TASK"]
+
+    anatomy = instance.context.data["anatomy"]
+    subset = skeleton["subset"]
+    cameras = instance.data.get("cameras", [])
+    exp_files = instance.data["expectedFiles"]
+    log = Logger.get_logger("farm_publishing")
+
+    instances = []
+    # go through AOVs in expected files
+    for aov, files in exp_files[0].items():
+        cols, rem = clique.assemble(files)
+        # we shouldn't have any remainders. And if we do, it should
+        # be just one item for single frame renders.
+        if not cols and rem:
+            if len(rem) != 1:
+                raise ValueError("Found multiple non related files "
+                                 "to render, don't know what to do "
+                                 "with them.")
+            col = rem[0]
+            ext = os.path.splitext(col)[1].lstrip(".")
+        else:
+            # but we really expect only one collection.
+            # Nothing else makes sense.
+ if len(cols) != 1: + raise ValueError("Only one image sequence type is expected.") # noqa: E501 + ext = cols[0].tail.lstrip(".") + col = list(cols[0]) + + # create subset name `familyTaskSubset_AOV` + # TODO refactor/remove me + family = skeleton["family"] + if not subset.startswith(family): + group_name = '{}{}{}{}{}'.format( + family, + task[0].upper(), task[1:], + subset[0].upper(), subset[1:]) + else: + group_name = subset + + # if there are multiple cameras, we need to add camera name + if isinstance(col, (list, tuple)): + cam = [c for c in cameras if c in col[0]] + else: + # in case of single frame + cam = [c for c in cameras if c in col] + if cam: + if aov: + subset_name = '{}_{}_{}'.format(group_name, cam, aov) + else: + subset_name = '{}_{}'.format(group_name, cam) + else: + if aov: + subset_name = '{}_{}'.format(group_name, aov) + else: + subset_name = '{}'.format(group_name) + + if isinstance(col, (list, tuple)): + staging = os.path.dirname(col[0]) + else: + staging = os.path.dirname(col) + + try: + staging = remap_source(staging, anatomy) + except ValueError as e: + log.warning(e) + + log.info("Creating data for: {}".format(subset_name)) + + app = os.environ.get("AVALON_APP", "") + + if isinstance(col, list): + render_file_name = os.path.basename(col[0]) + else: + render_file_name = os.path.basename(col) + aov_patterns = aov_filter + + preview = match_aov_pattern(app, aov_patterns, render_file_name) + # toggle preview on if multipart is on + if instance.data.get("multipartExr"): + log.debug("Adding preview tag because its multipartExr") + preview = True + + new_instance = deepcopy(skeleton) + new_instance["subset"] = subset_name + new_instance["subsetGroup"] = group_name + + # explicitly disable review by user + preview = preview and not do_not_add_review + if preview: + new_instance["review"] = True + + # create representation + if isinstance(col, (list, tuple)): + files = [os.path.basename(f) for f in col] + else: + files = os.path.basename(col) + + # Copy render product "colorspace" data to representation. 
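+        # Match the AOV name against the render layer products to find
+        # the colorspace the renderer wrote the files in.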
+        colorspace = ""
+        products = additional_data["renderProducts"].layer_data.products
+        for product in products:
+            if product.productName == aov:
+                colorspace = product.colorspace
+                break
+
+        rep = {
+            "name": ext,
+            "ext": ext,
+            "files": files,
+            "frameStart": int(skeleton["frameStartHandle"]),
+            "frameEnd": int(skeleton["frameEndHandle"]),
+            # If expectedFile are absolute, we need only filenames
+            "stagingDir": staging,
+            "fps": new_instance.get("fps"),
+            "tags": ["review"] if preview else [],
+            "colorspaceData": {
+                "colorspace": colorspace,
+                "config": {
+                    "path": additional_data["colorspaceConfig"],
+                    "template": additional_data["colorspaceTemplate"]
+                },
+                "display": additional_data["display"],
+                "view": additional_data["view"]
+            }
+        }
+
+        # support conversion from tiled to scanline
+        if instance.data.get("convertToScanline"):
+            log.info("Adding scanline conversion.")
+            rep["tags"].append("toScanline")
+
+        # poor man's exclusion
+        if ext in skip_integration_repre_list:
+            rep["tags"].append("delete")
+
+        if preview:
+            new_instance["families"] = _add_review_families(
+                new_instance["families"])
+
+        new_instance["representations"] = [rep]
+
+        # if extending frames from existing version, copy files from there
+        # into our destination directory
+        if new_instance.get("extendFrames", False):
+            copy_extend_frames(new_instance, rep)
+        instances.append(new_instance)
+        log.debug("instances:{}".format(instances))
+    return instances
+
+
+def get_resources(project_name, version, extension=None):
+    """Get the files from the specific version.
+
+    This will return all files from the representation.
+
+    Todo:
+        This is a really weird function and its use is
+        highly controversial. First, it will probably not work
+        at all in the final release of AYON; second, the logic isn't sound.
+        It should try to find representation matching the current one -
+        because it is used to pull out files from previous version to
+        be included in this one.
+
+    .. deprecated:: 3.15.5
+        This won't work in AYON and even the logic must be refactored.
+
+    Args:
+        project_name (str): Name of the project.
+        version (dict): Version document.
+        extension (str): extension used to filter
+            representations.
+
+    Returns:
+        list: of files
+
+    """
+    warnings.warn((
+        "This won't work in AYON and even "
+        "the logic must be refactored."), DeprecationWarning)
+    extensions = []
+    if extension:
+        extensions = [extension]
+
+    # there is a `context_filter` argument that won't probably work in
+    # final release of AYON. So we'd rather not use it
+    repre_docs = list(get_representations(
+        project_name, version_ids=[version["_id"]]))
+
+    filtered = []
+    for doc in repre_docs:
+        if doc["context"]["ext"] in extensions:
+            filtered.append(doc)
+
+    representation = filtered[0]
+    directory = get_representation_path(representation)
+    print("Source: ", directory)
+    resources = sorted(
+        [
+            os.path.normpath(os.path.join(directory, file_name))
+            for file_name in os.listdir(directory)
+        ]
+    )
+
+    return resources
+
+
+def copy_extend_frames(instance, representation):
+    """Copy existing frames from latest version.
+
+    This will copy all existing frames from subset's latest version back
+    to render directory and rename them to what renderer is expecting.
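+
+    When 'overrideExistingFrame' is enabled on the instance, frames
+    expected to be re-rendered are skipped during the copy.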
+
+    Arguments:
+        instance (pyblish.plugin.Instance): instance to get required
+            data from
+        representation (dict): representation to operate on
+
+    """
+    import speedcopy
+
+    R_FRAME_NUMBER = re.compile(
+        r".+\.(?P<frame>[0-9]+)\..+")
+
+    log = Logger.get_logger("farm_publishing")
+    log.info("Preparing to copy ...")
+    start = instance.data.get("frameStart")
+    end = instance.data.get("frameEnd")
+    project_name = instance.context.data["project"]
+    anatomy = instance.context.data["anatomy"]  # type: Anatomy
+
+    # get latest version of subset
+    # this will stop if subset wasn't published yet
+
+    version = get_last_version_by_subset_name(
+        project_name,
+        instance.data.get("subset"),
+        asset_name=instance.data.get("asset")
+    )
+
+    # get its files based on extension
+    subset_resources = get_resources(
+        project_name, version, representation.get("ext")
+    )
+    r_col, _ = clique.assemble(subset_resources)
+
+    # if override remove all frames we are expecting to be rendered,
+    # so we'll copy only those missing from current render
+    if instance.data.get("overrideExistingFrame"):
+        for frame in range(start, end + 1):
+            if frame not in r_col.indexes:
+                continue
+            r_col.indexes.remove(frame)
+
+    # now we need to translate published names from representation
+    # back. This is tricky, right now we'll just use same naming
+    # and only switch frame numbers
+    resource_files = []
+    r_filename = os.path.basename(
+        representation.get("files")[0])  # first file
+    op = re.search(R_FRAME_NUMBER, r_filename)
+    assert op is not None, "padding string wasn't found"
+    pre = r_filename[:op.start("frame")]
+    post = r_filename[op.end("frame"):]
+    for frame in list(r_col):
+        fn = re.search(R_FRAME_NUMBER, frame)
+        # silencing linter as we need to compare to True, not to
+        # type
+        assert fn is not None, "padding string wasn't found"
+        # list of tuples (source, destination)
+        staging = representation.get("stagingDir")
+        staging = anatomy.fill_root(staging)
+        resource_files.append(
+            (frame, os.path.join(
+                staging, "{}{}{}".format(pre, fn["frame"], post)))
+        )
+
+    # test if destination dir exists and create it if not
+    output_dir = os.path.dirname(representation.get("files")[0])
+    if not os.path.isdir(output_dir):
+        os.makedirs(output_dir)
+
+    # copy files
+    for source in resource_files:
+        speedcopy.copy(source[0], source[1])
+        log.info("  > {}".format(source[1]))
+
+    log.info("Finished copying %i files" % len(resource_files))
+
+
+def attach_instances_to_subset(attach_to, instances):
+    """Attach instance to subset.
+
+    If we are attaching to other subsets, create copy of existing
+    instances, change data to match its subset and replace
+    existing instances with modified data.
+
+    Args:
+        attach_to (list): List of instances to attach to.
+        instances (list): List of instances to attach.
+
+    Returns:
+        list: List of attached instances.
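+
+    Example:
+        >>> # Hypothetical data - 'attach_to' comes from instance data,
+        >>> # 'instances' are skeleton instance dictionaries
+        >>> attached = attach_instances_to_subset(
+        ...     [{"version": 3, "subset": "modelMain", "family": "model"}],
+        ...     [{"subset": "renderMain", "family": "render",
+        ...       "subsetGroup": "renderGroup"}])
+        >>> attached[0]["subset"]
+        'modelMain'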
+ + """ + new_instances = [] + for attach_instance in attach_to: + for i in instances: + new_inst = copy.deepcopy(i) + new_inst["version"] = attach_instance.get("version") + new_inst["subset"] = attach_instance.get("subset") + new_inst["family"] = attach_instance.get("family") + new_inst["append"] = True + # don't set subsetGroup if we are attaching + new_inst.pop("subsetGroup") + new_instances.append(new_inst) + return new_instances + + +def create_metadata_path(instance, anatomy): + ins_data = instance.data + # Ensure output dir exists + output_dir = ins_data.get( + "publishRenderMetadataFolder", ins_data["outputDir"]) + + log = Logger.get_logger("farm_publishing") + + try: + if not os.path.isdir(output_dir): + os.makedirs(output_dir) + except OSError: + # directory is not available + log.warning("Path is unreachable: `{}`".format(output_dir)) + + metadata_filename = "{}_metadata.json".format(ins_data["subset"]) + + metadata_path = os.path.join(output_dir, metadata_filename) + + # Convert output dir to `{root}/rest/of/path/...` with Anatomy + success, rootless_mtdt_p = anatomy.find_root_template_from_path( + metadata_path) + if not success: + # `rootless_path` is not set to `output_dir` if none of roots match + log.warning(( + "Could not find root path for remapping \"{}\"." + " This may cause issues on farm." + ).format(output_dir)) + rootless_mtdt_p = metadata_path + + return metadata_path, rootless_mtdt_p diff --git a/openpype/pipeline/farm/pyblish_functions.pyi b/openpype/pipeline/farm/pyblish_functions.pyi new file mode 100644 index 0000000000..76f7c34dcd --- /dev/null +++ b/openpype/pipeline/farm/pyblish_functions.pyi @@ -0,0 +1,24 @@ +import pyblish.api +from openpype.pipeline import Anatomy +from typing import Tuple, Union, List + + +class TimeData: + start: int + end: int + fps: float | int + step: int + handle_start: int + handle_end: int + + def __init__(self, start: int, end: int, fps: float | int, step: int, handle_start: int, handle_end: int): + ... + ... + +def remap_source(source: str, anatomy: Anatomy): ... +def extend_frames(asset: str, subset: str, start: int, end: int) -> Tuple[int, int]: ... +def get_time_data_from_instance_or_context(instance: pyblish.api.Instance) -> TimeData: ... +def get_transferable_representations(instance: pyblish.api.Instance) -> list: ... +def create_skeleton_instance(instance: pyblish.api.Instance, families_transfer: list = ..., instance_transfer: dict = ...) -> dict: ... +def create_instances_for_aov(instance: pyblish.api.Instance, skeleton: dict, aov_filter: dict) -> List[pyblish.api.Instance]: ... +def attach_instances_to_subset(attach_to: list, instances: list) -> list: ... diff --git a/openpype/pipeline/farm/tools.py b/openpype/pipeline/farm/tools.py new file mode 100644 index 0000000000..f3acac7a32 --- /dev/null +++ b/openpype/pipeline/farm/tools.py @@ -0,0 +1,112 @@ +import os + + +def get_published_workfile_instance(context): + """Find workfile instance in context""" + for i in context: + is_workfile = ( + "workfile" in i.data.get("families", []) or + i.data["family"] == "workfile" + ) + if not is_workfile: + continue + + # test if there is instance of workfile waiting + # to be published. + if i.data["publish"] is not True: + continue + + return i + + +def from_published_scene(instance, replace_in_path=True): + """Switch work scene for published scene. + + If rendering/exporting from published scenes is enabled, this will + replace paths from working scene to published scene. 
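+
+    The switch is done by file name replacement: the working scene name
+    is swapped for the published scene name in 'expectedFiles' and in
+    the render metadata folder path.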
+
+    Args:
+        instance (pyblish.api.Instance): Instance data to process.
+        replace_in_path (bool): if True, it will try to find
+            old scene name in path of expected files and replace it
+            with name of published scene.
+
+    Returns:
+        str: Published scene path.
+        None: if no published scene is found.
+
+    Note:
+        Published scene path is actually determined from project Anatomy
+        as the scene may still be unpublished at the time this plugin
+        runs.
+
+    """
+    workfile_instance = get_published_workfile_instance(instance.context)
+    if workfile_instance is None:
+        return
+
+    # determine published path from Anatomy.
+    template_data = workfile_instance.data.get("anatomyData")
+    rep = workfile_instance.data["representations"][0]
+    template_data["representation"] = rep.get("name")
+    template_data["ext"] = rep.get("ext")
+    template_data["comment"] = None
+
+    anatomy = instance.context.data['anatomy']
+    template_obj = anatomy.templates_obj["publish"]["path"]
+    template_filled = template_obj.format_strict(template_data)
+    file_path = os.path.normpath(template_filled)
+
+    if not os.path.exists(file_path):
+        raise RuntimeError(
+            "Published scene does not exist: '{}'".format(file_path))
+
+    if not replace_in_path:
+        return file_path
+
+    # now we need to switch scene in expected files
+    # because token will now point to published
+    # scene file and that might differ from current one
+    def _clean_name(path):
+        return os.path.splitext(os.path.basename(path))[0]
+
+    new_scene = _clean_name(file_path)
+    orig_scene = _clean_name(instance.context.data["currentFile"])
+    expected_files = instance.data.get("expectedFiles")
+
+    if isinstance(expected_files[0], dict):
+        # we have aovs and we need to iterate over them
+        new_exp = {}
+        for aov, files in expected_files[0].items():
+            replaced_files = []
+            for f in files:
+                replaced_files.append(
+                    str(f).replace(orig_scene, new_scene)
+                )
+            new_exp[aov] = replaced_files
+        # [] might be too much here, TODO
+        instance.data["expectedFiles"] = [new_exp]
+    else:
+        new_exp = []
+        for f in expected_files:
+            new_exp.append(
+                str(f).replace(orig_scene, new_scene)
+            )
+        instance.data["expectedFiles"] = new_exp
+
+    metadata_folder = instance.data.get("publishRenderMetadataFolder")
+    if metadata_folder:
+        metadata_folder = metadata_folder.replace(orig_scene,
+                                                  new_scene)
+        instance.data["publishRenderMetadataFolder"] = metadata_folder
+
+    return file_path
+
+
+def iter_expected_files(exp):
+    if isinstance(exp[0], dict):
+        for _aov, files in exp[0].items():
+            for file in files:
+                yield file
+    else:
+        for file in exp:
+            yield file
diff --git a/openpype/pipeline/legacy_io.py b/openpype/pipeline/legacy_io.py
index bde2b24c2a..60fa035c22 100644
--- a/openpype/pipeline/legacy_io.py
+++ b/openpype/pipeline/legacy_io.py
@@ -4,6 +4,7 @@ import sys
 import logging
 import functools
 
+from openpype import AYON_SERVER_ENABLED
 from . 
import schema from .mongodb import AvalonMongoDB, session_data_from_environment @@ -39,8 +40,9 @@ def install(): _connection_object.Session.update(session) _connection_object.install() - module._mongo_client = _connection_object.mongo_client - module._database = module.database = _connection_object.database + if not AYON_SERVER_ENABLED: + module._mongo_client = _connection_object.mongo_client + module._database = module.database = _connection_object.database module._is_installed = True diff --git a/openpype/pipeline/load/__init__.py b/openpype/pipeline/load/__init__.py index 7320a9f0e8..ca11b26211 100644 --- a/openpype/pipeline/load/__init__.py +++ b/openpype/pipeline/load/__init__.py @@ -32,6 +32,7 @@ from .utils import ( loaders_from_repre_context, loaders_from_representation, + filter_repre_contexts_by_loader, any_outdated_containers, get_outdated_containers, @@ -85,6 +86,7 @@ __all__ = ( "loaders_from_repre_context", "loaders_from_representation", + "filter_repre_contexts_by_loader", "any_outdated_containers", "get_outdated_containers", diff --git a/openpype/pipeline/load/plugins.py b/openpype/pipeline/load/plugins.py index e380d65bbe..8acfcfdb6c 100644 --- a/openpype/pipeline/load/plugins.py +++ b/openpype/pipeline/load/plugins.py @@ -39,9 +39,6 @@ class LoaderPlugin(list): log = logging.getLogger("SubsetLoader") log.propagate = True - def __init__(self, context): - self.fname = self.filepath_from_context(context) - @classmethod def apply_settings(cls, project_settings, system_settings): host_name = os.environ.get("AVALON_APP") @@ -237,6 +234,19 @@ class LoaderPlugin(list): """ return cls.options or [] + @property + def fname(self): + """Backwards compatibility with deprecation warning""" + + self.log.warning(( + "DEPRECATION WARNING: Source - Loader plugin {}." + " The 'fname' property on the Loader plugin will be removed in" + " future versions of OpenPype. Planned version to drop the support" + " is 3.16.6 or 3.17.0." + ).format(self.__class__.__name__)) + if hasattr(self, "_fname"): + return self._fname + class SubsetLoaderPlugin(LoaderPlugin): """Load subset into host application @@ -246,9 +256,6 @@ class SubsetLoaderPlugin(LoaderPlugin): namespace (str, optional): Use pre-defined namespace """ - def __init__(self, context): - pass - def discover_loader_plugins(project_name=None): from openpype.lib import Logger diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py index 2c40280ccd..c81aeff6bd 100644 --- a/openpype/pipeline/load/utils.py +++ b/openpype/pipeline/load/utils.py @@ -314,7 +314,13 @@ def load_with_repre_context( ) ) - loader = Loader(repre_context) + loader = Loader() + + # Backwards compatibility: Originally the loader's __init__ required the + # representation context to set `fname` attribute to the filename to load + # Deprecated - to be removed in OpenPype 3.16.6 or 3.17.0. 
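+    # The value is read back by the deprecated 'fname' property on the
+    # plugin, which emits a warning before returning it.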
+ loader._fname = get_representation_path_from_context(repre_context) + return loader.load(repre_context, name, namespace, options) @@ -338,8 +344,7 @@ def load_with_subset_context( ) ) - loader = Loader(subset_context) - return loader.load(subset_context, name, namespace, options) + return Loader().load(subset_context, name, namespace, options) def load_with_subset_contexts( @@ -364,8 +369,7 @@ def load_with_subset_contexts( "Running '{}' on '{}'".format(Loader.__name__, joined_subset_names) ) - loader = Loader(subset_contexts) - return loader.load(subset_contexts, name, namespace, options) + return Loader().load(subset_contexts, name, namespace, options) def load_container( @@ -447,8 +451,7 @@ def remove_container(container): .format(container.get("loader")) ) - loader = Loader(get_representation_context(container["representation"])) - return loader.remove(container) + return Loader().remove(container) def update_container(container, version=-1): @@ -498,8 +501,7 @@ def update_container(container, version=-1): .format(container.get("loader")) ) - loader = Loader(get_representation_context(container["representation"])) - return loader.update(container, new_representation) + return Loader().update(container, new_representation) def switch_container(container, representation, loader_plugin=None): @@ -635,7 +637,7 @@ def get_representation_path(representation, root=None, dbcon=None): root = registered_root() - def path_from_represenation(): + def path_from_representation(): try: template = representation["data"]["template"] except KeyError: @@ -759,7 +761,7 @@ def get_representation_path(representation, root=None, dbcon=None): return os.path.normpath(path) return ( - path_from_represenation() or + path_from_representation() or path_from_config() or path_from_data() ) @@ -788,6 +790,24 @@ def loaders_from_repre_context(loaders, repre_context): ] +def filter_repre_contexts_by_loader(repre_contexts, loader): + """Filter representation contexts for loader. + + Args: + repre_contexts (list[dict[str, Ant]]): Representation context. + loader (LoaderPlugin): Loader plugin to filter contexts for. + + Returns: + list[dict[str, Any]]: Filtered representation contexts. + """ + + return [ + repre_context + for repre_context in repre_contexts + if is_compatible_loader(loader, repre_context) + ] + + def loaders_from_representation(loaders, representation): """Return all compatible loaders for a representation.""" diff --git a/openpype/pipeline/mongodb.py b/openpype/pipeline/mongodb.py index be2b67a5e7..41a44c7373 100644 --- a/openpype/pipeline/mongodb.py +++ b/openpype/pipeline/mongodb.py @@ -5,6 +5,7 @@ import logging import pymongo from uuid import uuid4 +from openpype import AYON_SERVER_ENABLED from openpype.client import OpenPypeMongoConnection from . 
import schema @@ -187,7 +188,8 @@ class AvalonMongoDB: return self._installed = True - self._database = self.mongo_client[str(os.environ["AVALON_DB"])] + if not AYON_SERVER_ENABLED: + self._database = self.mongo_client[str(os.environ["AVALON_DB"])] def uninstall(self): """Close any connection to the database""" diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py index 0c57915c05..3a82d6f565 100644 --- a/openpype/pipeline/publish/__init__.py +++ b/openpype/pipeline/publish/__init__.py @@ -40,6 +40,7 @@ from .lib import ( apply_plugin_settings_automatically, get_plugin_settings, get_publish_instance_label, + get_publish_instance_families, ) from .abstract_expected_files import ExpectedFiles @@ -87,6 +88,7 @@ __all__ = ( "apply_plugin_settings_automatically", "get_plugin_settings", "get_publish_instance_label", + "get_publish_instance_families", "ExpectedFiles", diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py index 6877d556c3..8a26402bd8 100644 --- a/openpype/pipeline/publish/abstract_collect_render.py +++ b/openpype/pipeline/publish/abstract_collect_render.py @@ -75,7 +75,6 @@ class RenderInstance(object): tilesY = attr.ib(default=0) # number of tiles in Y # submit_publish_job - toBeRenderedOn = attr.ib(default=None) deadlineSubmissionJob = attr.ib(default=None) anatomyData = attr.ib(default=None) outputDir = attr.ib(default=None) diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py index 0961d79234..4d9443f635 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -464,9 +464,8 @@ def apply_plugin_settings_automatically(plugin, settings, logger=None): for option, value in settings.items(): if logger: - logger.debug("Plugin {} - Attr: {} -> {}".format( - option, value, plugin.__name__ - )) + logger.debug("Plugin %s - Attr: %s -> %s", + plugin.__name__, option, value) setattr(plugin, option, value) @@ -537,44 +536,24 @@ def filter_pyblish_plugins(plugins): plugins.remove(plugin) -def find_close_plugin(close_plugin_name, log): - if close_plugin_name: - plugins = pyblish.api.discover() - for plugin in plugins: - if plugin.__name__ == close_plugin_name: - return plugin - - log.debug("Close plugin not found, app might not close.") - - -def remote_publish(log, close_plugin_name=None, raise_error=False): +def remote_publish(log): """Loops through all plugins, logs to console. Used for tests. Args: log (Logger) - close_plugin_name (str): name of plugin with responsibility to - close host app """ - # Error exit as soon as any error occurs. - error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" - close_plugin = find_close_plugin(close_plugin_name, log) + # Error exit as soon as any error occurs. 
+    error_format = "Failed {plugin.__name__}: {error}\n{error.traceback}"
+
     for result in pyblish.util.publish_iter():
-        for record in result["records"]:
-            log.info("{}: {}".format(
-                result["plugin"].label, record.msg))
+        if not result["error"]:
+            continue
 
-        if result["error"]:
-            error_message = error_format.format(**result)
-            log.error(error_message)
-            if close_plugin:  # close host app explicitly after error
-                context = pyblish.api.Context()
-                close_plugin().process(context)
-            if raise_error:
-                # Fatal Error is because of Deadline
-                error_message = "Fatal Error: " + error_format.format(**result)
-                raise RuntimeError(error_message)
+        error_message = error_format.format(**result)
+        log.error(error_message)
+        # 'Fatal Error: ' is because of Deadline
+        raise RuntimeError("Fatal Error: {}".format(error_message))
 
 
 def get_errored_instances_from_context(context, plugin=None):
@@ -869,6 +848,111 @@ def _validate_transient_template(project_name, template_name, anatomy):
         ).format(template_name, project_name))
 
 
+def get_published_workfile_instance(context):
+    """Find workfile instance in context"""
+    for i in context:
+        is_workfile = (
+            "workfile" in i.data.get("families", []) or
+            i.data["family"] == "workfile"
+        )
+        if not is_workfile:
+            continue
+
+        # test if there is instance of workfile waiting
+        # to be published.
+        if not i.data.get("publish", True):
+            continue
+
+        return i
+
+
+def replace_with_published_scene_path(instance, replace_in_path=True):
+    """Switch work scene path for published scene.
+
+    If rendering/exporting from published scenes is enabled, this will
+    replace paths from working scene to published scene.
+
+    This only works if publish contains workfile instance!
+
+    Args:
+        instance (pyblish.api.Instance): Pyblish instance.
+        replace_in_path (bool): if True, it will try to find
+            old scene name in path of expected files and replace it
+            with name of published scene.
+
+    Returns:
+        str: Published scene path.
+        None: if no published scene is found.
+
+    Note:
+        Published scene path is actually determined from project Anatomy
+        as the scene may still be unpublished at the time this plugin
+        runs.
+    """
+    log = Logger.get_logger("published_workfile")
+    workfile_instance = get_published_workfile_instance(instance.context)
+    if workfile_instance is None:
+        return
+
+    # determine published path from Anatomy.
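+    # The workfile representation's own anatomy data is used to fill the
+    # publish template, so the path matches what the integrator will
+    # produce for that representation.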
+    template_data = workfile_instance.data.get("anatomyData")
+    rep = workfile_instance.data["representations"][0]
+    template_data["representation"] = rep.get("name")
+    template_data["ext"] = rep.get("ext")
+    template_data["comment"] = None
+
+    anatomy = instance.context.data['anatomy']
+    anatomy_filled = anatomy.format(template_data)
+    template_filled = anatomy_filled["publish"]["path"]
+    file_path = os.path.normpath(template_filled)
+
+    log.info("Using published scene for render {}".format(file_path))
+
+    if not os.path.exists(file_path):
+        log.error("published scene does not exist!")
+        raise RuntimeError(
+            "Published scene does not exist: '{}'".format(file_path))
+
+    if not replace_in_path:
+        return file_path
+
+    # now we need to switch scene in expected files
+    # because token will now point to published
+    # scene file and that might differ from current one
+    def _clean_name(path):
+        return os.path.splitext(os.path.basename(path))[0]
+
+    new_scene = _clean_name(file_path)
+    orig_scene = _clean_name(instance.context.data["currentFile"])
+    expected_files = instance.data.get("expectedFiles")
+
+    if isinstance(expected_files[0], dict):
+        # we have aovs and we need to iterate over them
+        new_exp = {}
+        for aov, files in expected_files[0].items():
+            replaced_files = []
+            for f in files:
+                replaced_files.append(
+                    str(f).replace(orig_scene, new_scene)
+                )
+            new_exp[aov] = replaced_files
+        # [] might be too much here, TODO
+        instance.data["expectedFiles"] = [new_exp]
+    else:
+        new_exp = []
+        for f in expected_files:
+            new_exp.append(
+                str(f).replace(orig_scene, new_scene)
+            )
+        instance.data["expectedFiles"] = new_exp
+
+    metadata_folder = instance.data.get("publishRenderMetadataFolder")
+    if metadata_folder:
+        metadata_folder = metadata_folder.replace(orig_scene,
+                                                  new_scene)
+        instance.data["publishRenderMetadataFolder"] = metadata_folder
+
+    log.info("Scene name was switched {} -> {}".format(
+        orig_scene, new_scene
+    ))
+
+    return file_path
+
+
 def add_repre_files_for_cleanup(instance, repre):
     """ Explicitly mark repre files to be deleted.
 
@@ -877,7 +961,16 @@ def add_repre_files_for_cleanup(instance, repre):
     """
     files = repre["files"]
     staging_dir = repre.get("stagingDir")
-    if not staging_dir:
+
+    # first make sure representation level is not persistent
+    if (
+        not staging_dir
+        or repre.get("stagingDir_persistent")
+    ):
+        return
+
+    # then look into instance level if it's not persistent
+    if instance.data.get("stagingDir_persistent"):
         return
 
     if isinstance(files, str):
@@ -909,3 +1002,27 @@ def get_publish_instance_label(instance):
         or instance.data.get("name")
         or str(instance)
     )
+
+
+def get_publish_instance_families(instance):
+    """Get all families of the instance.
+
+    Look for families under 'family' and 'families' keys in instance data.
+    Value of 'family' is used as first family and then all other families
+    in random order.
+
+    Args:
+        instance (pyblish.api.Instance): Instance to get families from.
+
+    Returns:
+        list[str]: List of families.
+ """ + + family = instance.data.get("family") + families = set(instance.data.get("families") or []) + output = [] + if family: + output.append(family) + families.discard(family) + output.extend(families) + return output diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py index ba3be6397e..ae6cbc42d1 100644 --- a/openpype/pipeline/publish/publish_plugins.py +++ b/openpype/pipeline/publish/publish_plugins.py @@ -1,6 +1,5 @@ import inspect from abc import ABCMeta -from pprint import pformat import pyblish.api from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS @@ -14,9 +13,8 @@ from .lib import ( ) from openpype.pipeline.colorspace import ( - get_imageio_colorspace_from_filepath, - get_imageio_config, - get_imageio_file_rules + get_colorspace_settings_from_publish_context, + set_colorspace_data_to_representation ) @@ -306,12 +304,8 @@ class ColormanagedPyblishPluginMixin(object): matching colorspace from rules. Finally, it infuses this data into the representation. """ - allowed_ext = set( - ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS) - ) - @staticmethod - def get_colorspace_settings(context): + def get_colorspace_settings(self, context): """Returns solved settings for the host context. Args: @@ -320,50 +314,18 @@ class ColormanagedPyblishPluginMixin(object): Returns: tuple | bool: config, file rules or None """ - if "imageioSettings" in context.data: - return context.data["imageioSettings"] - - project_name = context.data["projectName"] - host_name = context.data["hostName"] - anatomy_data = context.data["anatomyData"] - project_settings_ = context.data["project_settings"] - - config_data = get_imageio_config( - project_name, host_name, - project_settings=project_settings_, - anatomy_data=anatomy_data - ) - - # in case host color management is not enabled - if not config_data: - return None - - file_rules = get_imageio_file_rules( - project_name, host_name, - project_settings=project_settings_ - ) - - # caching settings for future instance processing - context.data["imageioSettings"] = (config_data, file_rules) - - return config_data, file_rules + return get_colorspace_settings_from_publish_context(context.data) def set_representation_colorspace( self, representation, context, colorspace=None, - colorspace_settings=None ): """Sets colorspace data to representation. Args: representation (dict): publishing representation context (publish.Context): publishing context - config_data (dict): host resolved config data - file_rules (dict): host resolved file rules data colorspace (str, optional): colorspace name. Defaults to None. - colorspace_settings (tuple[dict, dict], optional): - Settings for config_data and file_rules. - Defaults to None. 
Example: ``` @@ -380,64 +342,10 @@ class ColormanagedPyblishPluginMixin(object): ``` """ - ext = representation["ext"] - # check extension - self.log.debug("__ ext: `{}`".format(ext)) - # check if ext in lower case is in self.allowed_ext - if ext.lstrip(".").lower() not in self.allowed_ext: - self.log.debug( - "Extension '{}' is not in allowed extensions.".format(ext) - ) - return - - if colorspace_settings is None: - colorspace_settings = self.get_colorspace_settings(context) - - # in case host color management is not enabled - if not colorspace_settings: - self.log.warning("Host's colorspace management is disabled.") - return - - # unpack colorspace settings - config_data, file_rules = colorspace_settings - - if not config_data: - # warn in case no colorspace path was defined - self.log.warning("No colorspace management was defined") - return - - self.log.debug("Config data is: `{}`".format(config_data)) - - project_name = context.data["projectName"] - host_name = context.data["hostName"] - project_settings = context.data["project_settings"] - - # get one filename - filename = representation["files"] - if isinstance(filename, list): - filename = filename[0] - - self.log.debug("__ filename: `{}`".format(filename)) - - # get matching colorspace from rules - colorspace = colorspace or get_imageio_colorspace_from_filepath( - filename, host_name, project_name, - config_data=config_data, - file_rules=file_rules, - project_settings=project_settings + # using cached settings if available + set_colorspace_data_to_representation( + representation, context.data, + colorspace, + log=self.log ) - self.log.debug("__ colorspace: `{}`".format(colorspace)) - - # infuse data to representation - if colorspace: - colorspace_data = { - "colorspace": colorspace, - "config": config_data - } - - # update data key - representation["colorspaceData"] = colorspace_data - - self.log.debug("__ colorspace_data: `{}`".format( - pformat(colorspace_data))) diff --git a/openpype/pipeline/schema.py b/openpype/pipeline/schema/__init__.py similarity index 92% rename from openpype/pipeline/schema.py rename to openpype/pipeline/schema/__init__.py index 7e96bfe1b1..d7b33f2621 100644 --- a/openpype/pipeline/schema.py +++ b/openpype/pipeline/schema/__init__.py @@ -24,6 +24,7 @@ log_ = logging.getLogger(__name__) ValidationError = jsonschema.ValidationError SchemaError = jsonschema.SchemaError +CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) _CACHED = False @@ -121,17 +122,14 @@ def _precache(): """Store available schemas in-memory for reduced disk access""" global _CACHED - repos_root = os.environ["OPENPYPE_REPOS_ROOT"] - schema_dir = os.path.join(repos_root, "schema") - - for schema in os.listdir(schema_dir): + for schema in os.listdir(CURRENT_DIR): if schema.startswith(("_", ".")): continue if not schema.endswith(".json"): continue - if not os.path.isfile(os.path.join(schema_dir, schema)): + if not os.path.isfile(os.path.join(CURRENT_DIR, schema)): continue - with open(os.path.join(schema_dir, schema)) as f: + with open(os.path.join(CURRENT_DIR, schema)) as f: log_.debug("Installing schema '%s'.." 
% schema) _cache[schema] = json.load(f) _CACHED = True diff --git a/schema/application-1.0.json b/openpype/pipeline/schema/application-1.0.json similarity index 100% rename from schema/application-1.0.json rename to openpype/pipeline/schema/application-1.0.json diff --git a/schema/asset-1.0.json b/openpype/pipeline/schema/asset-1.0.json similarity index 100% rename from schema/asset-1.0.json rename to openpype/pipeline/schema/asset-1.0.json diff --git a/schema/asset-2.0.json b/openpype/pipeline/schema/asset-2.0.json similarity index 100% rename from schema/asset-2.0.json rename to openpype/pipeline/schema/asset-2.0.json diff --git a/schema/asset-3.0.json b/openpype/pipeline/schema/asset-3.0.json similarity index 100% rename from schema/asset-3.0.json rename to openpype/pipeline/schema/asset-3.0.json diff --git a/schema/config-1.0.json b/openpype/pipeline/schema/config-1.0.json similarity index 100% rename from schema/config-1.0.json rename to openpype/pipeline/schema/config-1.0.json diff --git a/schema/config-1.1.json b/openpype/pipeline/schema/config-1.1.json similarity index 100% rename from schema/config-1.1.json rename to openpype/pipeline/schema/config-1.1.json diff --git a/schema/config-2.0.json b/openpype/pipeline/schema/config-2.0.json similarity index 100% rename from schema/config-2.0.json rename to openpype/pipeline/schema/config-2.0.json diff --git a/schema/container-1.0.json b/openpype/pipeline/schema/container-1.0.json similarity index 100% rename from schema/container-1.0.json rename to openpype/pipeline/schema/container-1.0.json diff --git a/schema/container-2.0.json b/openpype/pipeline/schema/container-2.0.json similarity index 100% rename from schema/container-2.0.json rename to openpype/pipeline/schema/container-2.0.json diff --git a/schema/hero_version-1.0.json b/openpype/pipeline/schema/hero_version-1.0.json similarity index 100% rename from schema/hero_version-1.0.json rename to openpype/pipeline/schema/hero_version-1.0.json diff --git a/schema/inventory-1.0.json b/openpype/pipeline/schema/inventory-1.0.json similarity index 100% rename from schema/inventory-1.0.json rename to openpype/pipeline/schema/inventory-1.0.json diff --git a/schema/inventory-1.1.json b/openpype/pipeline/schema/inventory-1.1.json similarity index 100% rename from schema/inventory-1.1.json rename to openpype/pipeline/schema/inventory-1.1.json diff --git a/schema/project-2.0.json b/openpype/pipeline/schema/project-2.0.json similarity index 100% rename from schema/project-2.0.json rename to openpype/pipeline/schema/project-2.0.json diff --git a/schema/project-2.1.json b/openpype/pipeline/schema/project-2.1.json similarity index 100% rename from schema/project-2.1.json rename to openpype/pipeline/schema/project-2.1.json diff --git a/schema/project-3.0.json b/openpype/pipeline/schema/project-3.0.json similarity index 100% rename from schema/project-3.0.json rename to openpype/pipeline/schema/project-3.0.json diff --git a/schema/representation-1.0.json b/openpype/pipeline/schema/representation-1.0.json similarity index 100% rename from schema/representation-1.0.json rename to openpype/pipeline/schema/representation-1.0.json diff --git a/schema/representation-2.0.json b/openpype/pipeline/schema/representation-2.0.json similarity index 100% rename from schema/representation-2.0.json rename to openpype/pipeline/schema/representation-2.0.json diff --git a/schema/session-1.0.json b/openpype/pipeline/schema/session-1.0.json similarity index 100% rename from schema/session-1.0.json rename to 
openpype/pipeline/schema/session-1.0.json diff --git a/schema/session-2.0.json b/openpype/pipeline/schema/session-2.0.json similarity index 100% rename from schema/session-2.0.json rename to openpype/pipeline/schema/session-2.0.json diff --git a/schema/session-3.0.json b/openpype/pipeline/schema/session-3.0.json similarity index 100% rename from schema/session-3.0.json rename to openpype/pipeline/schema/session-3.0.json diff --git a/schema/shaders-1.0.json b/openpype/pipeline/schema/shaders-1.0.json similarity index 100% rename from schema/shaders-1.0.json rename to openpype/pipeline/schema/shaders-1.0.json diff --git a/schema/subset-1.0.json b/openpype/pipeline/schema/subset-1.0.json similarity index 100% rename from schema/subset-1.0.json rename to openpype/pipeline/schema/subset-1.0.json diff --git a/schema/subset-2.0.json b/openpype/pipeline/schema/subset-2.0.json similarity index 100% rename from schema/subset-2.0.json rename to openpype/pipeline/schema/subset-2.0.json diff --git a/schema/subset-3.0.json b/openpype/pipeline/schema/subset-3.0.json similarity index 100% rename from schema/subset-3.0.json rename to openpype/pipeline/schema/subset-3.0.json diff --git a/schema/thumbnail-1.0.json b/openpype/pipeline/schema/thumbnail-1.0.json similarity index 100% rename from schema/thumbnail-1.0.json rename to openpype/pipeline/schema/thumbnail-1.0.json diff --git a/schema/version-1.0.json b/openpype/pipeline/schema/version-1.0.json similarity index 100% rename from schema/version-1.0.json rename to openpype/pipeline/schema/version-1.0.json diff --git a/schema/version-2.0.json b/openpype/pipeline/schema/version-2.0.json similarity index 100% rename from schema/version-2.0.json rename to openpype/pipeline/schema/version-2.0.json diff --git a/schema/version-3.0.json b/openpype/pipeline/schema/version-3.0.json similarity index 100% rename from schema/version-3.0.json rename to openpype/pipeline/schema/version-3.0.json diff --git a/schema/workfile-1.0.json b/openpype/pipeline/schema/workfile-1.0.json similarity index 100% rename from schema/workfile-1.0.json rename to openpype/pipeline/schema/workfile-1.0.json diff --git a/openpype/pipeline/template_data.py b/openpype/pipeline/template_data.py index 627eba5c3d..a48f0721b6 100644 --- a/openpype/pipeline/template_data.py +++ b/openpype/pipeline/template_data.py @@ -94,6 +94,9 @@ def get_asset_template_data(asset_doc, project_name): return { "asset": asset_doc["name"], + "folder": { + "name": asset_doc["name"] + }, "hierarchy": hierarchy, "parent": parent_name } @@ -128,7 +131,7 @@ def get_task_template_data(project_doc, asset_doc, task_name): Args: project_doc (Dict[str, Any]): Queried project document. asset_doc (Dict[str, Any]): Queried asset document. - tas_name (str): Name of task for which data should be returned. + task_name (str): Name of task for which data should be returned. Returns: Dict[str, Dict[str, str]]: Template data diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py index 39f3e17893..63c55d0c19 100644 --- a/openpype/pipeline/thumbnail.py +++ b/openpype/pipeline/thumbnail.py @@ -2,6 +2,8 @@ import os import copy import logging +from openpype import AYON_SERVER_ENABLED +from openpype.lib import Logger from openpype.client import get_project from . 
import legacy_io from .anatomy import Anatomy @@ -10,13 +12,13 @@ from .plugin_discover import ( register_plugin, register_plugin_path, ) -log = logging.getLogger(__name__) def get_thumbnail_binary(thumbnail_entity, thumbnail_type, dbcon=None): if not thumbnail_entity: return + log = Logger.get_logger(__name__) resolvers = discover_thumbnail_resolvers() resolvers = sorted(resolvers, key=lambda cls: cls.priority) if dbcon is None: @@ -131,6 +133,66 @@ class BinaryThumbnail(ThumbnailResolver): return thumbnail_entity["data"].get("binary_data") +class ServerThumbnailResolver(ThumbnailResolver): + _cache = None + + @classmethod + def _get_cache(cls): + if cls._cache is None: + from openpype.client.server.thumbnails import AYONThumbnailCache + + cls._cache = AYONThumbnailCache() + return cls._cache + + def process(self, thumbnail_entity, thumbnail_type): + if not AYON_SERVER_ENABLED: + return None + data = thumbnail_entity["data"] + entity_type = data.get("entity_type") + entity_id = data.get("entity_id") + if not entity_type or not entity_id: + return None + + import ayon_api + + project_name = self.dbcon.active_project() + thumbnail_id = thumbnail_entity["_id"] + + cache = self._get_cache() + filepath = cache.get_thumbnail_filepath(project_name, thumbnail_id) + if filepath: + with open(filepath, "rb") as stream: + return stream.read() + + # This is the new way thumbnails can be received from the server + # - output is a 'ThumbnailContent' object + # NOTE Use 'get_server_api_connection' because public function + # 'get_thumbnail_by_id' does not return output of 'ServerAPI' + # method. + con = ayon_api.get_server_api_connection() + if hasattr(con, "get_thumbnail_by_id"): + result = con.get_thumbnail_by_id(thumbnail_id) + if result.is_valid: + filepath = cache.store_thumbnail( + project_name, + thumbnail_id, + result.content, + result.content_type + ) + else: + # Backwards compatibility for ayon api where 'get_thumbnail_by_id' + # is not implemented and the output is a filepath + filepath = con.get_thumbnail( + project_name, entity_type, entity_id, thumbnail_id + ) + + if not filepath: + return None + + with open(filepath, "rb") as stream: + return stream.read() + + # Thumbnail resolvers def discover_thumbnail_resolvers(): return discover(ThumbnailResolver) @@ -146,3 +208,4 @@ def register_thumbnail_resolver_path(path): register_thumbnail_resolver(TemplateResolver) register_thumbnail_resolver(BinaryThumbnail) +register_thumbnail_resolver(ServerThumbnailResolver)
diff --git a/openpype/pipeline/version_start.py b/openpype/pipeline/version_start.py new file mode 100644 index 0000000000..0240ab0c7a --- /dev/null +++ b/openpype/pipeline/version_start.py @@ -0,0 +1,37 @@ +from openpype.lib.profiles_filtering import filter_profiles +from openpype.settings import get_project_settings + + +def get_versioning_start( + project_name, + host_name, + task_name=None, + task_type=None, + family=None, + subset=None, + project_settings=None, +): + """Get anatomy versioning start""" + if not project_settings: + project_settings = get_project_settings(project_name) + + version_start = 1 + settings = project_settings["global"] + profiles = settings.get("version_start_category", {}).get("profiles", []) + + if not profiles: + return version_start + + filtering_criteria = { + "host_names": host_name, + "families": family, + "task_names": task_name, + "task_types": task_type, + "subsets": subset + } + profile = filter_profiles(profiles, filtering_criteria) + + if profile is None: + return version_start + + return 
profile["version_start"] diff --git a/openpype/pipeline/workfile/build_workfile.py b/openpype/pipeline/workfile/build_workfile.py index 8329487839..7b153d37b9 100644 --- a/openpype/pipeline/workfile/build_workfile.py +++ b/openpype/pipeline/workfile/build_workfile.py @@ -9,7 +9,6 @@ from '~/openpype/pipeline/workfile/workfile_template_builder'. Which gives more abilities to define how build happens but require more code to achive it. """ -import os import re import collections import json @@ -26,7 +25,6 @@ from openpype.lib import ( filter_profiles, Logger, ) -from openpype.pipeline import legacy_io from openpype.pipeline.load import ( discover_loader_plugins, IncompatibleLoaderError, @@ -102,11 +100,17 @@ class BuildWorkfile: List[Dict[str, Any]]: Loaded containers during build. """ + from openpype.pipeline.context_tools import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name, + ) + loaded_containers = [] # Get current asset name and entity - project_name = legacy_io.active_project() - current_asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + current_asset_name = get_current_asset_name() current_asset_entity = get_asset_by_name( project_name, current_asset_name ) @@ -135,7 +139,7 @@ class BuildWorkfile: return loaded_containers # Get current task name - current_task_name = legacy_io.Session["AVALON_TASK"] + current_task_name = get_current_task_name() # Load workfile presets for task self.build_presets = self.get_build_presets( @@ -236,9 +240,14 @@ class BuildWorkfile: Dict[str, Any]: preset per entered task name """ - host_name = os.environ["AVALON_APP"] + from openpype.pipeline.context_tools import ( + get_current_host_name, + get_current_project_name, + ) + + host_name = get_current_host_name() project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] + get_current_project_name() ) host_settings = project_settings.get(host_name) or {} @@ -651,13 +660,15 @@ class BuildWorkfile: ``` """ + from openpype.pipeline.context_tools import get_current_project_name + output = {} if not asset_docs: return output asset_docs_by_ids = {asset["_id"]: asset for asset in asset_docs} - project_name = legacy_io.active_project() + project_name = get_current_project_name() subsets = list(get_subsets( project_name, asset_ids=asset_docs_by_ids.keys() )) diff --git a/openpype/pipeline/workfile/path_resolving.py b/openpype/pipeline/workfile/path_resolving.py index 15689f4d99..78acee20da 100644 --- a/openpype/pipeline/workfile/path_resolving.py +++ b/openpype/pipeline/workfile/path_resolving.py @@ -10,7 +10,7 @@ from openpype.lib import ( Logger, StringTemplate, ) -from openpype.pipeline import Anatomy +from openpype.pipeline import version_start, Anatomy from openpype.pipeline.template_data import get_template_data @@ -316,7 +316,13 @@ def get_last_workfile( ) if filename is None: data = copy.deepcopy(fill_data) - data["version"] = 1 + data["version"] = version_start.get_versioning_start( + data["project"]["name"], + data["app"], + task_name=data["task"]["name"], + task_type=data["task"]["type"], + family="workfile" + ) data.pop("comment", None) if not data.get("ext"): data["ext"] = extensions[0] diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py index e1013b2645..b218a34868 100644 --- a/openpype/pipeline/workfile/workfile_template_builder.py +++ b/openpype/pipeline/workfile/workfile_template_builder.py @@ -28,8 +28,7 @@ from openpype.settings 
import ( get_project_settings, get_system_settings, ) -from openpype.host import IWorkfileHost -from openpype.host import HostBase +from openpype.host import IWorkfileHost, HostBase from openpype.lib import ( Logger, StringTemplate, @@ -37,7 +36,7 @@ from openpype.lib import ( attribute_definitions, ) from openpype.lib.attribute_definitions import get_attributes_keys -from openpype.pipeline import legacy_io, Anatomy +from openpype.pipeline import Anatomy from openpype.pipeline.load import ( get_loaders_by_name, get_contexts_for_repre_docs, @@ -125,15 +124,30 @@ class AbstractTemplateBuilder(object): @property def project_name(self): - return legacy_io.active_project() + if isinstance(self._host, HostBase): + return self._host.get_current_project_name() + return os.getenv("AVALON_PROJECT") @property def current_asset_name(self): - return legacy_io.Session["AVALON_ASSET"] + if isinstance(self._host, HostBase): + return self._host.get_current_asset_name() + return os.getenv("AVALON_ASSET") @property def current_task_name(self): - return legacy_io.Session["AVALON_TASK"] + if isinstance(self._host, HostBase): + return self._host.get_current_task_name() + return os.getenv("AVALON_TASK") + + def get_current_context(self): + if isinstance(self._host, HostBase): + return self._host.get_current_context() + return { + "project_name": self.project_name, + "asset_name": self.current_asset_name, + "task_name": self.current_task_name + } @property def system_settings(self): @@ -790,10 +804,9 @@ class AbstractTemplateBuilder(object): fill_data["root"] = anatomy.roots fill_data["project"] = { "name": project_name, - "code": anatomy["attributes"]["code"] + "code": anatomy.project_code, } - result = StringTemplate.format_template(path, fill_data) if result.solved: path = result.normalized() @@ -1599,7 +1612,7 @@ class PlaceholderLoadMixin(object): pass - def delete_placeholder(self, placeholder, failed): + def delete_placeholder(self, placeholder): """Called when all item population is done.""" self.log.debug("Clean up of placeholder is not implemented.") @@ -1705,9 +1718,10 @@ class PlaceholderCreateMixin(object): creator_plugin = self.builder.get_creators_by_name()[creator_name] # create subset name - project_name = legacy_io.Session["AVALON_PROJECT"] - task_name = legacy_io.Session["AVALON_TASK"] - asset_name = legacy_io.Session["AVALON_ASSET"] + context = self._builder.get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] if legacy_create: asset_doc = get_asset_by_name( @@ -1767,6 +1781,17 @@ class PlaceholderCreateMixin(object): self.post_placeholder_process(placeholder, failed) + if failed: + self.log.debug( + "Placeholder cleanup skipped due to failed placeholder " + "population." + ) + return + + if not placeholder.data.get("keep_placeholder", True): + self.delete_placeholder(placeholder) + + def create_failed(self, placeholder, creator_data): if hasattr(placeholder, "create_failed"): placeholder.create_failed(creator_data) @@ -1786,9 +1811,12 @@ class PlaceholderCreateMixin(object): representation. failed (bool): Loading of representation failed. """ - pass + def delete_placeholder(self, placeholder): + """Called when all item population is done.""" + self.log.debug("Clean up of placeholder is not implemented.") + def _before_instance_create(self, placeholder): """Can be overridden. 
Is called before instance is created.""" diff --git a/openpype/plugin.py b/openpype/plugin.py deleted file mode 100644 index 7e906b4451..0000000000 --- a/openpype/plugin.py +++ /dev/null @@ -1,128 +0,0 @@ -import functools -import warnings - -import pyblish.api - -# New location of orders: openpype.pipeline.publish.constants -# - can be imported as -# 'from openpype.pipeline.publish import ValidatePipelineOrder' -ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 -ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 -ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 -ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 - - -class PluginDeprecatedWarning(DeprecationWarning): - pass - - -def _deprecation_warning(item_name, warning_message): - warnings.simplefilter("always", PluginDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(item_name, warning_message), - category=PluginDeprecatedWarning, - stacklevel=4 - ) - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - _deprecation_warning(decorated_func.__name__, warning_message) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -# Classes just inheriting from pyblish classes -# - seems to be unused in code (not 100% sure) -# - they should be removed but because it is not clear if they're used -# we'll keep then and log deprecation warning -# Deprecated since 3.14.* will be removed in 3.16.* -class ContextPlugin(pyblish.api.ContextPlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.ContextPlugin'." - ) - super(ContextPlugin, self).__init__(*args, **kwargs) - - -# Deprecated since 3.14.* will be removed in 3.16.* -class InstancePlugin(pyblish.api.InstancePlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.InstancePlugin'." - ) - super(InstancePlugin, self).__init__(*args, **kwargs) - - -class Extractor(pyblish.api.InstancePlugin): - """Extractor base class. - - The extractor base class implements a "staging_dir" function used to - generate a temporary directory for an instance to extract to. - - This temporary directory is generated through `tempfile.mkdtemp()` - - """ - - order = 2.0 - - def staging_dir(self, instance): - """Provide a temporary directory in which to store extracted files - - Upon calling this method the staging directory is stored inside - the instance.data['stagingDir'] - """ - - from openpype.pipeline.publish import get_instance_staging_dir - - return get_instance_staging_dir(instance) - - -@deprecated("openpype.pipeline.publish.context_plugin_should_run") -def contextplugin_should_run(plugin, context): - """Return whether the ContextPlugin should run on the given context. 
- - This is a helper function to work around a bug pyblish-base#250 - Whenever a ContextPlugin sets specific families it will still trigger even - when no instances are present that have those families. - - This actually checks it correctly and returns whether it should run. - - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import context_plugin_should_run - - return context_plugin_should_run(plugin, context) diff --git a/openpype/plugins/actions/open_file_explorer.py b/openpype/plugins/actions/open_file_explorer.py new file mode 100644 index 0000000000..1568c41fbd --- /dev/null +++ b/openpype/plugins/actions/open_file_explorer.py @@ -0,0 +1,121 @@ +import os +import platform +import subprocess + +from string import Formatter +from openpype.client import ( + get_project, + get_asset_by_name, +) +from openpype.pipeline import ( + Anatomy, + LauncherAction, +) +from openpype.pipeline.template_data import get_template_data + + +class OpenTaskPath(LauncherAction): + name = "open_task_path" + label = "Explore here" + icon = "folder-open" + order = 500 + + def is_compatible(self, session): + """Return whether the action is compatible with the session""" + return bool(session.get("AVALON_ASSET")) + + def process(self, session, **kwargs): + from qtpy import QtCore, QtWidgets + + project_name = session["AVALON_PROJECT"] + asset_name = session["AVALON_ASSET"] + task_name = session.get("AVALON_TASK", None) + + path = self._get_workdir(project_name, asset_name, task_name) + if not path: + return + + app = QtWidgets.QApplication.instance() + ctrl_pressed = QtCore.Qt.ControlModifier & app.keyboardModifiers() + if ctrl_pressed: + # Copy path to clipboard + self.copy_path_to_clipboard(path) + else: + self.open_in_explorer(path) + + def _find_first_filled_path(self, path): + if not path: + return "" + + fields = set() + for item in Formatter().parse(path): + _, field_name, format_spec, conversion = item + if not field_name: + continue + conversion = "!{}".format(conversion) if conversion else "" + format_spec = ":{}".format(format_spec) if format_spec else "" + orig_key = "{{{}{}{}}}".format( + field_name, conversion, format_spec) + fields.add(orig_key) + + for field in fields: + path = path.split(field, 1)[0] + return path + + def _get_workdir(self, project_name, asset_name, task_name): + project = get_project(project_name) + asset = get_asset_by_name(project_name, asset_name) + + data = get_template_data(project, asset, task_name) + + anatomy = Anatomy(project_name) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + + # Remove any potential un-formatted parts of the path + valid_workdir = self._find_first_filled_path(workdir) + + # Path is not filled at all + if not valid_workdir: + raise AssertionError("Failed to calculate workdir.") + + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + + data.pop("task", None) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + valid_workdir = self._find_first_filled_path(workdir) + if valid_workdir: + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + raise AssertionError("Folder does not exist yet.") + + @staticmethod + def open_in_explorer(path): + platform_name = platform.system().lower() + if platform_name == "windows": + args = ["start", path] + elif platform_name == "darwin": + args = ["open", "-na", path] + elif platform_name == 
"linux": + args = ["xdg-open", path] + else: + raise RuntimeError(f"Unknown platform {platform.system()}") + # Make sure path is converted correctly for 'os.system' + os.system(subprocess.list2cmdline(args)) + + @staticmethod + def copy_path_to_clipboard(path): + from qtpy import QtWidgets + + path = path.replace("\\", "/") + print(f"Copied to clipboard: {path}") + app = QtWidgets.QApplication.instance() + assert app, "Must have running QApplication instance" + + # Set to Clipboard + clipboard = QtWidgets.QApplication.clipboard() + clipboard.setText(os.path.normpath(path)) diff --git a/openpype/plugins/load/copy_file.py b/openpype/plugins/load/copy_file.py index 163f56a83a..7fd56c8a6a 100644 --- a/openpype/plugins/load/copy_file.py +++ b/openpype/plugins/load/copy_file.py @@ -14,8 +14,9 @@ class CopyFile(load.LoaderPlugin): color = get_default_entity_icon_color() def load(self, context, name=None, namespace=None, data=None): - self.log.info("Added copy to clipboard: {0}".format(self.fname)) - self.copy_file_to_clipboard(self.fname) + path = self.filepath_from_context(context) + self.log.info("Added copy to clipboard: {0}".format(path)) + self.copy_file_to_clipboard(path) @staticmethod def copy_file_to_clipboard(path): diff --git a/openpype/plugins/load/copy_file_path.py b/openpype/plugins/load/copy_file_path.py index 569e5c8780..b055494e85 100644 --- a/openpype/plugins/load/copy_file_path.py +++ b/openpype/plugins/load/copy_file_path.py @@ -14,8 +14,9 @@ class CopyFilePath(load.LoaderPlugin): color = "#999999" def load(self, context, name=None, namespace=None, data=None): - self.log.info("Added file path to clipboard: {0}".format(self.fname)) - self.copy_path_to_clipboard(self.fname) + path = self.filepath_from_context(context) + self.log.info("Added file path to clipboard: {0}".format(path)) + self.copy_path_to_clipboard(path) @staticmethod def copy_path_to_clipboard(path): diff --git a/openpype/plugins/load/delivery.py b/openpype/plugins/load/delivery.py index 4bd4f6e9cf..3b493989bd 100644 --- a/openpype/plugins/load/delivery.py +++ b/openpype/plugins/load/delivery.py @@ -95,6 +95,12 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): template_label.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor)) template_label.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse) + renumber_frame = QtWidgets.QCheckBox() + + first_frame_start = QtWidgets.QSpinBox() + max_int = (1 << 32) // 2 + first_frame_start.setRange(0, max_int - 1) + root_line_edit = QtWidgets.QLineEdit() repre_checkboxes_layout = QtWidgets.QFormLayout() @@ -118,6 +124,8 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): input_layout.addRow("Selected representations", selected_label) input_layout.addRow("Delivery template", dropdown) input_layout.addRow("Template value", template_label) + input_layout.addRow("Renumber Frame", renumber_frame) + input_layout.addRow("Renumber start frame", first_frame_start) input_layout.addRow("Root", root_line_edit) input_layout.addRow("Representations", repre_checkboxes_layout) @@ -145,6 +153,8 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): self.selected_label = selected_label self.template_label = template_label self.dropdown = dropdown + self.first_frame_start = first_frame_start + self.renumber_frame = renumber_frame self.root_line_edit = root_line_edit self.progress_bar = progress_bar self.text_area = text_area @@ -181,6 +191,8 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): datetime_data = get_datetime_data() template_name = self.dropdown.currentText() format_dict = 
get_format_dict(self.anatomy, self.root_line_edit.text()) + renumber_frame = self.renumber_frame.isChecked() + frame_offset = self.first_frame_start.value() for repre in self._representations: if repre["name"] not in selected_repres: continue @@ -218,9 +230,31 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): src_paths.append(src_path) sources_and_frames = collect_frames(src_paths) + frames = set(sources_and_frames.values()) + frames.discard(None) + first_frame = None + if frames: + first_frame = min(frames) + for src_path, frame in sources_and_frames.items(): args[0] = src_path - if frame: + # Renumber frames + if renumber_frame and frame is not None: + # Calculate offset between + # first frame and current frame + # - '0' for first frame + offset = frame_offset - int(first_frame) + # Add offset to new frame start + dst_frame = int(frame) + offset + if dst_frame < 0: + msg = "Renumber frame has a smaller number than original frame" # noqa + report_items[msg].append(src_path) + self.log.warning("{} <{}>".format( + msg, dst_frame)) + continue + frame = dst_frame + + if frame is not None: anatomy_data["frame"] = frame new_report_items, uploaded = deliver_single_file(*args) report_items.update(new_report_items) diff --git a/openpype/plugins/load/open_djv.py b/openpype/plugins/load/open_djv.py index 9c36e7f405..5c679f6a51 100644 --- a/openpype/plugins/load/open_djv.py +++ b/openpype/plugins/load/open_djv.py @@ -33,9 +33,11 @@ class OpenInDJV(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - directory = os.path.dirname(self.fname) import clique + path = self.filepath_from_context(context) + directory = os.path.dirname(path) + pattern = clique.PATTERNS["frames"] files = os.listdir(directory) collections, remainder = clique.assemble( @@ -48,7 +50,7 @@ class OpenInDJV(load.LoaderPlugin): sequence = collections[0] first_image = list(sequence)[0] else: - first_image = self.fname + first_image = path filepath = os.path.normpath(os.path.join(directory, first_image)) self.log.info("Opening : {}".format(filepath)) diff --git a/openpype/plugins/load/open_file.py b/openpype/plugins/load/open_file.py index 00b2ecd7c5..5c4f4901d1 100644 --- a/openpype/plugins/load/open_file.py +++ b/openpype/plugins/load/open_file.py @@ -28,7 +28,7 @@ class OpenFile(load.LoaderPlugin): def load(self, context, name, namespace, data): - path = self.fname + path = self.filepath_from_context(context) if not os.path.exists(path): raise RuntimeError("File not found: {}".format(path)) diff --git a/openpype/plugins/publish/cleanup.py b/openpype/plugins/publish/cleanup.py index 573cd829e4..6c122ddf09 100644 --- a/openpype/plugins/publish/cleanup.py +++ b/openpype/plugins/publish/cleanup.py @@ -69,7 +69,7 @@ class CleanUp(pyblish.api.InstancePlugin): skip_cleanup_filepaths.add(os.path.normpath(path)) if self.remove_temp_renders: - self.log.info("Cleaning renders new...") + self.log.debug("Cleaning renders new...") self.clean_renders(instance, skip_cleanup_filepaths) if [ef for ef in self.exclude_families @@ -95,10 +95,12 @@ class CleanUp(pyblish.api.InstancePlugin): return if instance.data.get("stagingDir_persistent"): - self.log.info("Staging dir: %s should be persistent" % staging_dir) + self.log.debug( + "Staging dir {} should be persistent".format(staging_dir) + ) return - self.log.info("Removing staging directory {}".format(staging_dir)) + self.log.debug("Removing staging directory {}".format(staging_dir)) shutil.rmtree(staging_dir) def clean_renders(self, instance, skip_cleanup_filepaths): 
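# --- Editor's sketch (not part of the patch) --------------------------------
# The delivery dialog's frame renumbering above reduces to a single offset:
# the requested start frame minus the lowest collected frame. A minimal,
# hedged sketch of that arithmetic; 'sources_and_frames' mirrors the mapping
# produced by 'collect_frames' in the diff, all other names are illustrative.
def renumber_frames(sources_and_frames, frame_offset):
    frames = {frame for frame in sources_and_frames.values() if frame is not None}
    if not frames:
        return dict(sources_and_frames)
    # Offset so the lowest source frame lands exactly on 'frame_offset'
    offset = frame_offset - min(int(frame) for frame in frames)
    renumbered = {}
    for src_path, frame in sources_and_frames.items():
        dst_frame = None if frame is None else int(frame) + offset
        if dst_frame is not None and dst_frame < 0:
            # The dialog skips and reports such paths instead of raising
            continue
        renumbered[src_path] = dst_frame
    return renumbered
# Example: renumber_frames({"sh010.1001.exr": "1001"}, 1) -> {"sh010.1001.exr": 1}
# -----------------------------------------------------------------------------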
diff --git a/openpype/plugins/publish/cleanup_farm.py b/openpype/plugins/publish/cleanup_farm.py index 8052f13734..e655437ced 100644 --- a/openpype/plugins/publish/cleanup_farm.py +++ b/openpype/plugins/publish/cleanup_farm.py @@ -26,10 +26,10 @@ class CleanUpFarm(pyblish.api.ContextPlugin): # Skip process if is not in list of source hosts in which this # plugin should run if src_host_name not in self.allowed_hosts: - self.log.info(( + self.log.debug( "Source host \"{}\" is not in list of enabled hosts {}." - " Skipping" - ).format(str(src_host_name), str(self.allowed_hosts))) + " Skipping".format(src_host_name, self.allowed_hosts) + ) return self.log.debug("Preparing filepaths to remove") @@ -47,7 +47,7 @@ class CleanUpFarm(pyblish.api.ContextPlugin): dirpaths_to_remove.add(os.path.normpath(staging_dir)) if not dirpaths_to_remove: - self.log.info("Nothing to remove. Skipping") + self.log.debug("Nothing to remove. Skipping") return self.log.debug("Filepaths to remove are:\n{}".format( diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py index 128ad90b4f..b4f4d6a16a 100644 --- a/openpype/plugins/publish/collect_anatomy_instance_data.py +++ b/openpype/plugins/publish/collect_anatomy_instance_data.py @@ -32,6 +32,7 @@ from openpype.client import ( get_subsets, get_last_versions ) +from openpype.pipeline.version_start import get_versioning_start class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): @@ -187,25 +188,13 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): project_task_types = project_doc["config"]["tasks"] for instance in context: - if self.follow_workfile_version: - version_number = context.data('version') - else: - version_number = instance.data.get("version") - # If version is not specified for instance or context - if version_number is None: - # TODO we should be able to change default version by studio - # preferences (like start with version number `0`) - version_number = 1 - # use latest version (+1) if already any exist - latest_version = instance.data["latestVersion"] - if latest_version is not None: - version_number += int(latest_version) - anatomy_updates = { "asset": instance.data["asset"], + "folder": { + "name": instance.data["asset"], + }, "family": instance.data["family"], "subset": instance.data["subset"], - "version": version_number } # Hierarchy @@ -225,6 +214,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): anatomy_updates["parent"] = parent_name # Task + task_type = None task_name = instance.data.get("task") if task_name: asset_tasks = asset_doc["data"]["tasks"] @@ -240,6 +230,30 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): "short": task_code } + # Define version + if self.follow_workfile_version: + version_number = context.data('version') + else: + version_number = instance.data.get("version") + + # use latest version (+1) if already any exist + if version_number is None: + latest_version = instance.data["latestVersion"] + if latest_version is not None: + version_number = int(latest_version) + 1 + + # If version is not specified for instance or context + if version_number is None: + version_number = get_versioning_start( + context.data["projectName"], + instance.context.data["hostName"], + task_name=task_name, + task_type=task_type, + family=instance.data["family"], + subset=instance.data["subset"] + ) + anatomy_updates["version"] = version_number + # Additional data resolution_width = instance.data.get("resolutionWidth") if 
resolution_width: diff --git a/openpype/plugins/publish/collect_audio.py b/openpype/plugins/publish/collect_audio.py index 3a0ddb3281..6aaadfc568 100644 --- a/openpype/plugins/publish/collect_audio.py +++ b/openpype/plugins/publish/collect_audio.py @@ -53,8 +53,8 @@ class CollectAudio(pyblish.api.ContextPlugin): ): # Skip instances that already have audio filled if instance.data.get("audio"): - self.log.info( - "Skipping Audio collecion. It is already collected" + self.log.debug( + "Skipping Audio collection. It is already collected" ) continue filtered_instances.append(instance) @@ -70,7 +70,7 @@ class CollectAudio(pyblish.api.ContextPlugin): instances_by_asset_name[asset_name].append(instance) asset_names = set(instances_by_asset_name.keys()) - self.log.info(( + self.log.debug(( "Searching for audio subset '{subset}' in assets {assets}" ).format( subset=self.audio_subset_name, @@ -100,7 +100,7 @@ class CollectAudio(pyblish.api.ContextPlugin): "offset": 0, "filename": repre_path }] - self.log.info("Audio Data added to instance ...") + self.log.debug("Audio Data added to instance ...") def query_representations(self, project_name, asset_names): """Query representations related to audio subsets for passed assets. diff --git a/openpype/plugins/publish/collect_current_context.py b/openpype/plugins/publish/collect_current_context.py index 7e42700d7d..8b12a3f77f 100644 --- a/openpype/plugins/publish/collect_current_context.py +++ b/openpype/plugins/publish/collect_current_context.py @@ -6,7 +6,7 @@ Provides: """ import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_context class CollectCurrentContext(pyblish.api.ContextPlugin): @@ -19,29 +19,32 @@ class CollectCurrentContext(pyblish.api.ContextPlugin): label = "Collect Current context" def process(self, context): - # Make sure 'legacy_io' is intalled - legacy_io.install() - # Check if values are already set project_name = context.data.get("projectName") asset_name = context.data.get("asset") task_name = context.data.get("task") + + current_context = get_current_context() if not project_name: - project_name = legacy_io.current_project() - context.data["projectName"] = project_name + context.data["projectName"] = current_context["project_name"] if not asset_name: - asset_name = legacy_io.Session.get("AVALON_ASSET") - context.data["asset"] = asset_name + context.data["asset"] = current_context["asset_name"] if not task_name: - task_name = legacy_io.Session.get("AVALON_TASK") - context.data["task"] = task_name + context.data["task"] = current_context["task_name"] # QUESTION should we be explicit with keys? 
(the same on instances) # - 'asset' -> 'assetName' # - 'task' -> 'taskName' self.log.info(( - "Collected project context\nProject: {}\nAsset: {}\nTask: {}" - ).format(project_name, asset_name, task_name)) + "Collected project context\n" + "Project: {project_name}\n" + "Asset: {asset_name}\n" + "Task: {task_name}" + ).format( + project_name=context.data["projectName"], + asset_name=context.data["asset"], + task_name=context.data["task"] + )) 
diff --git a/openpype/plugins/publish/collect_farm_target.py new file mode 100644 index 0000000000..adcd842b48 --- /dev/null +++ b/openpype/plugins/publish/collect_farm_target.py @@ -0,0 +1,35 @@ +# -*- coding: utf-8 -*- +import pyblish.api + + +class CollectFarmTarget(pyblish.api.InstancePlugin): + """Collects the render target for the instance + """ + + order = pyblish.api.CollectorOrder + 0.499 + label = "Collect Farm Target" + targets = ["local"] + + def process(self, instance): + if not instance.data.get("farm"): + return + + context = instance.context + + farm_name = "" + op_modules = context.data.get("openPypeModules") + + for farm_renderer in ["deadline", "royalrender", "muster"]: + op_module = op_modules.get(farm_renderer, False) + + if op_module and op_module.enabled: + farm_name = farm_renderer + elif not op_module: + self.log.error("Cannot get OpenPype {0} module.".format( + farm_renderer)) + + if farm_name: + self.log.debug("Collected render target: {0}".format(farm_name)) + instance.data["toBeRenderedOn"] = farm_name + else: + raise AssertionError("No OpenPype renderer module found") 
diff --git a/openpype/plugins/publish/collect_hierarchy.py index 687397be8a..b5fd1e4bb9 100644 --- a/openpype/plugins/publish/collect_hierarchy.py +++ b/openpype/plugins/publish/collect_hierarchy.py @@ -24,7 +24,7 @@ class CollectHierarchy(pyblish.api.ContextPlugin): final_context[project_name]['entity_type'] = 'Project' for instance in context: - self.log.info("Processing instance: `{}` ...".format(instance)) + self.log.debug("Processing instance: `{}` ...".format(instance)) # shot data dict shot_data = {} 
diff --git a/openpype/plugins/publish/collect_input_representations_to_versions.py index 54a3214647..2b8c745d3d 100644 --- a/openpype/plugins/publish/collect_input_representations_to_versions.py +++ b/openpype/plugins/publish/collect_input_representations_to_versions.py @@ -46,3 +46,10 @@ class CollectInputRepresentationsToVersions(pyblish.api.ContextPlugin): version_id = representation_id_to_version_id.get(repre_id) if version_id: input_versions.append(version_id) + else: + self.log.debug( + "Representation id {} skipped because its version is " + "not found in current project. 
Likely it is loaded " + "from a library project or uses a deleted " + "representation or version.".format(repre_id) + ) diff --git a/openpype/plugins/publish/collect_rendered_files.py b/openpype/plugins/publish/collect_rendered_files.py index 6c8d1e9ca5..8a5a5a83f1 100644 --- a/openpype/plugins/publish/collect_rendered_files.py +++ b/openpype/plugins/publish/collect_rendered_files.py @@ -91,25 +91,28 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): # now we can just add instances from json file and we are done for instance_data in data.get("instances"): - self.log.info(" - processing instance for {}".format( + self.log.debug(" - processing instance for {}".format( instance_data.get("subset"))) instance = self._context.create_instance( instance_data.get("subset") ) - self.log.info("Filling stagingDir...") + self.log.debug("Filling stagingDir...") self._fill_staging_dir(instance_data, anatomy) instance.data.update(instance_data) # stash render job id for later validation instance.data["render_job_id"] = data.get("job").get("_id") - + staging_dir_persistent = instance.data.get( + "stagingDir_persistent", False + ) representations = [] for repre_data in instance_data.get("representations") or []: self._fill_staging_dir(repre_data, anatomy) representations.append(repre_data) - add_repre_files_for_cleanup(instance, repre_data) + if not staging_dir_persistent: + add_repre_files_for_cleanup(instance, repre_data) instance.data["representations"] = representations @@ -121,9 +124,11 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): "offset": 0 }] }) - self.log.info( + self.log.debug( f"Adding audio to instance: {instance.data['audio']}") + return staging_dir_persistent + def process(self, context): self._context = context @@ -137,11 +142,11 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): # Using already collected Anatomy anatomy = context.data["anatomy"] - self.log.info("Getting root setting for project \"{}\"".format( + self.log.debug("Getting root setting for project \"{}\"".format( anatomy.project_name )) - self.log.info("anatomy: {}".format(anatomy.roots)) + self.log.debug("anatomy: {}".format(anatomy.roots)) try: session_is_set = False for path in paths: @@ -156,13 +161,16 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): if remapped: session_data["AVALON_WORKDIR"] = remapped - self.log.info("Setting session using data from file") + self.log.debug("Setting session using data from file") legacy_io.Session.update(session_data) os.environ.update(session_data) session_is_set = True - self._process_path(data, anatomy) - context.data["cleanupFullPaths"].append(path) - context.data["cleanupEmptyDirs"].append(os.path.dirname(path)) + staging_dir_persistent = self._process_path(data, anatomy) + if not staging_dir_persistent: + context.data["cleanupFullPaths"].append(path) + context.data["cleanupEmptyDirs"].append( + os.path.dirname(path) + ) except Exception as e: self.log.error(e, exc_info=True) raise Exception("Error") from e diff --git a/openpype/plugins/publish/collect_resources_path.py b/openpype/plugins/publish/collect_resources_path.py index f96dd0ae18..57a612c5ae 100644 --- a/openpype/plugins/publish/collect_resources_path.py +++ b/openpype/plugins/publish/collect_resources_path.py @@ -62,7 +62,8 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): "effect", "staticMesh", "skeletalMesh", - "xgen" + "xgen", + "yeticacheUE" ] def process(self, instance): diff --git a/openpype/plugins/publish/collect_scene_version.py 
b/openpype/plugins/publish/collect_scene_version.py index cd3231a07d..7920c1e82b 100644 --- a/openpype/plugins/publish/collect_scene_version.py +++ b/openpype/plugins/publish/collect_scene_version.py @@ -3,6 +3,7 @@ import pyblish.api from openpype.lib import get_version_from_path from openpype.tests.lib import is_in_tests +from openpype.pipeline import KnownPublishError class CollectSceneVersion(pyblish.api.ContextPlugin): @@ -38,11 +39,15 @@ class CollectSceneVersion(pyblish.api.ContextPlugin): if ( os.environ.get("HEADLESS_PUBLISH") and not is_in_tests() - and context.data["hostName"] in self.skip_hosts_headless_publish): + and context.data["hostName"] in self.skip_hosts_headless_publish + ): self.log.debug("Skipping for headless publishing") return - assert context.data.get('currentFile'), "Cannot get current file" + if not context.data.get('currentFile'): + raise KnownPublishError("Cannot get current workfile path. " + "Make sure your scene is saved.") + filename = os.path.basename(context.data.get('currentFile')) if '' in filename: @@ -53,8 +58,11 @@ class CollectSceneVersion(pyblish.api.ContextPlugin): ) version = get_version_from_path(filename) - assert version, "Cannot determine version" + if version is None: + raise KnownPublishError("Unable to retrieve version number from " + "filename: {}".format(filename)) - rootVersion = int(version) - context.data['version'] = rootVersion - self.log.info('Scene Version: %s' % context.data.get('version')) + context.data['version'] = int(version) + self.log.debug( + "Collected scene version: {}".format(context.data.get('version')) + ) diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index e67739e842..dc8aab6ce4 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -52,8 +52,9 @@ class ExtractBurnin(publish.Extractor): "photoshop", "flame", "houdini", - "max" - # "resolve" + "max", + "blender", + "unreal" ] optional = True @@ -82,7 +83,7 @@ class ExtractBurnin(publish.Extractor): return if not instance.data.get("representations"): - self.log.info( + self.log.debug( "Instance does not have filled representations. Skipping") return @@ -134,11 +135,11 @@ class ExtractBurnin(publish.Extractor): burnin_defs, repre["tags"] ) if not repre_burnin_defs: - self.log.info(( + self.log.debug( "Skipped representation. All burnin definitions from" - " selected profile does not match to representation's" - " tags. \"{}\"" - ).format(str(repre["tags"]))) + " selected profile do not match to representation's" + " tags. \"{}\"".format(repre["tags"]) + ) continue filtered_repres.append((repre, repre_burnin_defs)) @@ -163,7 +164,7 @@ class ExtractBurnin(publish.Extractor): logger=self.log) if not profile: - self.log.info(( + self.log.debug(( "Skipped instance. None of profiles in presets are for" " Host: \"{}\" | Families: \"{}\" | Task \"{}\"" " | Task type \"{}\" | Subset \"{}\" " @@ -175,7 +176,7 @@ class ExtractBurnin(publish.Extractor): # Pre-filter burnin definitions by instance families burnin_defs = self.filter_burnins_defs(profile, instance) if not burnin_defs: - self.log.info(( + self.log.debug(( "Skipped instance. 
Burnin definitions are not set for profile" " Host: \"{}\" | Families: \"{}\" | Task \"{}\"" " | Profile \"{}\"" @@ -222,10 +223,10 @@ class ExtractBurnin(publish.Extractor): # If result is None the requirement of conversion can't be # determined if do_convert is None: - self.log.info(( + self.log.debug( "Can't determine if representation requires conversion." " Skipped." - )) + ) continue # Do conversion if needed diff --git a/openpype/plugins/publish/extract_color_transcode.py b/openpype/plugins/publish/extract_color_transcode.py index f7c8af9318..dbf1b6c8a6 100644 --- a/openpype/plugins/publish/extract_color_transcode.py +++ b/openpype/plugins/publish/extract_color_transcode.py @@ -320,7 +320,7 @@ class ExtractOIIOTranscode(publish.Extractor): logger=self.log) if not profile: - self.log.info(( + self.log.debug(( "Skipped instance. None of profiles in presets are for" " Host: \"{}\" | Families: \"{}\" | Task \"{}\"" " | Task type \"{}\" | Subset \"{}\" " diff --git a/openpype/plugins/publish/extract_colorspace_data.py b/openpype/plugins/publish/extract_colorspace_data.py index 363df28fb5..8873dcd637 100644 --- a/openpype/plugins/publish/extract_colorspace_data.py +++ b/openpype/plugins/publish/extract_colorspace_data.py @@ -30,7 +30,7 @@ class ExtractColorspaceData(publish.Extractor, def process(self, instance): representations = instance.data.get("representations") if not representations: - self.log.info("No representations at instance : `{}`".format( + self.log.debug("No representations at instance : `{}`".format( instance)) return diff --git a/openpype/plugins/publish/extract_hierarchy_avalon.py b/openpype/plugins/publish/extract_hierarchy_avalon.py index 493780645c..d70f0cbdd7 100644 --- a/openpype/plugins/publish/extract_hierarchy_avalon.py +++ b/openpype/plugins/publish/extract_hierarchy_avalon.py @@ -1,6 +1,7 @@ import collections from copy import deepcopy import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_assets, get_archived_assets @@ -16,8 +17,11 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): families = ["clip", "shot"] def process(self, context): + if AYON_SERVER_ENABLED: + return + if "hierarchyContext" not in context.data: - self.log.info("skipping IntegrateHierarchyToAvalon") + self.log.debug("skipping ExtractHierarchyToAvalon") return if not legacy_io.Session: diff --git a/openpype/plugins/publish/extract_hierarchy_to_ayon.py b/openpype/plugins/publish/extract_hierarchy_to_ayon.py new file mode 100644 index 0000000000..0d9131718b --- /dev/null +++ b/openpype/plugins/publish/extract_hierarchy_to_ayon.py @@ -0,0 +1,277 @@ +import collections +import copy +import json +import uuid +import pyblish.api + +from ayon_api import slugify_string +from ayon_api.entity_hub import EntityHub + +from openpype import AYON_SERVER_ENABLED +from openpype.client import get_assets +from openpype.pipeline.template_data import ( + get_asset_template_data, + get_task_template_data, +) + + +def _default_json_parse(value): + return str(value) + + +class ExtractHierarchyToAYON(pyblish.api.ContextPlugin): + """Create entities in AYON based on collected data.""" + + order = pyblish.api.ExtractorOrder - 0.01 + label = "Extract Hierarchy To AYON" + families = ["clip", "shot"] + + def process(self, context): + if not AYON_SERVER_ENABLED: + return + + hierarchy_context = context.data.get("hierarchyContext") + if not hierarchy_context: + self.log.debug("Skipping ExtractHierarchyToAYON") + return + + project_name = context.data["projectName"] + 
self._create_hierarchy(context, project_name) + self._fill_instance_entities(context, project_name) + + def _fill_instance_entities(self, context, project_name): + instances_by_asset_name = collections.defaultdict(list) + for instance in context: + if instance.data.get("publish") is False: + continue + + instance_entity = instance.data.get("assetEntity") + if instance_entity: + continue + + # Skip if instance asset does not match + instance_asset_name = instance.data.get("asset") + instances_by_asset_name[instance_asset_name].append(instance) + + project_doc = context.data["projectEntity"] + asset_docs = get_assets( + project_name, asset_names=instances_by_asset_name.keys() + ) + asset_docs_by_name = { + asset_doc["name"]: asset_doc + for asset_doc in asset_docs + } + for asset_name, instances in instances_by_asset_name.items(): + asset_doc = asset_docs_by_name[asset_name] + asset_data = get_asset_template_data(asset_doc, project_name) + for instance in instances: + task_name = instance.data.get("task") + template_data = get_task_template_data( + project_doc, asset_doc, task_name) + template_data.update(copy.deepcopy(asset_data)) + + instance.data["anatomyData"].update(template_data) + instance.data["assetEntity"] = asset_doc + + def _create_hierarchy(self, context, project_name): + hierarchy_context = self._filter_hierarchy(context) + if not hierarchy_context: + self.log.debug("All folders were filtered out") + return + + self.log.debug("Hierarchy_context: {}".format( + json.dumps(hierarchy_context, default=_default_json_parse) + )) + + entity_hub = EntityHub(project_name) + project = entity_hub.project_entity + + hierarchy_match_queue = collections.deque() + hierarchy_match_queue.append((project, hierarchy_context)) + while hierarchy_match_queue: + item = hierarchy_match_queue.popleft() + entity, entity_info = item + + # Update attributes of entities + for attr_name, attr_value in entity_info["attributes"].items(): + if attr_name in entity.attribs: + entity.attribs[attr_name] = attr_value + + # Check if info has any children to sync + children_info = entity_info["children"] + tasks_info = entity_info["tasks"] + if not tasks_info and not children_info: + continue + + # Prepare children by lowered name to easily find matching entities + children_by_low_name = { + child.name.lower(): child + for child in entity.children + } + + # Create tasks if are not available + for task_info in tasks_info: + task_label = task_info["name"] + task_name = slugify_string(task_label) + if task_name == task_label: + task_label = None + task_entity = children_by_low_name.get(task_name.lower()) + # TODO propagate updates of tasks if there are any + # TODO check if existing entity have 'task' type + if task_entity is None: + task_entity = entity_hub.add_new_task( + task_info["type"], + parent_id=entity.id, + name=task_name + ) + + if task_label: + task_entity.label = task_label + + # Create/Update sub-folders + for child_info in children_info: + child_label = child_info["name"] + child_name = slugify_string(child_label) + if child_name == child_label: + child_label = None + # TODO check if existing entity have 'folder' type + child_entity = children_by_low_name.get(child_name.lower()) + if child_entity is None: + child_entity = entity_hub.add_new_folder( + child_info["entity_type"], + parent_id=entity.id, + name=child_name + ) + + if child_label: + child_entity.label = child_label + + # Add folder to queue + hierarchy_match_queue.append((child_entity, child_info)) + + entity_hub.commit_changes() + + def 
_filter_hierarchy(self, context): + """Filter hierarchy context by active folder names. + + Hierarchy context is filtered to folder names on active instances. + + Change hierarchy context to unified structure which suits logic in + entity creation. + + Output example: + { + "name": "MyProject", + "entity_type": "Project", + "attributes": {}, + "tasks": [], + "children": [ + { + "name": "seq_01", + "entity_type": "Sequence", + "attributes": {}, + "tasks": [], + "children": [ + ... + ] + }, + ... + ] + } + + Todos: + Change how active folder are defined (names won't be enough in + AYON). + + Args: + context (pyblish.api.Context): Pyblish context. + + Returns: + dict[str, Any]: Hierarchy structure filtered by folder names. + """ + + # filter only the active publishing instances + active_folder_names = set() + for instance in context: + if instance.data.get("publish") is not False: + active_folder_names.add(instance.data.get("asset")) + + active_folder_names.discard(None) + + self.log.debug("Active folder names: {}".format(active_folder_names)) + if not active_folder_names: + return None + + project_item = None + project_children_context = None + for key, value in context.data["hierarchyContext"].items(): + project_item = copy.deepcopy(value) + project_children_context = project_item.pop("childs", None) + project_item["name"] = key + project_item["tasks"] = [] + project_item["attributes"] = project_item.pop( + "custom_attributes", {} + ) + project_item["children"] = [] + + if not project_children_context: + return None + + project_id = uuid.uuid4().hex + items_by_id = {project_id: project_item} + parent_id_by_item_id = {project_id: None} + valid_ids = set() + + hierarchy_queue = collections.deque() + hierarchy_queue.append((project_id, project_children_context)) + while hierarchy_queue: + queue_item = hierarchy_queue.popleft() + parent_id, children_context = queue_item + if not children_context: + continue + + for asset_name, asset_info in children_context.items(): + if ( + asset_name not in active_folder_names + and not asset_info.get("childs") + ): + continue + item_id = uuid.uuid4().hex + new_item = copy.deepcopy(asset_info) + new_item["name"] = asset_name + new_item["children"] = [] + new_children_context = new_item.pop("childs", None) + tasks = new_item.pop("tasks", {}) + task_items = [] + for task_name, task_info in tasks.items(): + task_info["name"] = task_name + task_items.append(task_info) + new_item["tasks"] = task_items + new_item["attributes"] = new_item.pop("custom_attributes", {}) + + items_by_id[item_id] = new_item + parent_id_by_item_id[item_id] = parent_id + + if asset_name in active_folder_names: + valid_ids.add(item_id) + hierarchy_queue.append((item_id, new_children_context)) + + if not valid_ids: + return None + + for item_id in set(valid_ids): + parent_id = parent_id_by_item_id[item_id] + while parent_id is not None and parent_id not in valid_ids: + valid_ids.add(parent_id) + parent_id = parent_id_by_item_id[parent_id] + + valid_ids.discard(project_id) + for item_id in valid_ids: + parent_id = parent_id_by_item_id[item_id] + item = items_by_id[item_id] + parent_item = items_by_id[parent_id] + parent_item["children"].append(item) + + if not project_item["children"]: + return None + return project_item diff --git a/openpype/plugins/publish/extract_otio_audio_tracks.py b/openpype/plugins/publish/extract_otio_audio_tracks.py index e19b7eeb13..4f17731452 100644 --- a/openpype/plugins/publish/extract_otio_audio_tracks.py +++ 
b/openpype/plugins/publish/extract_otio_audio_tracks.py @@ -1,7 +1,7 @@ import os import pyblish from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess ) import tempfile @@ -20,9 +20,6 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): label = "Extract OTIO Audio Tracks" hosts = ["hiero", "resolve", "flame"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - def process(self, context): """Convert otio audio track's content to audio representations @@ -91,13 +88,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): # temp audio file audio_fpath = self.create_temp_file(name) - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-ss", str(start_sec), "-t", str(duration_sec), "-i", audio_file, audio_fpath - ] + ) # run subprocess self.log.debug("Executing: {}".format(" ".join(cmd))) @@ -210,13 +207,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): max_duration_sec = max(end_secs) # create empty cmd - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=48000", "-t", str(max_duration_sec), empty_fpath - ] + ) # generate empty with ffmpeg # run subprocess @@ -295,7 +292,7 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): filters_tmp_filepath = tmp_file.name tmp_file.write(",".join(filters)) - args = [self.ffmpeg_path] + args = get_ffmpeg_tool_args("ffmpeg") args.extend(input_args) args.extend([ "-filter_complex_script", filters_tmp_filepath, diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 9ebcad2af1..699207df8a 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -20,7 +20,7 @@ import opentimelineio as otio from pyblish import api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -338,8 +338,6 @@ class ExtractOTIOReview(publish.Extractor): Returns: otio.time.TimeRange: trimmed available range """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path and frame start to destination output_path, out_frame_start = self._get_ffmpeg_output() @@ -348,7 +346,7 @@ class ExtractOTIOReview(publish.Extractor): out_frame_start += end_offset # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") input_extension = None if sequence: diff --git a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index 70726338aa..67ff6c538c 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -11,7 +11,7 @@ from copy import deepcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -75,14 +75,12 @@ class ExtractOTIOTrimmingVideo(publish.Extractor): otio_range (opentime.TimeRange): range to trim to """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path to destination output_path = self._get_ffmpeg_output(input_file_path) # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") video_path = input_file_path frame_start = otio_range.start_time.value diff --git a/openpype/plugins/publish/extract_review.py 
b/openpype/plugins/publish/extract_review.py index f053d1b500..0ae941511c 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -3,6 +3,7 @@ import re import copy import json import shutil +import subprocess from abc import ABCMeta, abstractmethod import six @@ -11,7 +12,7 @@ import speedcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, filter_profiles, path_to_subprocess_arg, run_subprocess, @@ -20,6 +21,7 @@ from openpype.lib.transcoding import ( IMAGE_EXTENSIONS, get_ffprobe_streams, should_convert_for_ffmpeg, + get_review_layer_name, convert_input_paths_for_ffmpeg, get_transcode_temp_directory, ) @@ -72,9 +74,6 @@ class ExtractReview(pyblish.api.InstancePlugin): alpha_exts = ["exr", "png", "dpx"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Preset attributes profiles = None @@ -268,6 +267,8 @@ class ExtractReview(pyblish.api.InstancePlugin): )) continue + layer_name = get_review_layer_name(first_input_path) + # Do conversion if needed # - change staging dir of source representation # - must be set back after output definitions processing @@ -286,7 +287,8 @@ class ExtractReview(pyblish.api.InstancePlugin): instance, repre, src_repre_staging_dir, - filtered_output_defs + filtered_output_defs, + layer_name ) finally: @@ -300,7 +302,12 @@ class ExtractReview(pyblish.api.InstancePlugin): shutil.rmtree(new_staging_dir) def _render_output_definitions( - self, instance, repre, src_repre_staging_dir, output_definitions + self, + instance, + repre, + src_repre_staging_dir, + output_definitions, + layer_name ): fill_data = copy.deepcopy(instance.data["anatomyData"]) for _output_def in output_definitions: @@ -372,7 +379,12 @@ class ExtractReview(pyblish.api.InstancePlugin): try: # temporary until oiiotool is supported cross platform ffmpeg_args = self._ffmpeg_arguments( - output_def, instance, new_repre, temp_data, fill_data + output_def, + instance, + new_repre, + temp_data, + fill_data, + layer_name, ) except ZeroDivisionError: # TODO recalculate width and height using OIIO before @@ -533,7 +545,13 @@ class ExtractReview(pyblish.api.InstancePlugin): } def _ffmpeg_arguments( - self, output_def, instance, new_repre, temp_data, fill_data + self, + output_def, + instance, + new_repre, + temp_data, + fill_data, + layer_name ): """Prepares ffmpeg arguments for expected extraction. 
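# --- Editor's sketch (not part of the patch) --------------------------------
# How the new 'layer_name' argument threaded through ExtractReview above is
# consumed: if the source (typically a multi-layer EXR) exposes a review
# layer, ffmpeg's '-layer' input option selects it. The helper names come
# from the diff ('get_ffmpeg_tool_args', 'get_review_layer_name'); the
# wrapper function itself is illustrative only.
from openpype.lib import get_ffmpeg_tool_args
from openpype.lib.transcoding import get_review_layer_name


def build_review_input_args(input_path):
    # 'get_ffmpeg_tool_args' returns the executable plus defaults as a list
    args = get_ffmpeg_tool_args("ffmpeg")
    layer_name = get_review_layer_name(input_path)
    if layer_name:
        # Must precede '-i' so it applies to the input stream
        args.extend(["-layer", layer_name])
    args.extend(["-i", input_path])
    return args
# -----------------------------------------------------------------------------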
@@ -601,6 +619,10 @@ class ExtractReview(pyblish.api.InstancePlugin): duration_seconds = float(output_frames_len / temp_data["fps"]) + # Define which layer should be used + if layer_name: + ffmpeg_input_args.extend(["-layer", layer_name]) + if temp_data["input_is_sequence"]: # Set start frame of input sequence (just frame in filename) # - definition of input filepath @@ -787,8 +809,9 @@ class ExtractReview(pyblish.api.InstancePlugin): arg = arg.replace(identifier, "").strip() audio_filters.append(arg) - all_args = [] - all_args.append(path_to_subprocess_arg(self.ffmpeg_path)) + all_args = [ + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")) + ] all_args.extend(input_args) if video_filters: all_args.append("-filter:v") diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index fca3d96ca6..d89fbb90c4 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,5 +1,6 @@ import os import re +import subprocess from pprint import pformat import pyblish.api @@ -7,13 +8,14 @@ import pyblish.api from openpype.lib import ( path_to_subprocess_arg, run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_data, get_ffprobe_streams, get_ffmpeg_codec_args, get_ffmpeg_format_args, ) from openpype.pipeline import publish +from openpype.pipeline.publish import KnownPublishError class ExtractReviewSlate(publish.Extractor): @@ -45,9 +47,7 @@ class ExtractReviewSlate(publish.Extractor): "*": inst_data["slateFrame"] } - self.log.info("_ slates_data: {}".format(pformat(slates_data))) - - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + self.log.debug("_ slates_data: {}".format(pformat(slates_data))) if "reviewToWidth" in inst_data: use_legacy_code = True @@ -77,7 +77,7 @@ class ExtractReviewSlate(publish.Extractor): ) # get slate data slate_path = self._get_slate_path(input_file, slates_data) - self.log.info("_ slate_path: {}".format(slate_path)) + self.log.debug("_ slate_path: {}".format(slate_path)) slate_width, slate_height = self._get_slates_resolution(slate_path) @@ -86,14 +86,18 @@ class ExtractReviewSlate(publish.Extractor): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) = self._get_video_metadata(streams) + if input_pixel_aspect: + pixel_aspect = input_pixel_aspect # Raise exception of any stream didn't define input resolution if input_width is None: - raise AssertionError(( + raise KnownPublishError( "FFprobe couldn't read resolution from input file: \"{}\"" - ).format(input_path)) + .format(input_path) + ) ( audio_codec, @@ -260,7 +264,7 @@ class ExtractReviewSlate(publish.Extractor): _remove_at_end.append(slate_v_path) slate_args = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")), " ".join(input_args), " ".join(output_args) ] @@ -281,7 +285,6 @@ class ExtractReviewSlate(publish.Extractor): os.path.splitext(slate_v_path)) _remove_at_end.append(slate_silent_path) self._create_silent_slate( - ffmpeg_path, slate_v_path, slate_silent_path, audio_codec, @@ -309,12 +312,12 @@ class ExtractReviewSlate(publish.Extractor): "[0:v] [1:v] concat=n=2:v=1:a=0 [v]", "-map", '[v]' ] - concat_args = [ - ffmpeg_path, + concat_args = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", slate_v_path, "-i", input_path, - ] + ) concat_args.extend(fmap) if offset_timecode: concat_args.extend(["-timecode", offset_timecode]) @@ -421,6 +424,7 @@ class 
ExtractReviewSlate(publish.Extractor): input_width = None input_height = None input_frame_rate = None + input_pixel_aspect = None for stream in streams: if stream.get("codec_type") != "video": continue @@ -438,6 +442,16 @@ class ExtractReviewSlate(publish.Extractor): input_width = width input_height = height + input_pixel_aspect = stream.get("sample_aspect_ratio") + if input_pixel_aspect is not None: + try: + input_pixel_aspect = float( + eval(str(input_pixel_aspect).replace(':', '/'))) + except Exception: + self.log.debug( + "__Converting pixel aspect to float failed: {}".format( + input_pixel_aspect)) + tags = stream.get("tags") or {} input_timecode = tags.get("timecode") or "" @@ -448,7 +462,8 @@ class ExtractReviewSlate(publish.Extractor): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) def _get_audio_metadata(self, streams): @@ -490,7 +505,6 @@ class ExtractReviewSlate(publish.Extractor): def _create_silent_slate( self, - ffmpeg_path, src_path, dst_path, audio_codec, @@ -515,8 +529,8 @@ class ExtractReviewSlate(publish.Extractor): one_frame_duration = str(int(one_frame_duration)) + "us" self.log.debug("One frame duration is {}".format(one_frame_duration)) - slate_silent_args = [ - ffmpeg_path, + slate_silent_args = get_ffmpeg_tool_args( + "ffmpeg", "-i", src_path, "-f", "lavfi", "-i", "anullsrc=r={}:cl={}:d={}".format( @@ -531,7 +545,7 @@ class ExtractReviewSlate(publish.Extractor): "-shortest", "-y", dst_path - ] + ) # run slate generation subprocess self.log.debug("Silent Slate Executing: {}".format( " ".join(slate_silent_args) diff --git a/openpype/plugins/publish/extract_scanline_exr.py b/openpype/plugins/publish/extract_scanline_exr.py index 0e4c0ca65f..747155689b 100644 --- a/openpype/plugins/publish/extract_scanline_exr.py +++ b/openpype/plugins/publish/extract_scanline_exr.py @@ -5,7 +5,12 @@ import shutil import pyblish.api -from openpype.lib import run_subprocess, get_oiio_tools_path +from openpype.lib import ( + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) +from openpype.pipeline import KnownPublishError class ExtractScanlineExr(pyblish.api.InstancePlugin): @@ -24,32 +29,32 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): representations_new = [] for repre in representations: - self.log.info( + self.log.debug( "Processing representation {}".format(repre.get("name"))) tags = repre.get("tags", []) if "toScanline" not in tags: - self.log.info(" - missing toScanline tag") + self.log.debug(" - missing toScanline tag") continue # run only on exrs if repre.get("ext") != "exr": - self.log.info("- not EXR files") + self.log.debug("- not EXR files") continue if not isinstance(repre['files'], (list, tuple)): input_files = [repre['files']] - self.log.info("We have a single frame") + self.log.debug("We have a single frame") else: input_files = repre['files'] - self.log.info("We have a sequence") + self.log.debug("We have a sequence") stagingdir = os.path.normpath(repre.get("stagingDir")) - oiio_tool_path = get_oiio_tools_path() - if not os.path.exists(oiio_tool_path): - self.log.error( - "OIIO tool not found in {}".format(oiio_tool_path)) - raise AssertionError("OIIO tool not found") + try: + oiio_tool_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + self.log.error("OIIO tool not found.") + raise KnownPublishError("OIIO tool not found") for file in input_files: @@ -57,14 +62,13 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): temp_name = os.path.join(stagingdir, 
"__{}".format(file)) # move original render to temp location shutil.move(original_name, temp_name) - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = oiio_tool_args + [ os.path.join(stagingdir, temp_name), "--scanline", "-o", os.path.join(stagingdir, original_name) ] subprocess_exr = " ".join(oiio_cmd) - self.log.info(f"running: {subprocess_exr}") + self.log.debug(f"running: {subprocess_exr}") run_subprocess(subprocess_exr, logger=self.log) # raise error if there is no ouptput diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index 1d86741470..ff42dee1f3 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -1,10 +1,11 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -43,12 +44,12 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): # Skip if instance have 'review' key in data set to 'False' if not self._is_review_instance(instance): - self.log.info("Skipping - no review set on instance.") + self.log.debug("Skipping - no review set on instance.") return # Check if already has thumbnail created if self._already_has_thumbnail(instance_repres): - self.log.info("Thumbnail representation already present.") + self.log.debug("Thumbnail representation already present.") return # skip crypto passes. @@ -58,15 +59,15 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): # representation that can be determined much earlier and # with better precision. if "crypto" in subset_name.lower(): - self.log.info("Skipping crypto passes.") + self.log.debug("Skipping crypto passes.") return filtered_repres = self._get_filtered_repres(instance) if not filtered_repres: - self.log.info(( - "Instance don't have representations" - " that can be used as source for thumbnail. Skipping" - )) + self.log.info( + "Instance doesn't have representations that can be used " + "as source for thumbnail. Skipping thumbnail extraction." + ) return # Create temp directory for thumbnail @@ -117,10 +118,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): # oiiotool isn't available if not thumbnail_created: if oiio_supported: - self.log.info(( + self.log.debug( "Converting with FFMPEG because input" " can't be read by OIIO." - )) + ) thumbnail_created = self.create_thumbnail_ffmpeg( full_input_path, full_output_path @@ -175,8 +176,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): continue if not repre.get("files"): - self.log.info(( - "Representation \"{}\" don't have files. Skipping" + self.log.debug(( + "Representation \"{}\" doesn't have files. 
Skipping" ).format(repre["name"])) continue @@ -193,6 +194,21 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): view ): self.log.info("Extracting thumbnail {}".format(dst_path)) + oiio_cmd = get_oiio_tool_args( + "oiiotool", + "-a", src_path, + "-o", dst_path + ) + self.log.debug("running: {}".format(" ".join(oiio_cmd))) + try: + run_subprocess(oiio_cmd, logger=self.log) + return True + except Exception: + self.log.warning( + "Failed to create thumbnail using oiiotool", + exc_info=True + ) + return False convert_colorspace( src_path, @@ -207,29 +223,29 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): return dst_path def create_thumbnail_ffmpeg(self, src_path, dst_path): - self.log.info("outputting {}".format(dst_path)) + self.log.debug("Extracting thumbnail with FFMPEG: {}".format(dst_path)) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} - jpeg_items = [] - jpeg_items.append(path_to_subprocess_arg(ffmpeg_path)) - # override file if already exists - jpeg_items.append("-y") + jpeg_items = [ + subprocess.list2cmdline(ffmpeg_path_args) + ] # flag for large file sizes max_int = 2147483647 - jpeg_items.append("-analyzeduration {}".format(max_int)) - jpeg_items.append("-probesize {}".format(max_int)) + jpeg_items.extend([ + "-y", + "-analyzeduration", str(max_int), + "-probesize", str(max_int), + ]) # use same input args like with mov jpeg_items.extend(ffmpeg_args.get("input") or []) # input file - jpeg_items.append("-i {}".format( - path_to_subprocess_arg(src_path) - )) + jpeg_items.extend(["-i", path_to_subprocess_arg(src_path)]) # output arguments from presets jpeg_items.extend(ffmpeg_args.get("output") or []) # we just want one frame from movie files - jpeg_items.append("-vframes 1") + jpeg_items.extend(["-vframes", "1"]) # output file jpeg_items.append(path_to_subprocess_arg(dst_path)) subprocess_command = " ".join(jpeg_items) @@ -240,7 +256,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): return True except Exception: self.log.warning( - "Failed to create thubmnail using ffmpeg", + "Failed to create thumbnail using ffmpeg", exc_info=True ) return False diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index a9c95d6065..401a5d615d 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -17,8 +17,8 @@ import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -49,7 +49,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): # Check if already has thumbnail created if self._instance_has_thumbnail(instance): - self.log.info("Thumbnail representation already present.") + self.log.debug("Thumbnail representation already present.") return dst_filepath = self._create_thumbnail( @@ -98,7 +98,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): thumbnail_created = False oiio_supported = is_oiio_supported() - self.log.info("Thumbnail source: {}".format(thumbnail_source)) + self.log.debug("Thumbnail source: {}".format(thumbnail_source)) src_basename = os.path.basename(thumbnail_source) dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg" full_output_path = os.path.join(dst_staging, dst_filename) @@ -115,10 +115,10 @@ class 
ExtractThumbnailFromSource(pyblish.api.InstancePlugin): # oiiotool isn't available if not thumbnail_created: if oiio_supported: - self.log.info(( + self.log.info( "Converting with FFMPEG because input" " can't be read by OIIO." - )) + ) thumbnail_created = self.create_thumbnail_ffmpeg( thumbnail_source, full_output_path @@ -128,7 +128,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): if thumbnail_created: return full_output_path - self.log.warning("Thumbanil has not been created.") + self.log.warning("Thumbnail has not been created.") def _instance_has_thumbnail(self, instance): if "representations" not in instance.data: @@ -143,45 +143,43 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): return False def create_thumbnail_oiio(self, src_path, dst_path): - self.log.info("outputting {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + self.log.debug("Outputting thumbnail with OIIO: {}".format(dst_path)) + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, + "--ch", "R,G,B", "-o", dst_path - ] - self.log.info("Running: {}".format(" ".join(oiio_cmd))) + ) + self.log.debug("Running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) return True except Exception: self.log.warning( - "Failed to create thubmnail using oiiotool", + "Failed to create thumbnail using oiiotool", exc_info=True ) return False def create_thumbnail_ffmpeg(self, src_path, dst_path): - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - max_int = str(2147483647) - ffmpeg_cmd = [ - ffmpeg_path, + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-analyzeduration", max_int, "-probesize", max_int, "-i", src_path, "-vframes", "1", dst_path - ] + ) - self.log.info("Running: {}".format(" ".join(ffmpeg_cmd))) + self.log.debug("Running: {}".format(" ".join(ffmpeg_cmd))) try: run_subprocess(ffmpeg_cmd, logger=self.log) return True except Exception: self.log.warning( - "Failed to create thubmnail using ffmpeg", + "Failed to create thumbnail using ffmpeg", exc_info=True ) return False diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py index b951136391..5e00cfc96f 100644 --- a/openpype/plugins/publish/extract_trim_video_audio.py +++ b/openpype/plugins/publish/extract_trim_video_audio.py @@ -4,7 +4,7 @@ from pprint import pformat import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -32,11 +32,11 @@ class ExtractTrimVideoAudio(publish.Extractor): instance.data["representations"] = list() # get ffmpet path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_tool_args = get_ffmpeg_tool_args("ffmpeg") # get staging dir staging_dir = self.staging_dir(instance) - self.log.info("Staging dir set to: `{}`".format(staging_dir)) + self.log.debug("Staging dir set to: `{}`".format(staging_dir)) # Generate mov file. fps = instance.data["fps"] @@ -59,7 +59,7 @@ class ExtractTrimVideoAudio(publish.Extractor): extensions = [output_file_type] for ext in extensions: - self.log.info("Processing ext: `{}`".format(ext)) + self.log.debug("Processing ext: `{}`".format(ext)) if not ext.startswith("."): ext = "." 
+ ext @@ -76,8 +76,7 @@ class ExtractTrimVideoAudio(publish.Extractor): if "trimming" not in fml ] - ffmpeg_args = [ - ffmpeg_path, + ffmpeg_args = ffmpeg_tool_args + [ "-ss", str(clip_start_h / fps), "-i", video_file_path, "-t", str(clip_dur_h / fps) @@ -99,7 +98,7 @@ class ExtractTrimVideoAudio(publish.Extractor): ffmpeg_args.append(clip_trimed_path) joined_args = " ".join(ffmpeg_args) - self.log.info(f"Processing: {joined_args}") + self.log.debug(f"Processing: {joined_args}") run_subprocess( ffmpeg_args, logger=self.log ) diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index f392cf67f7..ce24dad1b5 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -2,9 +2,10 @@ import os import logging import sys import copy +import datetime + import clique import six - from bson.objectid import ObjectId import pyblish.api @@ -137,7 +138,9 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "mvUsdOverride", "simpleUnrealTexture", "online", - "uasset" + "uasset", + "blendScene", + "yeticacheUE" ] default_template_name = "publish" @@ -148,24 +151,18 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "project", "asset", "task", "subset", "version", "representation", "family", "hierarchy", "username", "user", "output" ] - skip_host_families = [] def process(self, instance): - if self._temp_skip_instance_by_settings(instance): - return - - # Mark instance as processed for legacy integrator - instance.data["processedWithNewIntegrator"] = True # Instance should be integrated on a farm if instance.data.get("farm"): - self.log.info( + self.log.debug( "Instance is marked to be processed on farm. Skipping") return # Instance is marked to not get integrated if not instance.data.get("integrate", True): - self.log.info("Instance is marked to skip integrating. Skipping") + self.log.debug("Instance is marked to skip integrating. Skipping") return filtered_repres = self.filter_representations(instance) @@ -201,39 +198,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # the try, except. file_transactions.finalize() - def _temp_skip_instance_by_settings(self, instance): - """Decide if instance will be processed with new or legacy integrator. - - This is temporary solution until we test all usecases with new (this) - integrator plugin. - """ - - host_name = instance.context.data["hostName"] - instance_family = instance.data["family"] - instance_families = set(instance.data.get("families") or []) - - skip = False - for item in self.skip_host_families: - if host_name not in item["host"]: - continue - - families = set(item["families"]) - if instance_family in families: - skip = True - break - - for family in instance_families: - if family in families: - skip = True - break - - if skip: - break - - if skip: - self.log.debug("Instance is marked to be skipped by settings.") - return skip - def filter_representations(self, instance): # Prepare repsentations that should be integrated repres = instance.data.get("representations") @@ -343,7 +307,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # increase if the file transaction takes a long time. 
op_session.commit() - self.log.info("Subset {subset[name]} and Version {version[name]} " + self.log.info("Subset '{subset[name]}' version {version[name]} " "written to database..".format(subset=subset, version=version)) @@ -358,10 +322,16 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # Get the accessible sites for Site Sync modules_by_name = instance.context.data["openPypeModules"] - sync_server_module = modules_by_name["sync_server"] - sites = sync_server_module.compute_resource_sync_sites( - project_name=instance.data["projectEntity"]["name"] - ) + sync_server_module = modules_by_name.get("sync_server") + if sync_server_module is None: + sites = [{ + "name": "studio", + "created_dt": datetime.datetime.now() + }] + else: + sites = sync_server_module.compute_resource_sync_sites( + project_name=instance.data["projectEntity"]["name"] + ) self.log.debug("Sync Server Sites: {}".format(sites)) # Compute the resource file infos once (files belonging to the @@ -423,8 +393,13 @@ class IntegrateAsset(pyblish.api.InstancePlugin): p["representation"]["_id"]: p for p in prepared_representations } - self.log.info("Registered {} representations" - "".format(len(prepared_representations))) + self.log.info( + "Registered {} representations: {}".format( + len(prepared_representations), + ", ".join(p["representation"]["name"] + for p in prepared_representations) + ) + ) def prepare_subset(self, instance, op_session, project_name): asset_doc = instance.data["assetEntity"] diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index b71207c24f..9f0f7fe7f3 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -6,6 +6,7 @@ import shutil import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_version_by_id, get_hero_version_by_subset_id, @@ -141,6 +142,12 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): )) return + if AYON_SERVER_ENABLED and src_version_entity["name"] == 0: + self.log.debug( + "Version 0 cannot have hero version. Skipping." + ) + return + all_copied_files = [] transfers = instance.data.get("transfers", list()) for _src, dst in transfers: @@ -195,11 +202,20 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): entity_id = None if old_version: entity_id = old_version["_id"] - new_hero_version = new_hero_version_doc( - src_version_entity["_id"], - src_version_entity["parent"], - entity_id=entity_id - ) + + if AYON_SERVER_ENABLED: + new_hero_version = new_hero_version_doc( + src_version_entity["parent"], + copy.deepcopy(src_version_entity["data"]), + src_version_entity["name"], + entity_id=entity_id + ) + else: + new_hero_version = new_hero_version_doc( + src_version_entity["_id"], + src_version_entity["parent"], + entity_id=entity_id + ) if old_version: self.log.debug("Replacing old hero version.") @@ -259,10 +275,10 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): backup_hero_publish_dir = _backup_hero_publish_dir break except Exception: - self.log.info(( + self.log.info( "Could not remove previous backup folder." - " Trying to add index to folder name" - )) + " Trying to add index to folder name." 
+ ) _backup_hero_publish_dir = ( backup_hero_publish_dir + str(idx) diff --git a/openpype/plugins/publish/integrate_inputlinks.py b/openpype/plugins/publish/integrate_inputlinks.py index 6964f2d938..3baa462a81 100644 --- a/openpype/plugins/publish/integrate_inputlinks.py +++ b/openpype/plugins/publish/integrate_inputlinks.py @@ -3,6 +3,7 @@ from collections import OrderedDict from bson.objectid import ObjectId import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io @@ -34,6 +35,7 @@ class IntegrateInputLinks(pyblish.api.ContextPlugin): plugin. """ + workfile = None publishing = [] @@ -133,3 +135,7 @@ class IntegrateInputLinks(pyblish.api.ContextPlugin): {"_id": version_doc["_id"]}, {"$set": {"data.inputLinks": input_links}} ) + + +if AYON_SERVER_ENABLED: + del IntegrateInputLinks diff --git a/openpype/plugins/publish/integrate_inputlinks_ayon.py b/openpype/plugins/publish/integrate_inputlinks_ayon.py new file mode 100644 index 0000000000..28684aa889 --- /dev/null +++ b/openpype/plugins/publish/integrate_inputlinks_ayon.py @@ -0,0 +1,207 @@ +import collections + +import pyblish.api +from ayon_api import ( + create_link, + make_sure_link_type_exists, + get_versions_links, +) + +from openpype import AYON_SERVER_ENABLED + + +class IntegrateInputLinksAYON(pyblish.api.ContextPlugin): + """Connecting version level dependency links""" + + order = pyblish.api.IntegratorOrder + 0.2 + label = "Connect Dependency InputLinks AYON" + + def process(self, context): + """Connect dependency links for all instances, globally + + Code steps: + - filter instances that integrated version + - have "versionEntity" entry in data + - separate workfile instance within filtered instances + - when workfile instance is available: + - link all `loadedVersions` as input of the workfile + - link workfile as input of all other integrated versions + - link version's inputs if it's instance have "inputVersions" entry + - + + inputVersions: + The "inputVersions" in instance.data should be a list of + version ids (str), which are the dependencies of the publishing + instance that should be extracted from working scene by the DCC + specific publish plugin. + """ + + workfile_instance, other_instances = self.split_instances(context) + + # Variable where links are stored in submethods + new_links_by_type = collections.defaultdict(list) + + self.create_workfile_links( + workfile_instance, other_instances, new_links_by_type) + + self.create_generative_links(other_instances, new_links_by_type) + + self.create_links_on_server(context, new_links_by_type) + + def split_instances(self, context): + workfile_instance = None + other_instances = [] + + for instance in context: + # Skip inactive instances + if not instance.data.get("publish", True): + continue + + version_doc = instance.data.get("versionEntity") + if not version_doc: + self.log.debug( + "Instance {} doesn't have version.".format(instance)) + continue + + family = instance.data.get("family") + if family == "workfile": + workfile_instance = instance + else: + other_instances.append(instance) + return workfile_instance, other_instances + + def add_link(self, new_links_by_type, link_type, input_id, output_id): + """Add dependency link data into temporary variable. + + Args: + new_links_by_type (dict[str, list[dict[str, Any]]]): Object where + output is stored. + link_type (str): Type of link, one of 'reference' or 'generative' + input_id (str): Input version id. + output_id (str): Output version id. 
+ """ + + new_links_by_type[link_type].append((input_id, output_id)) + + def create_workfile_links( + self, workfile_instance, other_instances, new_links_by_type + ): + if workfile_instance is None: + self.log.warn("No workfile in this publish session.") + return + + workfile_version_id = workfile_instance.data["versionEntity"]["_id"] + # link workfile to all publishing versions + for instance in other_instances: + self.add_link( + new_links_by_type, + "generative", + workfile_version_id, + instance.data["versionEntity"]["_id"], + ) + + loaded_versions = workfile_instance.context.get("loadedVersions") + if not loaded_versions: + return + + # link all loaded versions in scene into workfile + for version in loaded_versions: + self.add_link( + new_links_by_type, + "reference", + version["version"], + workfile_version_id, + ) + + def create_generative_links(self, other_instances, new_links_by_type): + for instance in other_instances: + input_versions = instance.data.get("inputVersions") + if not input_versions: + continue + + version_entity = instance.data["versionEntity"] + for input_version in input_versions: + self.add_link( + new_links_by_type, + "generative", + input_version, + version_entity["_id"], + ) + + def _get_existing_links(self, project_name, link_type, entity_ids): + """Find all existing links for given version ids. + + Args: + project_name (str): Name of project. + link_type (str): Type of link. + entity_ids (set[str]): Set of version ids. + + Returns: + dict[str, set[str]]: Existing links by version id. + """ + + output = collections.defaultdict(set) + if not entity_ids: + return output + + existing_in_links = get_versions_links( + project_name, entity_ids, [link_type], "output" + ) + + for entity_id, links in existing_in_links.items(): + if not links: + continue + for link in links: + output[entity_id].add(link["entityId"]) + return output + + def create_links_on_server(self, context, new_links): + """Create new links on server. + + Args: + dict[str, list[tuple[str, str]]]: Version links by link type. 
+ """ + + if not new_links: + return + + project_name = context.data["projectName"] + + # Make sure link types are available on server + for link_type in new_links.keys(): + make_sure_link_type_exists( + project_name, link_type, "version", "version" + ) + + # Create link themselves + for link_type, items in new_links.items(): + mapping = collections.defaultdict(set) + # Make sure there are no duplicates of src > dst ids + for item in items: + _input_id, _output_id = item + mapping[_input_id].add(_output_id) + + existing_links_by_in_id = self._get_existing_links( + project_name, link_type, set(mapping.keys()) + ) + + for input_id, output_ids in mapping.items(): + existing_links = existing_links_by_in_id[input_id] + for output_id in output_ids: + # Skip creation of link if already exists + # NOTE: AYON server does not support + # to have same links + if output_id in existing_links: + continue + create_link( + project_name, + link_type, + input_id, + "version", + output_id, + "version" + ) + + +if not AYON_SERVER_ENABLED: + del IntegrateInputLinksAYON diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py deleted file mode 100644 index c238cca633..0000000000 --- a/openpype/plugins/publish/integrate_legacy.py +++ /dev/null @@ -1,1299 +0,0 @@ -import os -from os.path import getsize -import logging -import sys -import copy -import clique -import errno -import six -import re -import shutil -from collections import deque, defaultdict -from datetime import datetime - -from bson.objectid import ObjectId -from pymongo import DeleteOne, InsertOne -import pyblish.api - -from openpype.client import ( - get_asset_by_name, - get_subset_by_id, - get_subset_by_name, - get_version_by_id, - get_version_by_name, - get_representations, - get_archived_representations, -) -from openpype.lib import ( - prepare_template_data, - create_hard_link, - StringTemplate, - TemplateUnsolved, - source_hash, - filter_profiles, - get_local_site_id, -) -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import get_publish_template_name - -# this is needed until speedcopy for linux is fixed -if sys.platform == "win32": - from speedcopy import copyfile -else: - from shutil import copyfile - -log = logging.getLogger(__name__) - - -class IntegrateAssetNew(pyblish.api.InstancePlugin): - """Resolve any dependency issues - - This plug-in resolves any paths which, if not updated might break - the published file. - - The order of families is important, when working with lookdev you want to - first publish the texture, update the texture paths in the nodes and then - publish the shading network. Same goes for file dependent assets. - - Requirements for instance to be correctly integrated - - instance.data['representations'] - must be a list and each member - must be a dictionary with following data: - 'files': list of filenames for sequence, string for single file. - Only the filename is allowed, without the folder path. - 'stagingDir': "path/to/folder/with/files" - 'name': representation name (usually the same as extension) - 'ext': file extension - optional data - "frameStart" - "frameEnd" - 'fps' - "data": additional metadata for each representation. 
- """ - - label = "Integrate Asset (legacy)" - # Make sure it happens after new integrator - order = pyblish.api.IntegratorOrder + 0.00001 - families = ["workfile", - "pointcache", - "pointcloud", - "proxyAbc", - "camera", - "animation", - "model", - "maxScene", - "mayaAscii", - "mayaScene", - "setdress", - "layout", - "ass", - "vdbcache", - "scene", - "vrayproxy", - "vrayscene_layer", - "render", - "prerender", - "imagesequence", - "review", - "rendersetup", - "rig", - "plate", - "look", - "audio", - "yetiRig", - "yeticache", - "nukenodes", - "gizmo", - "source", - "matchmove", - "image", - "assembly", - "fbx", - "gltf", - "textures", - "action", - "harmony.template", - "harmony.palette", - "editorial", - "background", - "camerarig", - "redshiftproxy", - "effect", - "xgen", - "hda", - "usd", - "staticMesh", - "skeletalMesh", - "mvLook", - "mvUsdComposition", - "mvUsdOverride", - "simpleUnrealTexture" - ] - exclude_families = ["render.farm"] - db_representation_context_keys = [ - "project", "asset", "task", "subset", "version", "representation", - "family", "hierarchy", "task", "username", "user" - ] - default_template_name = "publish" - - # suffix to denote temporary files, use without '.' - TMP_FILE_EXT = 'tmp' - - # file_url : file_size of all published and uploaded files - integrated_file_sizes = {} - - # Attributes set by settings - subset_grouping_profiles = None - - def process(self, instance): - if instance.data.get("processedWithNewIntegrator"): - self.log.debug( - "Instance was already processed with new integrator" - ) - return - - for ef in self.exclude_families: - if ( - instance.data["family"] == ef or - ef in instance.data["families"]): - self.log.debug("Excluded family '{}' in '{}' or {}".format( - ef, instance.data["family"], instance.data["families"])) - return - - # instance should be published on a farm - if instance.data.get("farm"): - return - - # Prepare repsentations that should be integrated - repres = instance.data.get("representations") - # Raise error if instance don't have any representations - if not repres: - raise ValueError( - "Instance {} has no files to transfer".format( - instance.data["family"] - ) - ) - - # Validate type of stored representations - if not isinstance(repres, (list, tuple)): - raise TypeError( - "Instance 'files' must be a list, got: {0} {1}".format( - str(type(repres)), str(repres) - ) - ) - - # Filter representations - filtered_repres = [] - for repre in repres: - if "delete" in repre.get("tags", []): - continue - filtered_repres.append(repre) - - # Skip instance if there are not representations to integrate - # all representations should not be integrated - if not filtered_repres: - self.log.warning(( - "Skipping, there are no representations" - " to integrate for instance {}" - ).format(instance.data["family"])) - return - - self.integrated_file_sizes = {} - try: - self.register(instance, filtered_repres) - self.log.info("Integrated Asset in to the database ...") - self.log.info("instance.data: {}".format(instance.data)) - self.handle_destination_files(self.integrated_file_sizes, - 'finalize') - except Exception: - # clean destination - self.log.critical("Error when registering", exc_info=True) - self.handle_destination_files(self.integrated_file_sizes, 'remove') - six.reraise(*sys.exc_info()) - - def register(self, instance, repres): - # Required environment variables - anatomy_data = instance.data["anatomyData"] - - legacy_io.install() - - context = instance.context - - project_entity = instance.data["projectEntity"] - project_name 
= project_entity["name"] - - context_asset_name = None - context_asset_doc = context.data.get("assetEntity") - if context_asset_doc: - context_asset_name = context_asset_doc["name"] - - asset_name = instance.data["asset"] - asset_entity = instance.data.get("assetEntity") - if not asset_entity or asset_entity["name"] != context_asset_name: - asset_entity = get_asset_by_name(project_name, asset_name) - assert asset_entity, ( - "No asset found by the name \"{0}\" in project \"{1}\"" - ).format(asset_name, project_entity["name"]) - - instance.data["assetEntity"] = asset_entity - - # update anatomy data with asset specific keys - # - name should already been set - hierarchy = "" - parents = asset_entity["data"]["parents"] - if parents: - hierarchy = "/".join(parents) - anatomy_data["hierarchy"] = hierarchy - - # Make sure task name in anatomy data is same as on instance.data - asset_tasks = ( - asset_entity.get("data", {}).get("tasks") - ) or {} - task_name = instance.data.get("task") - if task_name: - task_info = asset_tasks.get(task_name) or {} - task_type = task_info.get("type") - - project_task_types = project_entity["config"]["tasks"] - task_code = project_task_types.get(task_type, {}).get("short_name") - anatomy_data["task"] = { - "name": task_name, - "type": task_type, - "short": task_code - } - - elif "task" in anatomy_data: - # Just set 'task_name' variable to context task - task_name = anatomy_data["task"]["name"] - task_type = anatomy_data["task"]["type"] - - else: - task_name = None - task_type = None - - # Fill family in anatomy data - anatomy_data["family"] = instance.data.get("family") - - stagingdir = instance.data.get("stagingDir") - if not stagingdir: - self.log.debug(( - "{0} is missing reference to staging directory." - " Will try to get it from representation." - ).format(instance)) - - else: - self.log.debug( - "Establishing staging directory @ {0}".format(stagingdir) - ) - - subset = self.get_subset(project_name, asset_entity, instance) - instance.data["subsetEntity"] = subset - - version_number = instance.data["version"] - self.log.debug("Next version: v{}".format(version_number)) - - version_data = self.create_version_data(context, instance) - - version_data_instance = instance.data.get('versionData') - if version_data_instance: - version_data.update(version_data_instance) - - # TODO rename method from `create_version` to - # `prepare_version` or similar... 
- version = self.create_version( - subset=subset, - version_number=version_number, - data=version_data - ) - - self.log.debug("Creating version ...") - - new_repre_names_low = [ - _repre["name"].lower() - for _repre in repres - ] - - existing_version = get_version_by_name( - project_name, version_number, subset["_id"] - ) - - if existing_version is None: - version_id = legacy_io.insert_one(version).inserted_id - else: - # Check if instance have set `append` mode which cause that - # only replicated representations are set to archive - append_repres = instance.data.get("append", False) - - # Update version data - # TODO query by _id and - legacy_io.update_many({ - 'type': 'version', - 'parent': subset["_id"], - 'name': version_number - }, { - '$set': version - }) - version_id = existing_version['_id'] - - # Find representations of existing version and archive them - current_repres = list(get_representations( - project_name, version_ids=[version_id] - )) - bulk_writes = [] - for repre in current_repres: - if append_repres: - # archive only duplicated representations - if repre["name"].lower() not in new_repre_names_low: - continue - # Representation must change type, - # `_id` must be stored to other key and replaced with new - # - that is because new representations should have same ID - repre_id = repre["_id"] - bulk_writes.append(DeleteOne({"_id": repre_id})) - - repre["orig_id"] = repre_id - repre["_id"] = ObjectId() - repre["type"] = "archived_representation" - bulk_writes.append(InsertOne(repre)) - - # bulk updates - if bulk_writes: - legacy_io.database[project_name].bulk_write( - bulk_writes - ) - - version = get_version_by_id(project_name, version_id) - instance.data["versionEntity"] = version - - existing_repres = list(get_archived_representations( - project_name, - version_ids=[version_id] - )) - - instance.data['version'] = version['name'] - - intent_value = instance.context.data.get("intent") - if intent_value and isinstance(intent_value, dict): - intent_value = intent_value.get("value") - - if intent_value: - anatomy_data["intent"] = intent_value - - anatomy = instance.context.data['anatomy'] - - # Find the representations to transfer amongst the files - # Each should be a single representation (as such, a single extension) - representations = [] - destination_list = [] - - orig_transfers = [] - if 'transfers' not in instance.data: - instance.data['transfers'] = [] - else: - orig_transfers = list(instance.data['transfers']) - - family = self.main_family_from_instance(instance) - - template_name = get_publish_template_name( - project_name, - instance.context.data["hostName"], - family, - task_name=task_info.get("name"), - task_type=task_info.get("type"), - project_settings=instance.context.data["project_settings"], - logger=self.log - ) - - published_representations = {} - for idx, repre in enumerate(repres): - published_files = [] - - # create template data for Anatomy - template_data = copy.deepcopy(anatomy_data) - if intent_value is not None: - template_data["intent"] = intent_value - - resolution_width = repre.get("resolutionWidth") - resolution_height = repre.get("resolutionHeight") - fps = instance.data.get("fps") - - if resolution_width: - template_data["resolution_width"] = resolution_width - if resolution_width: - template_data["resolution_height"] = resolution_height - if resolution_width: - template_data["fps"] = fps - - if "originalBasename" in instance.data: - template_data.update({ - "originalBasename": instance.data.get("originalBasename") - }) - - files = 
repre['files'] - if repre.get('stagingDir'): - stagingdir = repre['stagingDir'] - - if repre.get("outputName"): - template_data["output"] = repre['outputName'] - - template_data["representation"] = repre["name"] - - ext = repre["ext"] - if ext.startswith("."): - self.log.warning(( - "Implementaion warning: <\"{}\">" - " Representation's extension stored under \"ext\" key " - " started with dot (\"{}\")." - ).format(repre["name"], ext)) - ext = ext[1:] - repre["ext"] = ext - template_data["ext"] = ext - - self.log.info(template_name) - template = os.path.normpath( - anatomy.templates[template_name]["path"]) - - sequence_repre = isinstance(files, list) - repre_context = None - if sequence_repre: - self.log.debug( - "files: {}".format(files)) - src_collections, remainder = clique.assemble(files) - self.log.debug( - "src_tail_collections: {}".format(str(src_collections))) - src_collection = src_collections[0] - - # Assert that each member has identical suffix - src_head = src_collection.format("{head}") - src_tail = src_collection.format("{tail}") - - # fix dst_padding - valid_files = [x for x in files if src_collection.match(x)] - padd_len = len( - valid_files[0].replace(src_head, "").replace(src_tail, "") - ) - src_padding_exp = "%0{}d".format(padd_len) - - test_dest_files = list() - for i in [1, 2]: - template_data["representation"] = repre['ext'] - if not repre.get("udim"): - template_data["frame"] = src_padding_exp % i - else: - template_data["udim"] = src_padding_exp % i - - template_obj = anatomy.templates_obj[template_name]["path"] - template_filled = template_obj.format_strict(template_data) - if repre_context is None: - repre_context = template_filled.used_values - test_dest_files.append( - os.path.normpath(template_filled) - ) - if not repre.get("udim"): - template_data["frame"] = repre_context["frame"] - else: - template_data["udim"] = repre_context["udim"] - - self.log.debug( - "test_dest_files: {}".format(str(test_dest_files))) - - dst_collections, remainder = clique.assemble(test_dest_files) - dst_collection = dst_collections[0] - dst_head = dst_collection.format("{head}") - dst_tail = dst_collection.format("{tail}") - - index_frame_start = None - - # TODO use frame padding from right template group - if repre.get("frameStart") is not None: - frame_start_padding = int( - anatomy.templates["render"].get( - "frame_padding", - anatomy.templates["render"].get("padding") - ) - ) - - index_frame_start = int(repre.get("frameStart")) - - # exception for slate workflow - if index_frame_start and "slate" in instance.data["families"]: - index_frame_start -= 1 - - dst_padding_exp = src_padding_exp - dst_start_frame = None - collection_start = list(src_collection.indexes)[0] - for i in src_collection.indexes: - # TODO 1.) do not count padding in each index iteration - # 2.) 
do not count dst_padding from src_padding before - # index_frame_start check - frame_number = i - collection_start - src_padding = src_padding_exp % i - - src_file_name = "{0}{1}{2}".format( - src_head, src_padding, src_tail) - - dst_padding = src_padding_exp % frame_number - - if index_frame_start is not None: - dst_padding_exp = "%0{}d".format(frame_start_padding) - dst_padding = dst_padding_exp % (index_frame_start + frame_number) # noqa: E501 - elif repre.get("udim"): - dst_padding = int(i) - - dst = "{0}{1}{2}".format( - dst_head, - dst_padding, - dst_tail - ) - - self.log.debug("destination: `{}`".format(dst)) - src = os.path.join(stagingdir, src_file_name) - - self.log.debug("source: {}".format(src)) - instance.data["transfers"].append([src, dst]) - - published_files.append(dst) - - # for adding first frame into db - if not dst_start_frame: - dst_start_frame = dst_padding - - # Store used frame value to template data - if repre.get("frame"): - template_data["frame"] = dst_start_frame - - dst = "{0}{1}{2}".format( - dst_head, - dst_start_frame, - dst_tail - ) - repre['published_path'] = dst - - else: - # Single file - # _______ - # | |\ - # | | - # | | - # | | - # |_______| - # - template_data.pop("frame", None) - fname = files - assert not os.path.isabs(fname), ( - "Given file name is a full path" - ) - - template_data["representation"] = repre['ext'] - # Store used frame value to template data - if repre.get("udim"): - template_data["udim"] = repre["udim"][0] - src = os.path.join(stagingdir, fname) - template_obj = anatomy.templates_obj[template_name]["path"] - template_filled = template_obj.format_strict(template_data) - repre_context = template_filled.used_values - dst = os.path.normpath(template_filled) - - instance.data["transfers"].append([src, dst]) - - published_files.append(dst) - repre['published_path'] = dst - self.log.debug("__ dst: {}".format(dst)) - - if not instance.data.get("publishDir"): - instance.data["publishDir"] = ( - anatomy.templates_obj[template_name]["folder"] - .format_strict(template_data) - ) - if repre.get("udim"): - repre_context["udim"] = repre.get("udim") # store list - - repre["publishedFiles"] = published_files - - for key in self.db_representation_context_keys: - value = template_data.get(key) - if not value: - continue - repre_context[key] = template_data[key] - - # Use previous representation's id if there are any - repre_id = None - repre_name_low = repre["name"].lower() - for _repre in existing_repres: - # NOTE should we check lowered names? - if repre_name_low == _repre["name"]: - repre_id = _repre["orig_id"] - break - - # Create new id if existing representations does not match - if repre_id is None: - repre_id = ObjectId() - - data = repre.get("data") or {} - data.update({'path': dst, 'template': template}) - representation = { - "_id": repre_id, - "schema": "openpype:representation-2.0", - "type": "representation", - "parent": version_id, - "name": repre['name'], - "data": data, - "dependencies": instance.data.get("dependencies", "").split(), - - # Imprint shortcut to context - # for performance reasons. 
- "context": repre_context - } - - if repre.get("outputName"): - representation["context"]["output"] = repre['outputName'] - - if sequence_repre and repre.get("frameStart") is not None: - representation['context']['frame'] = ( - dst_padding_exp % int(repre.get("frameStart")) - ) - - # any file that should be physically copied is expected in - # 'transfers' or 'hardlinks' - if instance.data.get('transfers', False) or \ - instance.data.get('hardlinks', False): - # could throw exception, will be caught in 'process' - # all integration to DB is being done together lower, - # so no rollback needed - self.log.debug("Integrating source files to destination ...") - self.integrated_file_sizes.update(self.integrate(instance)) - self.log.debug("Integrated files {}". - format(self.integrated_file_sizes)) - - # get 'files' info for representation and all attached resources - self.log.debug("Preparing files information ...") - representation["files"] = self.get_files_info( - instance, - self.integrated_file_sizes) - - self.log.debug("__ representation: {}".format(representation)) - destination_list.append(dst) - self.log.debug("__ destination_list: {}".format(destination_list)) - instance.data['destination_list'] = destination_list - representations.append(representation) - published_representations[repre_id] = { - "representation": representation, - "anatomy_data": template_data, - "published_files": published_files - } - self.log.debug("__ representations: {}".format(representations)) - # reset transfers for next representation - # instance.data['transfers'] is used as a global variable - # in current codebase - instance.data['transfers'] = list(orig_transfers) - - # Remove old representations if there are any (before insertion of new) - if existing_repres: - repre_ids_to_remove = [] - for repre in existing_repres: - repre_ids_to_remove.append(repre["_id"]) - legacy_io.delete_many({"_id": {"$in": repre_ids_to_remove}}) - - for rep in instance.data["representations"]: - self.log.debug("__ rep: {}".format(rep)) - - legacy_io.insert_many(representations) - instance.data["published_representations"] = ( - published_representations - ) - # self.log.debug("Representation: {}".format(representations)) - self.log.info("Registered {} items".format(len(representations))) - - def integrate(self, instance): - """ Move the files. - - Through `instance.data["transfers"]` - - Args: - instance: the instance to integrate - Returns: - integrated_file_sizes: dictionary of destination file url and - its size in bytes - """ - # store destination url and size for reporting and rollback - integrated_file_sizes = {} - transfers = list(instance.data.get("transfers", list())) - for src, dest in transfers: - if os.path.normpath(src) != os.path.normpath(dest): - dest = self.get_dest_temp_url(dest) - self.copy_file(src, dest) - # TODO needs to be updated during site implementation - integrated_file_sizes[dest] = os.path.getsize(dest) - - # Produce hardlinked copies - # Note: hardlink can only be produced between two files on the same - # server/disk and editing one of the two will edit both files at once. - # As such it is recommended to only make hardlinks between static files - # to ensure publishes remain safe and non-edited. - hardlinks = instance.data.get("hardlinks", list()) - for src, dest in hardlinks: - dest = self.get_dest_temp_url(dest) - self.log.debug("Hardlinking file ... 
{} -> {}".format(src, dest)) - if not os.path.exists(dest): - self.hardlink_file(src, dest) - - # TODO needs to be updated during site implementation - integrated_file_sizes[dest] = os.path.getsize(dest) - - return integrated_file_sizes - - def copy_file(self, src, dst): - """ Copy given source to destination - - Arguments: - src (str): the source file which needs to be copied - dst (str): the destination of the sourc file - Returns: - None - """ - src = os.path.normpath(src) - dst = os.path.normpath(dst) - self.log.debug("Copying file ... {} -> {}".format(src, dst)) - dirname = os.path.dirname(dst) - try: - os.makedirs(dirname) - except OSError as e: - if e.errno == errno.EEXIST: - pass - else: - self.log.critical("An unexpected error occurred.") - six.reraise(*sys.exc_info()) - - # copy file with speedcopy and check if size of files are simetrical - while True: - if not shutil._samefile(src, dst): - copyfile(src, dst) - else: - self.log.critical( - "files are the same {} to {}".format(src, dst) - ) - os.remove(dst) - try: - shutil.copyfile(src, dst) - self.log.debug("Copying files with shutil...") - except OSError as e: - self.log.critical("Cannot copy {} to {}".format(src, dst)) - self.log.critical(e) - six.reraise(*sys.exc_info()) - if str(getsize(src)) in str(getsize(dst)): - break - - def hardlink_file(self, src, dst): - dirname = os.path.dirname(dst) - - try: - os.makedirs(dirname) - except OSError as e: - if e.errno == errno.EEXIST: - pass - else: - self.log.critical("An unexpected error occurred.") - six.reraise(*sys.exc_info()) - - create_hard_link(src, dst) - - def get_subset(self, project_name, asset, instance): - subset_name = instance.data["subset"] - subset = get_subset_by_name(project_name, subset_name, asset["_id"]) - - if subset is None: - self.log.info("Subset '%s' not found, creating ..." % subset_name) - self.log.debug("families. %s" % instance.data.get('families')) - self.log.debug( - "families. %s" % type(instance.data.get('families'))) - - family = instance.data.get("family") - families = [] - if family: - families.append(family) - - for _family in (instance.data.get("families") or []): - if _family not in families: - families.append(_family) - - _id = legacy_io.insert_one({ - "schema": "openpype:subset-3.0", - "type": "subset", - "name": subset_name, - "data": { - "families": families - }, - "parent": asset["_id"] - }).inserted_id - - subset = get_subset_by_id(project_name, _id) - - # QUESTION Why is changing of group and updating it's - # families in 'get_subset'? - self._set_subset_group(instance, subset["_id"]) - - # Update families on subset. - families = [instance.data["family"]] - families.extend(instance.data.get("families", [])) - legacy_io.update_many( - {"type": "subset", "_id": ObjectId(subset["_id"])}, - {"$set": {"data.families": families}} - ) - - return subset - - def _set_subset_group(self, instance, subset_id): - """ - Mark subset as belonging to group in DB. - - Uses Settings > Global > Publish plugins > IntegrateAssetNew - - Args: - instance (dict): processed instance - subset_id (str): DB's subset _id - - """ - # Fist look into instance data - subset_group = instance.data.get("subsetGroup") - if not subset_group: - subset_group = self._get_subset_group(instance) - - if subset_group: - legacy_io.update_many({ - 'type': 'subset', - '_id': ObjectId(subset_id) - }, {'$set': {'data.subsetGroup': subset_group}}) - - def _get_subset_group(self, instance): - """Look into subset group profiles set by settings. 
- - Attribute 'subset_grouping_profiles' is defined by OpenPype settings. - """ - # Skip if 'subset_grouping_profiles' is empty - if not self.subset_grouping_profiles: - return None - - # QUESTION - # - is there a chance that task name is not filled in anatomy - # data? - # - should we use context task in that case? - anatomy_data = instance.data["anatomyData"] - task_name = None - task_type = None - if "task" in anatomy_data: - task_name = anatomy_data["task"]["name"] - task_type = anatomy_data["task"]["type"] - filtering_criteria = { - "families": instance.data["family"], - "hosts": instance.context.data["hostName"], - "tasks": task_name, - "task_types": task_type - } - matching_profile = filter_profiles( - self.subset_grouping_profiles, - filtering_criteria - ) - # Skip if there is not matchin profile - if not matching_profile: - return None - - filled_template = None - template = matching_profile["template"] - fill_pairs = ( - ("family", filtering_criteria["families"]), - ("task", filtering_criteria["tasks"]), - ("host", filtering_criteria["hosts"]), - ("subset", instance.data["subset"]), - ("renderlayer", instance.data.get("renderlayer")) - ) - fill_pairs = prepare_template_data(fill_pairs) - - try: - filled_template = StringTemplate.format_strict_template( - template, fill_pairs - ) - except (KeyError, TemplateUnsolved): - keys = [] - if fill_pairs: - keys = fill_pairs.keys() - - msg = "Subset grouping failed. " \ - "Only {} are expected in Settings".format(','.join(keys)) - self.log.warning(msg) - - return filled_template - - def create_version(self, subset, version_number, data=None): - """ Copy given source to destination - - Args: - subset (dict): the registered subset of the asset - version_number (int): the version number - - Returns: - dict: collection of data to create a version - """ - - return {"schema": "openpype:version-3.0", - "type": "version", - "parent": subset["_id"], - "name": version_number, - "data": data} - - def create_version_data(self, context, instance): - """Create the data collection for the version - - Args: - context: the current context - instance: the current instance being published - - Returns: - dict: the required information with instance.data as key - """ - - families = [] - current_families = instance.data.get("families", list()) - instance_family = instance.data.get("family", None) - - if instance_family is not None: - families.append(instance_family) - families += current_families - - # create relative source path for DB - source = instance.data.get("source") - if not source: - source = context.data["currentFile"] - anatomy = instance.context.data["anatomy"] - source = self.get_rootless_path(anatomy, source) - - self.log.debug("Source: {}".format(source)) - version_data = { - "families": families, - "time": context.data["time"], - "author": context.data["user"], - "source": source, - "comment": instance.data["comment"], - "machine": context.data.get("machine"), - "fps": context.data.get( - "fps", instance.data.get("fps") - ) - } - - intent_value = instance.context.data.get("intent") - if intent_value and isinstance(intent_value, dict): - intent_value = intent_value.get("value") - - if intent_value: - version_data["intent"] = intent_value - - # Include optional data if present in - optionals = [ - "frameStart", "frameEnd", "step", - "handleEnd", "handleStart", "sourceHashes" - ] - for key in optionals: - if key in instance.data: - version_data[key] = instance.data[key] - - return version_data - - def main_family_from_instance(self, instance): - 
"""Returns main family of entered instance.""" - family = instance.data.get("family") - if not family: - family = instance.data["families"][0] - return family - - def get_rootless_path(self, anatomy, path): - """ Returns, if possible, path without absolute portion from host - (eg. 'c:\' or '/opt/..') - This information is host dependent and shouldn't be captured. - Example: - 'c:/projects/MyProject1/Assets/publish...' > - '{root}/MyProject1/Assets...' - - Args: - anatomy: anatomy part from instance - path: path (absolute) - Returns: - path: modified path if possible, or unmodified path - + warning logged - """ - success, rootless_path = ( - anatomy.find_root_template_from_path(path) - ) - if success: - path = rootless_path - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(path)) - return path - - def get_files_info(self, instance, integrated_file_sizes): - """ Prepare 'files' portion for attached resources and main asset. - Combining records from 'transfers' and 'hardlinks' parts from - instance. - All attached resources should be added, currently without - Context info. - - Arguments: - instance: the current instance being published - integrated_file_sizes: dictionary of destination path (absolute) - and its file size - Returns: - output_resources: array of dictionaries to be added to 'files' key - in representation - """ - resources = list(instance.data.get("transfers", [])) - resources.extend(list(instance.data.get("hardlinks", []))) - - self.log.debug("get_resource_files_info.resources:{}". - format(resources)) - - output_resources = [] - anatomy = instance.context.data["anatomy"] - for _src, dest in resources: - path = self.get_rootless_path(anatomy, dest) - dest = self.get_dest_temp_url(dest) - file_hash = source_hash(dest) - if self.TMP_FILE_EXT and \ - ',{}'.format(self.TMP_FILE_EXT) in file_hash: - file_hash = file_hash.replace(',{}'.format(self.TMP_FILE_EXT), - '') - - file_info = self.prepare_file_info(path, - integrated_file_sizes[dest], - file_hash, - instance=instance) - output_resources.append(file_info) - - return output_resources - - def get_dest_temp_url(self, dest): - """ Enhance destination path with TMP_FILE_EXT to denote temporary - file. 
- Temporary files will be renamed after successful registration - into DB and full copy to destination - - Arguments: - dest: destination url of published file (absolute) - Returns: - dest: destination path + '.TMP_FILE_EXT' - """ - if self.TMP_FILE_EXT and '.{}'.format(self.TMP_FILE_EXT) not in dest: - dest += '.{}'.format(self.TMP_FILE_EXT) - return dest - - def prepare_file_info(self, path, size=None, file_hash=None, - sites=None, instance=None): - """ Prepare information for one file (asset or resource) - - Arguments: - path: destination url of published file (rootless) - size(optional): size of file in bytes - file_hash(optional): hash of file for synchronization validation - sites(optional): array of published locations, - [ {'name':'studio', 'created_dt':date} by default - keys expected ['studio', 'site1', 'gdrive1'] - instance(dict, optional): to get collected settings - Returns: - rec: dictionary with filled info - """ - local_site = 'studio' # default - remote_site = None - always_accesible = [] - sync_project_presets = None - - rec = { - "_id": ObjectId(), - "path": path - } - if size: - rec["size"] = size - - if file_hash: - rec["hash"] = file_hash - - if sites: - rec["sites"] = sites - else: - system_sync_server_presets = ( - instance.context.data["system_settings"] - ["modules"] - ["sync_server"]) - log.debug("system_sett:: {}".format(system_sync_server_presets)) - - if system_sync_server_presets["enabled"]: - sync_project_presets = ( - instance.context.data["project_settings"] - ["global"] - ["sync_server"]) - - if sync_project_presets and sync_project_presets["enabled"]: - local_site, remote_site = self._get_sites(sync_project_presets) - - always_accesible = sync_project_presets["config"]. \ - get("always_accessible_on", []) - - already_attached_sites = {} - meta = {"name": local_site, "created_dt": datetime.now()} - rec["sites"] = [meta] - already_attached_sites[meta["name"]] = meta["created_dt"] - - if sync_project_presets and sync_project_presets["enabled"]: - if remote_site and \ - remote_site not in already_attached_sites.keys(): - # add remote - meta = {"name": remote_site.strip()} - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = None - - # add alternative sites - rec, already_attached_sites = self._add_alternative_sites( - system_sync_server_presets, already_attached_sites, rec) - - # add skeleton for site where it should be always synced to - for always_on_site in set(always_accesible): - if always_on_site not in already_attached_sites.keys(): - meta = {"name": always_on_site.strip()} - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = None - - log.debug("final sites:: {}".format(rec["sites"])) - - return rec - - def _get_sites(self, sync_project_presets): - """Returns tuple (local_site, remote_site)""" - local_site_id = get_local_site_id() - local_site = sync_project_presets["config"]. \ - get("active_site", "studio").strip() - - if local_site == 'local': - local_site = local_site_id - - remote_site = sync_project_presets["config"].get("remote_site") - - if remote_site == 'local': - remote_site = local_site_id - - return local_site, remote_site - - def _add_alternative_sites(self, - system_sync_server_presets, - already_attached_sites, - rec): - """Loop through all configured sites and add alternatives. 
- - See SyncServerModule.handle_alternate_site - """ - conf_sites = system_sync_server_presets.get("sites", {}) - - alt_site_pairs = self._get_alt_site_pairs(conf_sites) - - already_attached_keys = list(already_attached_sites.keys()) - for added_site in already_attached_keys: - real_created = already_attached_sites[added_site] - for alt_site in alt_site_pairs.get(added_site, []): - if alt_site in already_attached_sites.keys(): - continue - meta = {"name": alt_site} - # alt site inherits state of 'created_dt' - if real_created: - meta["created_dt"] = real_created - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = real_created - - return rec, already_attached_sites - - def _get_alt_site_pairs(self, conf_sites): - """Returns dict of site and its alternative sites. - - If `site` has alternative site, it means that alt_site has 'site' as - alternative site - Args: - conf_sites (dict) - Returns: - (dict): {'site': [alternative sites]...} - """ - alt_site_pairs = defaultdict(list) - for site_name, site_info in conf_sites.items(): - alt_sites = set(site_info.get("alternative_sites", [])) - alt_site_pairs[site_name].extend(alt_sites) - - for alt_site in alt_sites: - alt_site_pairs[alt_site].append(site_name) - - for site_name, alt_sites in alt_site_pairs.items(): - sites_queue = deque(alt_sites) - while sites_queue: - alt_site = sites_queue.popleft() - - # safety against wrong config - # {"SFTP": {"alternative_site": "SFTP"} - if alt_site == site_name or alt_site not in alt_site_pairs: - continue - - for alt_alt_site in alt_site_pairs[alt_site]: - if ( - alt_alt_site != site_name - and alt_alt_site not in alt_sites - ): - alt_sites.append(alt_alt_site) - sites_queue.append(alt_alt_site) - - return alt_site_pairs - - def handle_destination_files(self, integrated_file_sizes, mode): - """ Clean destination files - Called when error happened during integrating to DB or to disk - OR called to rename uploaded files from temporary name to final to - highlight publishing in progress/broken - Used to clean unwanted files - - Arguments: - integrated_file_sizes: dictionary, file urls as keys, size as value - mode: 'remove' - clean files, - 'finalize' - rename files, - remove TMP_FILE_EXT suffix denoting temp file - """ - if integrated_file_sizes: - for file_url, _file_size in integrated_file_sizes.items(): - if not os.path.exists(file_url): - self.log.debug( - "File {} was not found.".format(file_url) - ) - continue - - try: - if mode == 'remove': - self.log.debug("Removing file {}".format(file_url)) - os.remove(file_url) - if mode == 'finalize': - new_name = re.sub( - r'\.{}$'.format(self.TMP_FILE_EXT), - '', - file_url - ) - - if os.path.exists(new_name): - self.log.debug( - "Overwriting file {} to {}".format( - file_url, new_name - ) - ) - shutil.copy(file_url, new_name) - os.remove(file_url) - else: - self.log.debug( - "Renaming file {} to {}".format( - file_url, new_name - ) - ) - os.rename(file_url, new_name) - except OSError: - self.log.error("Cannot {} file {}".format(mode, file_url), - exc_info=True) - six.reraise(*sys.exc_info()) diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index 2e87d8fc86..0c12255d38 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -18,6 +18,7 @@ import collections import six import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import get_versions from openpype.client.operations import 
OperationsSession, new_thumbnail_doc from openpype.pipeline.publish import get_publish_instance_label @@ -39,6 +40,12 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): ] def process(self, context): + if AYON_SERVER_ENABLED: + self.log.debug( + "AYON is enabled. Skipping v3 thumbnail integration" + ) + return + # Filter instances which can be used for integration filtered_instance_items = self._prepare_instances(context) if not filtered_instance_items: @@ -69,14 +76,14 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): thumbnail_template = anatomy.templates["publish"]["thumbnail"] if not thumbnail_template: - self.log.info("Thumbnail template is not filled. Skipping.") + self.log.debug("Thumbnail template is not filled. Skipping.") return if ( not thumbnail_root and thumbnail_root_format_key in thumbnail_template ): - self.log.warning(("{} is not set. Skipping.").format(env_key)) + self.log.warning("{} is not set. Skipping.".format(env_key)) return # Collect verion ids from all filtered instance diff --git a/openpype/plugins/publish/integrate_thumbnail_ayon.py b/openpype/plugins/publish/integrate_thumbnail_ayon.py new file mode 100644 index 0000000000..cf05327ce8 --- /dev/null +++ b/openpype/plugins/publish/integrate_thumbnail_ayon.py @@ -0,0 +1,207 @@ +""" Integrate Thumbnails for Openpype use in Loaders. + + This thumbnail is different from 'thumbnail' representation which could + be uploaded to Ftrack, or used as any other representation in Loaders to + pull into a scene. + + This one is used only as image describing content of published item and + shows up only in Loader in right column section. +""" + +import os +import collections + +import pyblish.api + +from openpype import AYON_SERVER_ENABLED +from openpype.client import get_versions +from openpype.client.operations import OperationsSession + +InstanceFilterResult = collections.namedtuple( + "InstanceFilterResult", + ["instance", "thumbnail_path", "version_id"] +) + + +class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin): + """Integrate Thumbnails for Openpype use in Loaders.""" + + label = "Integrate Thumbnails to AYON" + order = pyblish.api.IntegratorOrder + 0.01 + + required_context_keys = [ + "project", "asset", "task", "subset", "version" + ] + + def process(self, context): + if not AYON_SERVER_ENABLED: + self.log.debug("AYON is not enabled. Skipping") + return + + # Filter instances which can be used for integration + filtered_instance_items = self._prepare_instances(context) + if not filtered_instance_items: + self.log.debug( + "All instances were filtered. Thumbnail integration skipped." 
+ ) + return + + project_name = context.data["projectName"] + + # Collect version ids from all filtered instances + version_ids = { + instance_items.version_id + for instance_items in filtered_instance_items + } + # Query versions + version_docs = get_versions( + project_name, + version_ids=version_ids, + hero=True, + fields=["_id", "type", "name"] + ) + # Store versions by their id (converted to string) + version_docs_by_str_id = { + str(version_doc["_id"]): version_doc + for version_doc in version_docs + } + self._integrate_thumbnails( + filtered_instance_items, + version_docs_by_str_id, + project_name + ) + + def _prepare_instances(self, context): + context_thumbnail_path = context.get("thumbnailPath") + valid_context_thumbnail = bool( + context_thumbnail_path + and os.path.exists(context_thumbnail_path) + ) + + filtered_instances = [] + for instance in context: + instance_label = self._get_instance_label(instance) + # Skip instances without published representations + # - there is no place where to put the thumbnail + published_repres = instance.data.get("published_representations") + if not published_repres: + self.log.debug(( + "There are no published representations" + " on the instance {}." + ).format(instance_label)) + continue + + # Find thumbnail path on instance + thumbnail_path = self._get_instance_thumbnail_path( + published_repres) + if thumbnail_path: + self.log.debug(( + "Found thumbnail path for instance \"{}\"." + " Thumbnail path: {}" + ).format(instance_label, thumbnail_path)) + + elif valid_context_thumbnail: + # Use context thumbnail path if it is available + thumbnail_path = context_thumbnail_path + self.log.debug(( + "Using context thumbnail path for instance \"{}\"." + " Thumbnail path: {}" + ).format(instance_label, thumbnail_path)) + + # Skip instance if thumbnail path is not available for it + if not thumbnail_path: + self.log.debug(( + "Skipping thumbnail integration for instance \"{}\"." + " Instance and context" + " thumbnail paths are not available." + ).format(instance_label)) + continue + + version_id = str(self._get_version_id(published_repres)) + filtered_instances.append( + InstanceFilterResult(instance, thumbnail_path, version_id) + ) + return filtered_instances + + def _get_version_id(self, published_representations): + for repre_info in published_representations.values(): + return repre_info["representation"]["parent"] + + def _get_instance_thumbnail_path(self, published_representations): + thumb_repre_doc = None + for repre_info in published_representations.values(): + repre_doc = repre_info["representation"] + if repre_doc["name"].lower() == "thumbnail": + thumb_repre_doc = repre_doc + break + + if thumb_repre_doc is None: + self.log.debug( + "There is no representation with name \"thumbnail\"" + ) + return None + + path = thumb_repre_doc["data"]["path"] + if not os.path.exists(path): + self.log.warning( + "Thumbnail file cannot be found. Path: {}".format(path) + ) + return None + return os.path.normpath(path) + + def _integrate_thumbnails( + self, + filtered_instance_items, + version_docs_by_str_id, + project_name + ): + from openpype.client.server.operations import create_thumbnail + + op_session = OperationsSession() + + for instance_item in filtered_instance_items: + instance, thumbnail_path, version_id = instance_item + instance_label = self._get_instance_label(instance) + version_doc = version_docs_by_str_id.get(version_id) + if not version_doc: + self.log.warning(( + "Version entity for instance \"{}\" was not found."
+ ).format(instance_label)) + continue + + thumbnail_id = create_thumbnail(project_name, thumbnail_path) + + # Set thumbnail id for version + op_session.update_entity( + project_name, + version_doc["type"], + version_doc["_id"], + {"data.thumbnail_id": thumbnail_id} + ) + if version_doc["type"] == "hero_version": + version_name = "Hero" + else: + version_name = version_doc["name"] + self.log.debug("Setting thumbnail for version \"{}\" <{}>".format( + version_name, version_id + )) + + asset_entity = instance.data["assetEntity"] + op_session.update_entity( + project_name, + asset_entity["type"], + asset_entity["_id"], + {"data.thumbnail_id": thumbnail_id} + ) + self.log.debug("Setting thumbnail for asset \"{}\" <{}>".format( + asset_entity["name"], version_id + )) + + op_session.commit() + + def _get_instance_label(self, instance): + return ( + instance.data.get("label") + or instance.data.get("name") + or "N/A" + ) diff --git a/openpype/plugins/publish/integrate_version_attrs.py b/openpype/plugins/publish/integrate_version_attrs.py new file mode 100644 index 0000000000..ed179ae319 --- /dev/null +++ b/openpype/plugins/publish/integrate_version_attrs.py @@ -0,0 +1,93 @@ +import pyblish.api +import ayon_api + +from openpype import AYON_SERVER_ENABLED +from openpype.client.operations import OperationsSession + + +class IntegrateVersionAttributes(pyblish.api.ContextPlugin): + """Integrate version attributes from predefined key. + + Any integration after 'IntegrateAsset' can fill 'versionAttributes' with + attribute key & value to be updated on created version. + + The integration must make sure the attribute is available for the version + entity otherwise an error would be raised. + + Example of 'versionAttributes': + { + "ftrack_id": "0123456789-101112-131415", + "syncsketch_id": "987654321-012345-678910" + } + """ + + label = "Integrate Version Attributes" + order = pyblish.api.IntegratorOrder + 0.5 + + def process(self, context): + available_attributes = ayon_api.get_attributes_for_type("version") + skipped_attributes = set() + project_name = context.data["projectName"] + op_session = OperationsSession() + for instance in context: + label = self.get_instance_label(instance) + version_entity = instance.data.get("versionEntity") + if not version_entity: + continue + attributes = instance.data.get("versionAttributes") + if not attributes: + self.log.debug(( + "Skipping instance {} because it does not specify" + " version attributes to set." + ).format(label)) + continue + + filtered_attributes = {} + for attr, value in attributes.items(): + if attr not in available_attributes: + skipped_attributes.add(attr) + else: + filtered_attributes[attr] = value + + if not filtered_attributes: + self.log.debug(( + "Skipping instance {} because all version attributes were" + " filtered out." 
+ ).format(label)) + continue + + self.log.debug("Updating attributes on version {} to {}".format( + version_entity["_id"], str(filtered_attributes) + )) + op_session.update_entity( + project_name, + "version", + version_entity["_id"], + {"attrib": filtered_attributes} + ) + + if skipped_attributes: + self.log.warning(( + "Skipped version attributes integration because they're" + " not available on the server: {}" + ).format(str(skipped_attributes))) + + if len(op_session): + op_session.commit() + self.log.info("Updated version attributes") + else: + self.log.debug("There are no version attributes to update") + + @staticmethod + def get_instance_label(instance): + return ( + instance.data.get("label") + or instance.data.get("name") + or instance.data.get("subset") + or str(instance) + ) + + +# Discover the plugin only in AYON mode +if not AYON_SERVER_ENABLED: + del IntegrateVersionAttributes diff --git a/openpype/plugins/publish/preintegrate_thumbnail_representation.py b/openpype/plugins/publish/preintegrate_thumbnail_representation.py index 1c95b82c97..77bf2edba5 100644 --- a/openpype/plugins/publish/preintegrate_thumbnail_representation.py +++ b/openpype/plugins/publish/preintegrate_thumbnail_representation.py @@ -29,13 +29,12 @@ class PreIntegrateThumbnails(pyblish.api.InstancePlugin): if not repres: return - thumbnail_repre = None + thumbnail_repres = [] for repre in repres: - if repre["name"] == "thumbnail": - thumbnail_repre = repre - break + if "thumbnail" in repre.get("tags", []): + thumbnail_repres.append(repre) - if not thumbnail_repre: + if not thumbnail_repres: return family = instance.data["family"] @@ -60,14 +59,15 @@ class PreIntegrateThumbnails(pyblish.api.InstancePlugin): if not found_profile: return - thumbnail_repre.setdefault("tags", []) + for thumbnail_repre in thumbnail_repres: + thumbnail_repre.setdefault("tags", []) - if not found_profile["integrate_thumbnail"]: - if "delete" not in thumbnail_repre["tags"]: - thumbnail_repre["tags"].append("delete") - else: - if "delete" in thumbnail_repre["tags"]: - thumbnail_repre["tags"].remove("delete") + if not found_profile["integrate_thumbnail"]: + if "delete" not in thumbnail_repre["tags"]: + thumbnail_repre["tags"].append("delete") + else: + if "delete" in thumbnail_repre["tags"]: + thumbnail_repre["tags"].remove("delete") - self.log.debug( - "Thumbnail repre tags {}".format(thumbnail_repre["tags"])) + self.log.debug( + "Thumbnail repre tags {}".format(thumbnail_repre["tags"])) diff --git a/openpype/plugins/publish/validate_asset_docs.py b/openpype/plugins/publish/validate_asset_docs.py index 9a1ca5b8de..8dfd783c39 100644 --- a/openpype/plugins/publish/validate_asset_docs.py +++ b/openpype/plugins/publish/validate_asset_docs.py @@ -22,11 +22,11 @@ class ValidateAssetDocs(pyblish.api.InstancePlugin): return if instance.data.get("assetEntity"): - self.log.info("Instance has set asset document in its data.") + self.log.debug("Instance has set asset document in its data.") elif instance.data.get("newAssetPublishing"): # skip if it is editorial - self.log.info("Editorial instance is no need to check...") + self.log.debug("Editorial instance has no need to check...") else: raise PublishValidationError(( diff --git a/openpype/plugins/publish/validate_editorial_asset_name.py b/openpype/plugins/publish/validate_editorial_asset_name.py index 4f8a1abf2e..fca0d8e7f5 100644 --- a/openpype/plugins/publish/validate_editorial_asset_name.py +++ b/openpype/plugins/publish/validate_editorial_asset_name.py @@ -56,7 +56,7 @@ class 
ValidateEditorialAssetName(pyblish.api.ContextPlugin): } continue - self.log.info("correct asset: {}".format(asset)) + self.log.debug("correct asset: {}".format(asset)) if assets_missing_name: wrong_names = {} diff --git a/openpype/plugins/publish/validate_file_saved.py b/openpype/plugins/publish/validate_file_saved.py index 448eaccf57..94aadc9358 100644 --- a/openpype/plugins/publish/validate_file_saved.py +++ b/openpype/plugins/publish/validate_file_saved.py @@ -1,5 +1,7 @@ import pyblish.api +from openpype.pipeline.publish import PublishValidationError + class ValidateCurrentSaveFile(pyblish.api.ContextPlugin): """File must be saved before publishing""" @@ -12,4 +14,4 @@ class ValidateCurrentSaveFile(pyblish.api.ContextPlugin): current_file = context.data["currentFile"] if not current_file: - raise RuntimeError("File not saved") + raise PublishValidationError("File not saved") diff --git a/openpype/plugins/publish/validate_filesequences.py b/openpype/plugins/publish/validate_filesequences.py index 8a877d79bb..0ac281022d 100644 --- a/openpype/plugins/publish/validate_filesequences.py +++ b/openpype/plugins/publish/validate_filesequences.py @@ -1,5 +1,7 @@ import pyblish.api +from openpype.pipeline.publish import PublishValidationError + class ValidateFileSequences(pyblish.api.ContextPlugin): """Validates whether any file sequences were collected.""" @@ -10,4 +12,5 @@ class ValidateFileSequences(pyblish.api.ContextPlugin): label = "Validate File Sequences" def process(self, context): - assert context, "Nothing collected." + if not context: + raise PublishValidationError("Nothing collected.") diff --git a/openpype/plugins/publish/validate_intent.py b/openpype/plugins/publish/validate_intent.py index 23d57bb2b7..832c7cc0a1 100644 --- a/openpype/plugins/publish/validate_intent.py +++ b/openpype/plugins/publish/validate_intent.py @@ -1,7 +1,7 @@ -import os import pyblish.api from openpype.lib import filter_profiles +from openpype.pipeline.publish import PublishValidationError class ValidateIntent(pyblish.api.ContextPlugin): @@ -51,12 +51,10 @@ class ValidateIntent(pyblish.api.ContextPlugin): )) return - msg = ( - "Please make sure that you select the intent of this publish." - ) - intent = context.data.get("intent") or {} self.log.debug(str(intent)) intent_value = intent.get("value") if not intent_value: - raise AssertionError(msg) + raise PublishValidationError( + "Please make sure that you select the intent of this publish." + ) diff --git a/openpype/plugins/publish/validate_publish_dir.py b/openpype/plugins/publish/validate_publish_dir.py index 2f41127548..0eb93da583 100644 --- a/openpype/plugins/publish/validate_publish_dir.py +++ b/openpype/plugins/publish/validate_publish_dir.py @@ -7,12 +7,12 @@ from openpype.pipeline.publish import ( class ValidatePublishDir(pyblish.api.InstancePlugin): - """Validates if 'publishDir' is a project directory + """Validates if files are being published into a project directory - 'publishDir' is collected based on publish templates. In specific cases - ('source' template) source folder of items is used as a 'publishDir', this - validates if it is inside any project dir for the project. - (eg. files are not published from local folder, unaccessible for studio' + In specific cases ('source' template - in place publishing) source folder + of published items is used as a regular `publish` dir. + This validates if it is inside any project dir for the project. + (eg. 
files are not published from local folder, inaccessible for studio') """ @@ -44,23 +44,27 @@ class ValidatePublishDir(pyblish.api.InstancePlugin): anatomy = instance.context.data["anatomy"] + # original_dirname must be convertable to rootless path + # in other case it is path inside of root folder for the project success, _ = anatomy.find_root_template_from_path(original_dirname) - - formatting_data = { - "original_dirname": original_dirname, - } - msg = "Path '{}' not in project folder.".format(original_dirname) + \ - " Please publish from inside of project folder." if not success: - raise PublishXmlValidationError(self, msg, key="not_in_dir", - formatting_data=formatting_data) + raise PublishXmlValidationError( + plugin=self, + message=( + "Path '{}' not in project folder. Please publish from " + "inside of project folder.".format(original_dirname) + ), + key="not_in_dir", + formatting_data={"original_dirname": original_dirname} + ) def _get_template_name_from_instance(self, instance): + """Find template which will be used during integration.""" project_name = instance.context.data["projectName"] host_name = instance.context.data["hostName"] anatomy_data = instance.data["anatomyData"] family = anatomy_data["family"] - family = self.family_mapping.get("family") or family + family = self.family_mapping.get(family) or family task_info = anatomy_data.get("task") or {} return get_publish_template_name( diff --git a/openpype/plugins/publish/validate_version.py b/openpype/plugins/publish/validate_version.py index 2b919a3119..84d52fab73 100644 --- a/openpype/plugins/publish/validate_version.py +++ b/openpype/plugins/publish/validate_version.py @@ -25,16 +25,16 @@ class ValidateVersion(pyblish.api.InstancePlugin): # TODO: Remove full non-html version upon drop of old publisher msg = ( "Version '{0}' from instance '{1}' that you are " - " trying to publish is lower or equal to an existing version " - " in the database. Version in database: '{2}'." + "trying to publish is lower or equal to an existing version " + "in the database. Version in database: '{2}'." "Please version up your workfile to a higher version number " "than: '{2}'." ).format(version, instance.data["name"], latest_version) msg_html = ( "Version {0} from instance {1} that you are " - " trying to publish is lower or equal to an existing version " - " in the database. Version in database: {2}.
<br/><br/>" + "trying to publish is lower or equal to an existing version " + "in the database. Version in database: {2}.<br/><br/>
" "Please version up your workfile to a higher version number " "than: {2}." ).format(version, instance.data["name"], latest_version) diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index 56a0fe60cd..071ecfffd2 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -88,9 +88,15 @@ class PypeCommands: """ from openpype.lib import Logger - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) from openpype.modules import ModulesManager - from openpype.pipeline import install_openpype_plugins + from openpype.pipeline import ( + install_openpype_plugins, + get_global_context, + ) from openpype.tools.utils.host_tools import show_publish from openpype.tools.utils.lib import qt_app_context @@ -112,12 +118,15 @@ class PypeCommands: if not any(paths): raise RuntimeError("No publish paths specified") - if os.getenv("AVALON_APP_NAME"): + app_full_name = os.getenv("AVALON_APP_NAME") + if app_full_name: + context = get_global_context() env = get_app_environments_for_context( - os.environ["AVALON_PROJECT"], - os.environ["AVALON_ASSET"], - os.environ["AVALON_TASK"], - os.environ["AVALON_APP_NAME"] + context["project_name"], + context["asset_name"], + context["task_name"], + app_full_name, + launch_type=LaunchTypes.farm_publish, ) os.environ.update(env) @@ -156,74 +165,6 @@ class PypeCommands: log.info("Publish finished.") - @staticmethod - def remotepublishfromapp(project_name, batch_path, host_name, - user_email, targets=None): - """Opens installed variant of 'host' and run remote publish there. - - Eventually should be yanked out to Webpublisher cli. - - Currently implemented and tested for Photoshop where customer - wants to process uploaded .psd file and publish collected layers - from there. Triggered by Webpublisher. - - Checks if no other batches are running (status =='in_progress). If - so, it sleeps for SLEEP (this is separate process), - waits for WAIT_FOR seconds altogether. - - Requires installed host application on the machine. - - Runs publish process as user would, in automatic fashion. - - Args: - project_name (str): project to publish (only single context is - expected per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - host_name (str): 'photoshop' - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish_from_app - ) - - cli_publish_from_app( - project_name, batch_path, host_name, user_email, targets - ) - - @staticmethod - def remotepublish(project, batch_path, user_email, targets=None): - """Start headless publishing. - - Used to publish rendered assets, workfiles etc via Webpublisher. - Eventually should be yanked out to Webpublisher cli. - - Publish use json from passed paths argument. - - Args: - project (str): project to publish (only single context is expected - per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - - Raises: - RuntimeError: When there is no path to process. 
- """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish - ) - - cli_publish(project, batch_path, user_email, targets) - @staticmethod def extractenvironments(output_json_path, project, asset, task, app, env_group): @@ -232,11 +173,19 @@ class PypeCommands: Called by Deadline plugin to propagate environment into render jobs. """ - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) if all((project, asset, task, app)): env = get_app_environments_for_context( - project, asset, task, app, env_group + project, + asset, + task, + app, + env_group=env_group, + launch_type=LaunchTypes.farm_render, ) else: env = os.environ.copy() @@ -260,17 +209,12 @@ class PypeCommands: main(output_path, project_name, asset_name, strict) - def texture_copy(self, project, asset, path): - pass - - def run_application(self, app, project, asset, task, tools, arguments): - pass - def validate_jsons(self): pass def run_tests(self, folder, mark, pyargs, - test_data_folder, persist, app_variant, timeout, setup_only): + test_data_folder, persist, app_variant, timeout, setup_only, + mongo_url): """ Runs tests from 'folder' @@ -283,6 +227,10 @@ class PypeCommands: end app_variant (str): variant (eg 2020 for AE), empty if use latest installed version + timeout (int): explicit timeout for single test + setup_only (bool): if only preparation steps should be + triggered, no tests (useful for debugging/development) + mongo_url (str): url to Openpype Mongo database """ print("run_tests") if folder: @@ -291,7 +239,14 @@ class PypeCommands: folder = "../tests" # disable warnings and show captured stdout even if success - args = ["--disable-pytest-warnings", "-rP", folder] + args = [ + "--disable-pytest-warnings", + "--capture=sys", + "--print", + "-W ignore::DeprecationWarning", + "-rP", + folder + ] if mark: args.extend(["-m", mark]) @@ -314,38 +269,13 @@ class PypeCommands: if setup_only: args.extend(["--setup_only", setup_only]) + if mongo_url: + args.extend(["--mongo_url", mongo_url]) + print("run_tests args: {}".format(args)) import pytest pytest.main(args) - def syncserver(self, active_site): - """Start running sync_server in background. - - This functionality is available in directly in module cli commands. - `~/openpype_console module sync_server syncservice` - """ - - os.environ["OPENPYPE_LOCAL_ID"] = active_site - - def signal_handler(sig, frame): - print("You pressed Ctrl+C. 
Process ended.") - sync_server_module.server_exit() - sys.exit(0) - - signal.signal(signal.SIGINT, signal_handler) - signal.signal(signal.SIGTERM, signal_handler) - - from openpype.modules import ModulesManager - - manager = ModulesManager() - sync_server_module = manager.modules_by_name["sync_server"] - - sync_server_module.server_init() - sync_server_module.server_start() - - while True: - time.sleep(1.0) - def repack_version(self, directory): """Repacking OpenPype version.""" from openpype.tools.repack_version import VersionRepacker diff --git a/openpype/resources/__init__.py b/openpype/resources/__init__.py index 0d7778e546..b8671f517a 100644 --- a/openpype/resources/__init__.py +++ b/openpype/resources/__init__.py @@ -1,4 +1,5 @@ import os +from openpype import AYON_SERVER_ENABLED from openpype.lib.openpype_version import is_running_staging RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -40,11 +41,17 @@ def get_liberation_font_path(bold=False, italic=False): def get_openpype_production_icon_filepath(): - return get_resource("icons", "openpype_icon.png") + filename = "openpype_icon.png" + if AYON_SERVER_ENABLED: + filename = "AYON_icon.png" + return get_resource("icons", filename) def get_openpype_staging_icon_filepath(): - return get_resource("icons", "openpype_icon_staging.png") + filename = "openpype_icon_staging.png" + if AYON_SERVER_ENABLED: + filename = "AYON_icon_staging.png" + return get_resource("icons", filename) def get_openpype_icon_filepath(staging=None): @@ -60,7 +67,12 @@ def get_openpype_splash_filepath(staging=None): if staging is None: staging = is_running_staging() - if staging: + if AYON_SERVER_ENABLED: + if staging: + splash_file_name = "AYON_splash_staging.png" + else: + splash_file_name = "AYON_splash.png" + elif staging: splash_file_name = "openpype_splash_staging.png" else: splash_file_name = "openpype_splash.png" diff --git a/openpype/resources/icons/AYON_icon.png b/openpype/resources/icons/AYON_icon.png new file mode 100644 index 0000000000..ed13aeea52 Binary files /dev/null and b/openpype/resources/icons/AYON_icon.png differ diff --git a/openpype/resources/icons/AYON_icon_staging.png b/openpype/resources/icons/AYON_icon_staging.png new file mode 100644 index 0000000000..9da5b0488e Binary files /dev/null and b/openpype/resources/icons/AYON_icon_staging.png differ diff --git a/openpype/resources/icons/AYON_splash.png b/openpype/resources/icons/AYON_splash.png new file mode 100644 index 0000000000..734aefb740 Binary files /dev/null and b/openpype/resources/icons/AYON_splash.png differ diff --git a/openpype/resources/icons/AYON_splash_staging.png b/openpype/resources/icons/AYON_splash_staging.png new file mode 100644 index 0000000000..ab2537e8a8 Binary files /dev/null and b/openpype/resources/icons/AYON_splash_staging.png differ diff --git a/openpype/scripts/export_maya_ass_job.py b/openpype/scripts/export_maya_ass_job.py deleted file mode 100644 index 16e841ce96..0000000000 --- a/openpype/scripts/export_maya_ass_job.py +++ /dev/null @@ -1,105 +0,0 @@ -"""This module is used for command line exporting of ASS files. - -WARNING: -This need to be rewriten to be able use it in Pype 3! 
-""" - -import os -import argparse -import logging -import subprocess -import platform - -try: - from shutil import which -except ImportError: - # we are in python < 3.3 - def which(command): - path = os.getenv('PATH') - for p in path.split(os.path.pathsep): - p = os.path.join(p, command) - if os.path.exists(p) and os.access(p, os.X_OK): - return p - -handler = logging.basicConfig() -log = logging.getLogger("Publish Image Sequences") -log.setLevel(logging.DEBUG) - -error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" - - -def __main__(): - parser = argparse.ArgumentParser() - parser.add_argument("--paths", - nargs="*", - default=[], - help="The filepaths to publish. This can be a " - "directory or a path to a .json publish " - "configuration.") - parser.add_argument("--gui", - default=False, - action="store_true", - help="Whether to run Pyblish in GUI mode.") - - parser.add_argument("--pype", help="Pype root") - - kwargs, args = parser.parse_known_args() - - print("Running pype ...") - auto_pype_root = os.path.dirname(os.path.abspath(__file__)) - auto_pype_root = os.path.abspath(auto_pype_root + "../../../../..") - - auto_pype_root = os.environ.get('OPENPYPE_SETUP_PATH') or auto_pype_root - if os.environ.get('OPENPYPE_SETUP_PATH'): - print("Got Pype location from environment: {}".format( - os.environ.get('OPENPYPE_SETUP_PATH'))) - - pype_command = "openpype.ps1" - if platform.system().lower() == "linux": - pype_command = "pype" - elif platform.system().lower() == "windows": - pype_command = "openpype.bat" - - if kwargs.pype: - pype_root = kwargs.pype - else: - # test if pype.bat / pype is in the PATH - # if it is, which() will return its path and we use that. - # if not, we use auto_pype_root path. Caveat of that one is - # that it can be UNC path and that will not work on windows. - - pype_path = which(pype_command) - - if pype_path: - pype_root = os.path.dirname(pype_path) - else: - pype_root = auto_pype_root - - print("Set pype root to: {}".format(pype_root)) - print("Paths: {}".format(kwargs.paths or [os.getcwd()])) - - # paths = kwargs.paths or [os.environ.get("OPENPYPE_METADATA_FILE")] or [os.getcwd()] # noqa - - mayabatch = os.environ.get("AVALON_APP_NAME").replace("maya", "mayabatch") - args = [ - os.path.join(pype_root, pype_command), - "launch", - "--app", - mayabatch, - "-script", - os.path.join(pype_root, "repos", "pype", - "pype", "scripts", "export_maya_ass_sequence.mel") - ] - - print("Pype command: {}".format(" ".join(args))) - # Forcing forwaring the environment because environment inheritance does - # not always work. - # Cast all values in environment to str to be safe - env = {k: str(v) for k, v in os.environ.items()} - exit_code = subprocess.call(args, env=env) - if exit_code != 0: - raise RuntimeError("Publishing failed.") - - -if __name__ == '__main__': - __main__() diff --git a/openpype/scripts/export_maya_ass_sequence.mel b/openpype/scripts/export_maya_ass_sequence.mel deleted file mode 100644 index b3b9a8543e..0000000000 --- a/openpype/scripts/export_maya_ass_sequence.mel +++ /dev/null @@ -1,67 +0,0 @@ -/* - Script to export specified layer as ass files. - -Attributes: - - scene_file (str): Name of the scene to load. - start (int): Start frame. - end (int): End frame. - step (int): Step size. - output_path (str): File output path. - render_layer (str): Name of render layer. 
- -*/ - -$scene_file=`getenv "OPENPYPE_ASS_EXPORT_SCENE_FILE"`; -$step=`getenv "OPENPYPE_ASS_EXPORT_STEP"`; -$start=`getenv "OPENPYPE_ASS_EXPORT_START"`; -$end=`getenv "OPENPYPE_ASS_EXPORT_END"`; -$file_path=`getenv "OPENPYPE_ASS_EXPORT_OUTPUT"`; -$render_layer = `getenv "OPENPYPE_ASS_EXPORT_RENDER_LAYER"`; - -print("*** ASS Export Plugin\n"); - -if ($scene_file == "") { - print("!!! cannot determine scene file\n"); - quit -a -ex -1; -} - -if ($step == "") { - print("!!! cannot determine step size\n"); - quit -a -ex -1; -} - -if ($start == "") { - print("!!! cannot determine start frame\n"); - quit -a -ex -1; -} - -if ($end == "") { - print("!!! cannot determine end frame\n"); - quit -a -ex -1; -} - -if ($file_path == "") { - print("!!! cannot determine output file\n"); - quit -a -ex -1; -} - -if ($render_layer == "") { - print("!!! cannot determine render layer\n"); - quit -a -ex -1; -} - - -print(">>> Opening Scene [ " + $scene_file + " ]\n"); - -// open scene -file -o -f $scene_file; - -// switch to render layer -print(">>> Switching layer [ "+ $render_layer + " ]\n"); -editRenderLayerGlobals -currentRenderLayer $render_layer; - -// export -print(">>> Exporting to [ " + $file_path + " ]\n"); -arnoldExportAss -mask 255 -sl 1 -ll 1 -bb 1 -sf $start -se $end -b -fs $step; -print("--- Done\n"); diff --git a/openpype/scripts/fusion_switch_shot.py b/openpype/scripts/fusion_switch_shot.py deleted file mode 100644 index fc22f060a2..0000000000 --- a/openpype/scripts/fusion_switch_shot.py +++ /dev/null @@ -1,238 +0,0 @@ -import os -import re -import sys -import logging - -from openpype.client import get_asset_by_name, get_versions - -# Pipeline imports -from openpype.hosts.fusion import api -import openpype.hosts.fusion.api.lib as fusion_lib - -# Config imports -from openpype.lib import version_up -from openpype.pipeline import ( - install_host, - registered_host, - legacy_io, -) - -from openpype.pipeline.context_tools import get_workdir_from_session - -log = logging.getLogger("Update Slap Comp") - - -def _format_version_folder(folder): - """Format a version folder based on the filepath - - Assumption here is made that, if the path does not exists the folder - will be "v001" - - Args: - folder: file path to a folder - - Returns: - str: new version folder name - """ - - new_version = 1 - if os.path.isdir(folder): - re_version = re.compile("v\d+$") - versions = [i for i in os.listdir(folder) if os.path.isdir(i) - and re_version.match(i)] - if versions: - # ensure the "v" is not included - new_version = int(max(versions)[1:]) + 1 - - version_folder = "v{:03d}".format(new_version) - - return version_folder - - -def _get_fusion_instance(): - fusion = getattr(sys.modules["__main__"], "fusion", None) - if fusion is None: - try: - # Support for FuScript.exe, BlackmagicFusion module for py2 only - import BlackmagicFusion as bmf - fusion = bmf.scriptapp("Fusion") - except ImportError: - raise RuntimeError("Could not find a Fusion instance") - return fusion - - -def _format_filepath(session): - - project = session["AVALON_PROJECT"] - asset = session["AVALON_ASSET"] - - # Save updated slap comp - work_path = get_workdir_from_session(session) - walk_to_dir = os.path.join(work_path, "scenes", "slapcomp") - slapcomp_dir = os.path.abspath(walk_to_dir) - - # Ensure destination exists - if not os.path.isdir(slapcomp_dir): - log.warning("Folder did not exist, creating folder structure") - os.makedirs(slapcomp_dir) - - # Compute output path - new_filename = "{}_{}_slapcomp_v001.comp".format(project, asset) - 
new_filepath = os.path.join(slapcomp_dir, new_filename) - - # Create new unqiue filepath - if os.path.exists(new_filepath): - new_filepath = version_up(new_filepath) - - return new_filepath - - -def _update_savers(comp, session): - """Update all savers of the current comp to ensure the output is correct - - Args: - comp (object): current comp instance - session (dict): the current Avalon session - - Returns: - None - """ - - new_work = get_workdir_from_session(session) - renders = os.path.join(new_work, "renders") - version_folder = _format_version_folder(renders) - renders_version = os.path.join(renders, version_folder) - - comp.Print("New renders to: %s\n" % renders) - - with api.comp_lock_and_undo_chunk(comp): - savers = comp.GetToolList(False, "Saver").values() - for saver in savers: - filepath = saver.GetAttrs("TOOLST_Clip_Name")[1.0] - filename = os.path.basename(filepath) - new_path = os.path.join(renders_version, filename) - saver["Clip"] = new_path - - -def update_frame_range(comp, representations): - """Update the frame range of the comp and render length - - The start and end frame are based on the lowest start frame and the highest - end frame - - Args: - comp (object): current focused comp - representations (list) collection of dicts - - Returns: - None - - """ - - version_ids = [r["parent"] for r in representations] - project_name = legacy_io.active_project() - versions = list(get_versions(project_name, version_ids=version_ids)) - - start = min(v["data"]["frameStart"] for v in versions) - end = max(v["data"]["frameEnd"] for v in versions) - - fusion_lib.update_frame_range(start, end, comp=comp) - - -def switch(asset_name, filepath=None, new=True): - """Switch the current containers of the file to the other asset (shot) - - Args: - filepath (str): file path of the comp file - asset_name (str): name of the asset (shot) - new (bool): Save updated comp under a different name - - Returns: - comp path (str): new filepath of the updated comp - - """ - - # If filepath provided, ensure it is valid absolute path - if filepath is not None: - if not os.path.isabs(filepath): - filepath = os.path.abspath(filepath) - - assert os.path.exists(filepath), "%s must exist " % filepath - - # Assert asset name exists - # It is better to do this here then to wait till switch_shot does it - project_name = legacy_io.active_project() - asset = get_asset_by_name(project_name, asset_name) - assert asset, "Could not find '%s' in the database" % asset_name - - # Go to comp - if not filepath: - current_comp = api.get_current_comp() - assert current_comp is not None, "Could not find current comp" - else: - fusion = _get_fusion_instance() - current_comp = fusion.LoadComp(filepath, quiet=True) - assert current_comp is not None, "Fusion could not load '%s'" % filepath - - host = registered_host() - containers = list(host.ls()) - assert containers, "Nothing to update" - - representations = [] - for container in containers: - try: - representation = fusion_lib.switch_item(container, - asset_name=asset_name) - representations.append(representation) - except Exception as e: - current_comp.Print("Error in switching! 
%s\n" % e.message) - - message = "Switched %i Loaders of the %i\n" % (len(representations), - len(containers)) - current_comp.Print(message) - - # Build the session to switch to - switch_to_session = legacy_io.Session.copy() - switch_to_session["AVALON_ASSET"] = asset['name'] - - if new: - comp_path = _format_filepath(switch_to_session) - - # Update savers output based on new session - _update_savers(current_comp, switch_to_session) - else: - comp_path = version_up(filepath) - - current_comp.Print(comp_path) - - current_comp.Print("\nUpdating frame range") - update_frame_range(current_comp, representations) - - current_comp.Save(comp_path) - - return comp_path - - -if __name__ == '__main__': - - import argparse - - parser = argparse.ArgumentParser(description="Switch to a shot within an" - "existing comp file") - - parser.add_argument("--file_path", - type=str, - default=True, - help="File path of the comp to use") - - parser.add_argument("--asset_name", - type=str, - default=True, - help="Name of the asset (shot) to switch") - - args, unknown = parser.parse_args() - - install_host(api) - switch(args.asset_name, args.file_path) - - sys.exit(0) diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py index 16558642c6..c362670126 100644 --- a/openpype/scripts/ocio_wrapper.py +++ b/openpype/scripts/ocio_wrapper.py @@ -27,7 +27,7 @@ import PyOpenColorIO as ocio @click.group() def main(): - pass + pass # noqa: WPS100 @main.group() @@ -37,7 +37,17 @@ def config(): Example of use: > pyton.exe ./ocio_wrapper.py config *args """ - pass + pass # noqa: WPS100 + + +@main.group() +def colorspace(): + """Colorspace related commands group + + Example of use: + > pyton.exe ./ocio_wrapper.py config *args + """ + pass # noqa: WPS100 @config.command( @@ -70,8 +80,8 @@ def get_colorspace(in_path, out_path): out_data = _get_colorspace_data(in_path) - with open(json_path, "w") as f: - json.dump(out_data, f) + with open(json_path, "w") as f_: + json.dump(out_data, f_) print(f"Colorspace data are saved to '{json_path}'") @@ -97,8 +107,8 @@ def _get_colorspace_data(config_path): config = ocio.Config().CreateFromFile(str(config_path)) return { - c.getName(): c.getFamily() - for c in config.getColorSpaces() + c_.getName(): c_.getFamily() + for c_ in config.getColorSpaces() } @@ -132,8 +142,8 @@ def get_views(in_path, out_path): out_data = _get_views_data(in_path) - with open(json_path, "w") as f: - json.dump(out_data, f) + with open(json_path, "w") as f_: + json.dump(out_data, f_) print(f"Viewer data are saved to '{json_path}'") @@ -157,7 +167,7 @@ def _get_views_data(config_path): config = ocio.Config().CreateFromFile(str(config_path)) - data = {} + data_ = {} for display in config.getDisplays(): for view in config.getViews(display): colorspace = config.getDisplayViewColorSpaceName(display, view) @@ -165,14 +175,223 @@ def _get_views_data(config_path): if colorspace == "": colorspace = display - data[f"{display}/{view}"] = { + data_[f"{display}/{view}"] = { "display": display, "view": view, "colorspace": colorspace } - return data + return data_ +@config.command( + name="get_version", + help=( + "return major and minor version from config file " + "--config_path input arg is required" + "--out_path input arg is required" + ) +) +@click.option("--config_path", required=True, + help="path where to read ocio config file", + type=click.Path(exists=True)) +@click.option("--out_path", required=True, + help="path where to write output json file", + type=click.Path()) +def get_version(config_path, 
out_path): + """Get version of config. + + Python 2 wrapped console command + + Args: + config_path (str): ocio config file path string + out_path (str): temp json file path string + + Example of use: + > pyton.exe ./ocio_wrapper.py config get_version \ + --config_path= --out_path= + """ + json_path = Path(out_path) + + out_data = _get_version_data(config_path) + + with open(json_path, "w") as f_: + json.dump(out_data, f_) + + print(f"Config version data are saved to '{json_path}'") + + +def _get_version_data(config_path): + """Return major and minor version info. + + Args: + config_path (str): path string leading to config.ocio + + Raises: + IOError: Input config does not exist. + + Returns: + dict: minor and major keys with values + """ + config_path = Path(config_path) + + if not config_path.is_file(): + raise IOError("Input path should be `config.ocio` file") + + config = ocio.Config().CreateFromFile(str(config_path)) + + return { + "major": config.getMajorVersion(), + "minor": config.getMinorVersion() + } + + +@colorspace.command( + name="get_config_file_rules_colorspace_from_filepath", + help=( + "return colorspace from filepath " + "--config_path - ocio config file path (input arg is required) " + "--filepath - any file path (input arg is required) " + "--out_path - temp json file path (input arg is required)" + ) +) +@click.option("--config_path", required=True, + help="path where to read ocio config file", + type=click.Path(exists=True)) +@click.option("--filepath", required=True, + help="path to file to get colorspace from", + type=click.Path()) +@click.option("--out_path", required=True, + help="path where to write output json file", + type=click.Path()) +def get_config_file_rules_colorspace_from_filepath( + config_path, filepath, out_path +): + """Get colorspace from file path wrapper. + + Python 2 wrapped console command + + Args: + config_path (str): config file path string + filepath (str): path string leading to file + out_path (str): temp json file path string + + Example of use: + > pyton.exe ./ocio_wrapper.py \ + colorspace get_config_file_rules_colorspace_from_filepath \ + --config_path= --filepath= --out_path= + """ + json_path = Path(out_path) + + colorspace = _get_config_file_rules_colorspace_from_filepath( + config_path, filepath) + + with open(json_path, "w") as f_: + json.dump(colorspace, f_) + + print(f"Colorspace name is saved to '{json_path}'") + + +def _get_config_file_rules_colorspace_from_filepath(config_path, filepath): + """Return found colorspace data found in v2 file rules. + + Args: + config_path (str): path string leading to config.ocio + filepath (str): path string leading to v2 file rules + + Raises: + IOError: Input config does not exist. + + Returns: + dict: aggregated available colorspaces + """ + config_path = Path(config_path) + + if not config_path.is_file(): + raise IOError( + f"Input path `{config_path}` should be `config.ocio` file") + + config = ocio.Config().CreateFromFile(str(config_path)) + + # TODO: use `parseColorSpaceFromString` instead if ocio v1 + colorspace = config.getColorSpaceFromFilepath(str(filepath)) + + return colorspace + + +def _get_display_view_colorspace_name(config_path, display, view): + """Returns the colorspace attribute of the (display, view) pair. + + Args: + config_path (str): path string leading to config.ocio + display (str): display name e.g. "ACES" + view (str): view name e.g. "sRGB" + + + Raises: + IOError: Input config does not exist. + + Returns: + view color space name (str) e.g. 
"Output - sRGB" + """ + + config_path = Path(config_path) + + if not config_path.is_file(): + raise IOError("Input path should be `config.ocio` file") + + config = ocio.Config.CreateFromFile(str(config_path)) + colorspace = config.getDisplayViewColorSpaceName(display, view) + + return colorspace + + +@config.command( + name="get_display_view_colorspace_name", + help=( + "return default view colorspace name " + "for the given display and view " + "--path input arg is required" + ) +) +@click.option("--in_path", required=True, + help="path where to read ocio config file", + type=click.Path(exists=True)) +@click.option("--out_path", required=True, + help="path where to write output json file", + type=click.Path()) +@click.option("--display", required=True, + help="display name", + type=click.STRING) +@click.option("--view", required=True, + help="view name", + type=click.STRING) +def get_display_view_colorspace_name(in_path, out_path, + display, view): + """Aggregate view colorspace name to file. + + Wrapper command for processes without access to OpenColorIO + + Args: + in_path (str): config file path string + out_path (str): temp json file path string + display (str): display name e.g. "ACES" + view (str): view name e.g. "sRGB" + + Example of use: + > pyton.exe ./ocio_wrapper.py config \ + get_display_view_colorspace_name --in_path= \ + --out_path= --display= --view= + """ + + out_data = _get_display_view_colorspace_name(in_path, + display, + view) + + with open(out_path, "w") as f: + json.dump(out_data, f) + + print(f"Display view colorspace saved to '{out_path}'") + if __name__ == '__main__': main() diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index 085b62501c..189feaee3a 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -8,21 +8,15 @@ from string import Formatter import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffmpeg_codec_args, get_ffmpeg_format_args, convert_ffprobe_fps_value, - convert_ffprobe_fps_to_float, ) - -ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") -ffprobe_path = get_ffmpeg_tool_path("ffprobe") - - FFMPEG = ( - '"{}"%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' -).format(ffmpeg_path) + '{}%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' +).format(subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg"))) DRAWTEXT = ( "drawtext@'%(label)s'=fontfile='%(font)s':text=\\'%(text)s\\':" @@ -46,14 +40,14 @@ def _get_ffprobe_data(source): :param str source: source media file :rtype: [{}, ...] 
""" - command = [ - ffprobe_path, + command = get_ffmpeg_tool_args( + "ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", "-show_streams", source - ] + ) kwargs = { "stdout": subprocess.PIPE, } diff --git a/openpype/scripts/remote_publish.py b/openpype/scripts/remote_publish.py index 37df35e36c..d362f7abdc 100644 --- a/openpype/scripts/remote_publish.py +++ b/openpype/scripts/remote_publish.py @@ -9,4 +9,4 @@ except ImportError as exc: if __name__ == "__main__": # Perform remote publish with thorough error checking log = Logger.get_logger(__name__) - remote_publish(log, raise_error=True) + remote_publish(log) diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py new file mode 100644 index 0000000000..3ccb18111a --- /dev/null +++ b/openpype/settings/ayon_settings.py @@ -0,0 +1,1468 @@ +"""Helper functionality to convert AYON settings to OpenPype v3 settings. + +The settings are converted, so we can use v3 code with AYON settings. Once +the code of and addon is converted to full AYON addon which expect AYON +settings the conversion function can be removed. + +The conversion is hardcoded -> there is no other way how to achieve the result. + +Main entrypoints are functions: +- convert_project_settings - convert settings to project settings +- convert_system_settings - convert settings to system settings +# Both getters cache values +- get_ayon_project_settings - replacement for 'get_project_settings' +- get_ayon_system_settings - replacement for 'get_system_settings' +""" +import os +import collections +import json +import copy +import time + +import six +import ayon_api + + +def _convert_color(color_value): + if isinstance(color_value, six.string_types): + color_value = color_value.lstrip("#") + color_value_len = len(color_value) + _color_value = [] + for idx in range(color_value_len // 2): + _color_value.append(int(color_value[idx:idx + 2], 16)) + for _ in range(4 - len(_color_value)): + _color_value.append(255) + return _color_value + + if isinstance(color_value, list): + # WARNING R,G,B can be 'int' or 'float' + # - 'float' variant is using 'int' for min: 0 and max: 1 + if len(color_value) == 3: + # Add alpha + color_value.append(255) + else: + # Convert float alha to int + alpha = int(color_value[3] * 255) + if alpha > 255: + alpha = 255 + elif alpha < 0: + alpha = 0 + color_value[3] = alpha + return color_value + + +def _convert_host_imageio(host_settings): + if "imageio" not in host_settings: + return + + # --- imageio --- + ayon_imageio = host_settings["imageio"] + # TODO remove when fixed on server + if "ocio_config" in ayon_imageio["ocio_config"]: + ayon_imageio["ocio_config"]["filepath"] = ( + ayon_imageio["ocio_config"].pop("ocio_config") + ) + # Convert file rules + imageio_file_rules = ayon_imageio["file_rules"] + new_rules = {} + for rule in imageio_file_rules["rules"]: + name = rule.pop("name") + new_rules[name] = rule + imageio_file_rules["rules"] = new_rules + + +def _convert_applications_groups(groups, clear_metadata): + environment_key = "environment" + if isinstance(groups, dict): + new_groups = [] + for name, item in groups.items(): + item["name"] = name + new_groups.append(item) + groups = new_groups + + output = {} + group_dynamic_labels = {} + for group in groups: + group_name = group.pop("name") + if "label" in group: + group_dynamic_labels[group_name] = group["label"] + + tool_group_envs = group[environment_key] + if isinstance(tool_group_envs, six.string_types): + group[environment_key] = json.loads(tool_group_envs) + + 
variants = {} + variant_dynamic_labels = {} + for variant in group.pop("variants"): + variant_name = variant.pop("name") + label = variant.get("label") + if label and label != variant_name: + variant_dynamic_labels[variant_name] = label + variant_envs = variant[environment_key] + if isinstance(variant_envs, six.string_types): + variant[environment_key] = json.loads(variant_envs) + variants[variant_name] = variant + group["variants"] = variants + + if not clear_metadata: + variants["__dynamic_keys_labels__"] = variant_dynamic_labels + output[group_name] = group + + if not clear_metadata: + output["__dynamic_keys_labels__"] = group_dynamic_labels + return output + + +def _convert_applications_system_settings( + ayon_settings, output, clear_metadata +): + # Addon settings + addon_settings = ayon_settings["applications"] + + # Remove project settings + addon_settings.pop("only_available", None) + + # Applications settings + ayon_apps = addon_settings["applications"] + + additional_apps = ayon_apps.pop("additional_apps") + applications = _convert_applications_groups( + ayon_apps, clear_metadata + ) + applications["additional_apps"] = _convert_applications_groups( + additional_apps, clear_metadata + ) + + # Tools settings + tools = _convert_applications_groups( + addon_settings["tool_groups"], clear_metadata + ) + + output["applications"] = applications + output["tools"] = {"tool_groups": tools} + + +def _convert_general(ayon_settings, output, default_settings): + # TODO get studio name/code + core_settings = ayon_settings["core"] + environments = core_settings["environments"] + if isinstance(environments, six.string_types): + environments = json.loads(environments) + + general = default_settings["general"] + general.update({ + "log_to_server": False, + "studio_name": core_settings["studio_name"], + "studio_code": core_settings["studio_code"], + "environment": environments + }) + output["general"] = general + + +def _convert_kitsu_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("kitsu") is not None + kitsu_settings = default_settings["modules"]["kitsu"] + kitsu_settings["enabled"] = enabled + if enabled: + kitsu_settings["server"] = ayon_settings["kitsu"]["server"] + output["modules"]["kitsu"] = kitsu_settings + + +def _convert_timers_manager_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("timers_manager") is not None + manager_settings = default_settings["modules"]["timers_manager"] + manager_settings["enabled"] = enabled + if enabled: + ayon_manager = ayon_settings["timers_manager"] + manager_settings.update({ + key: ayon_manager[key] + for key in { + "auto_stop", + "full_time", + "message_time", + "disregard_publishing" + } + }) + output["modules"]["timers_manager"] = manager_settings + + +def _convert_clockify_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("clockify") is not None + clockify_settings = default_settings["modules"]["clockify"] + clockify_settings["enabled"] = enabled + if enabled: + clockify_settings["workspace_name"] = ( + ayon_settings["clockify"]["workspace_name"] + ) + output["modules"]["clockify"] = clockify_settings + + +def _convert_deadline_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("deadline") is not None + deadline_settings = default_settings["modules"]["deadline"] + deadline_settings["enabled"] = enabled + if 
enabled: + ayon_deadline = ayon_settings["deadline"] + deadline_settings["deadline_urls"] = { + item["name"]: item["value"] + for item in ayon_deadline["deadline_urls"] + } + + output["modules"]["deadline"] = deadline_settings + + +def _convert_muster_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("muster") is not None + muster_settings = default_settings["modules"]["muster"] + muster_settings["enabled"] = enabled + if enabled: + ayon_muster = ayon_settings["muster"] + muster_settings["MUSTER_REST_URL"] = ayon_muster["MUSTER_REST_URL"] + muster_settings["templates_mapping"] = { + item["name"]: item["value"] + for item in ayon_muster["templates_mapping"] + } + output["modules"]["muster"] = muster_settings + + +def _convert_royalrender_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("royalrender") is not None + rr_settings = default_settings["modules"]["royalrender"] + rr_settings["enabled"] = enabled + if enabled: + ayon_royalrender = ayon_settings["royalrender"] + rr_settings["rr_paths"] = { + item["name"]: item["value"] + for item in ayon_royalrender["rr_paths"] + } + output["modules"]["royalrender"] = rr_settings + + +def _convert_modules_system( + ayon_settings, output, addon_versions, default_settings +): + # TODO add all modules + # TODO add 'enabled' values + for func in ( + _convert_kitsu_system_settings, + _convert_timers_manager_system_settings, + _convert_clockify_system_settings, + _convert_deadline_system_settings, + _convert_muster_system_settings, + _convert_royalrender_system_settings, + ): + func(ayon_settings, output, addon_versions, default_settings) + + modules_settings = output["modules"] + for module_name in ( + "sync_server", + "log_viewer", + "standalonepublish_tool", + "project_manager", + "job_queue", + "avalon", + "addon_paths", + ): + settings = default_settings["modules"][module_name] + if "enabled" in settings: + settings["enabled"] = False + modules_settings[module_name] = settings + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + # Make sure addons have access to settings in initialization + # - ModulesManager passes only modules settings into initialization + if key not in modules_settings: + modules_settings[key] = value + + +def convert_system_settings(ayon_settings, default_settings, addon_versions): + default_settings = copy.deepcopy(default_settings) + output = { + "modules": {} + } + if "applications" in ayon_settings: + _convert_applications_system_settings(ayon_settings, output, False) + + if "core" in ayon_settings: + _convert_general(ayon_settings, output, default_settings) + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + for key, value in default_settings.items(): + if key not in output: + output[key] = value + + _convert_modules_system( + ayon_settings, + output, + addon_versions, + default_settings + ) + return output + + +# --------- Project settings --------- +def _convert_applications_project_settings(ayon_settings, output): + if "applications" not in ayon_settings: + return + + output["applications"] = { + "only_available": ayon_settings["applications"]["only_available"] + } + + +def _convert_blender_project_settings(ayon_settings, output): + if "blender" not in ayon_settings: + return + ayon_blender = ayon_settings["blender"] + _convert_host_imageio(ayon_blender) + + ayon_publish = ayon_blender["publish"] + + for plugin in 
("ExtractThumbnail", "ExtractPlayblast"): + plugin_settings = ayon_publish[plugin] + plugin_settings["presets"] = json.loads(plugin_settings["presets"]) + + output["blender"] = ayon_blender + + +def _convert_celaction_project_settings(ayon_settings, output): + if "celaction" not in ayon_settings: + return + + ayon_celaction = ayon_settings["celaction"] + _convert_host_imageio(ayon_celaction) + + output["celaction"] = ayon_celaction + + +def _convert_flame_project_settings(ayon_settings, output): + if "flame" not in ayon_settings: + return + + ayon_flame = ayon_settings["flame"] + + ayon_publish_flame = ayon_flame["publish"] + # Plugin 'ExtractSubsetResources' renamed to 'ExtractProductResources' + if "ExtractSubsetResources" in ayon_publish_flame: + ayon_product_resources = ayon_publish_flame["ExtractSubsetResources"] + else: + ayon_product_resources = ( + ayon_publish_flame.pop("ExtractProductResources")) + ayon_publish_flame["ExtractSubsetResources"] = ayon_product_resources + + # 'ExtractSubsetResources' changed model of 'export_presets_mapping' + # - some keys were moved under 'other_parameters' + new_subset_resources = {} + for item in ayon_product_resources.pop("export_presets_mapping"): + name = item.pop("name") + if "other_parameters" in item: + other_parameters = item.pop("other_parameters") + item.update(other_parameters) + new_subset_resources[name] = item + + ayon_product_resources["export_presets_mapping"] = new_subset_resources + + # 'imageio' changed model + # - missing subkey 'project' which is in root of 'imageio' model + _convert_host_imageio(ayon_flame) + ayon_imageio_flame = ayon_flame["imageio"] + if "project" not in ayon_imageio_flame: + profile_mapping = ayon_imageio_flame.pop("profilesMapping") + ayon_flame["imageio"] = { + "project": ayon_imageio_flame, + "profilesMapping": profile_mapping + } + + ayon_load_flame = ayon_flame["load"] + for plugin_name in ("LoadClip", "LoadClipBatch"): + plugin_settings = ayon_load_flame[plugin_name] + plugin_settings["families"] = plugin_settings.pop("product_types") + plugin_settings["clip_name_template"] = ( + plugin_settings["clip_name_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + plugin_settings["layer_rename_template"] = ( + plugin_settings["layer_rename_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + + output["flame"] = ayon_flame + + +def _convert_fusion_project_settings(ayon_settings, output): + if "fusion" not in ayon_settings: + return + + ayon_fusion = ayon_settings["fusion"] + _convert_host_imageio(ayon_fusion) + + ayon_imageio_fusion = ayon_fusion["imageio"] + + if "ocioSettings" in ayon_imageio_fusion: + ayon_ocio_setting = ayon_imageio_fusion.pop("ocioSettings") + paths = ayon_ocio_setting.pop("ocioPathModel") + for key, value in tuple(paths.items()): + new_value = [] + if value: + new_value.append(value) + paths[key] = new_value + + ayon_ocio_setting["configFilePath"] = paths + ayon_imageio_fusion["ocio"] = ayon_ocio_setting + elif "ocio" in ayon_imageio_fusion: + paths = ayon_imageio_fusion["ocio"].pop("configFilePath") + for key, value in tuple(paths.items()): + new_value = [] + if value: + new_value.append(value) + paths[key] = new_value + ayon_imageio_fusion["ocio"]["configFilePath"] = paths + + _convert_host_imageio(ayon_imageio_fusion) + + ayon_create_saver = ayon_fusion["create"]["CreateSaver"] + ayon_create_saver["temp_rendering_path_template"] = ( + ayon_create_saver["temp_rendering_path_template"] + 
.replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + .replace("{folder[name]}", "{asset}") + .replace("{task[name]}", "{task}") + ) + + output["fusion"] = ayon_fusion + + +def _convert_maya_project_settings(ayon_settings, output): + if "maya" not in ayon_settings: + return + + ayon_maya = ayon_settings["maya"] + + # Change key of render settings + ayon_maya["RenderSettings"] = ayon_maya.pop("render_settings") + + # Convert extensions mapping + ayon_maya["ext_mapping"] = { + item["name"]: item["value"] + for item in ayon_maya["ext_mapping"] + } + + # Publish UI filters + new_filters = {} + for item in ayon_maya["filters"]: + new_filters[item["name"]] = { + subitem["name"]: subitem["value"] + for subitem in item["value"] + } + ayon_maya["filters"] = new_filters + + # Maya dirmap + ayon_maya_dirmap = ayon_maya.pop("maya_dirmap") + ayon_maya_dirmap_path = ayon_maya_dirmap["paths"] + ayon_maya_dirmap_path["source-path"] = ( + ayon_maya_dirmap_path.pop("source_path") + ) + ayon_maya_dirmap_path["destination-path"] = ( + ayon_maya_dirmap_path.pop("destination_path") + ) + ayon_maya["maya-dirmap"] = ayon_maya_dirmap + + # Create plugins + ayon_create = ayon_maya["create"] + ayon_create_static_mesh = ayon_create["CreateUnrealStaticMesh"] + if "static_mesh_prefixes" in ayon_create_static_mesh: + ayon_create_static_mesh["static_mesh_prefix"] = ( + ayon_create_static_mesh.pop("static_mesh_prefixes") + ) + + # --- Publish (START) --- + ayon_publish = ayon_maya["publish"] + try: + attributes = json.loads( + ayon_publish["ValidateAttributes"]["attributes"] + ) + except ValueError: + attributes = {} + ayon_publish["ValidateAttributes"]["attributes"] = attributes + + try: + SUFFIX_NAMING_TABLE = json.loads( + ayon_publish + ["ValidateTransformNamingSuffix"] + ["SUFFIX_NAMING_TABLE"] + ) + except ValueError: + SUFFIX_NAMING_TABLE = {} + ayon_publish["ValidateTransformNamingSuffix"]["SUFFIX_NAMING_TABLE"] = ( + SUFFIX_NAMING_TABLE + ) + + validate_frame_range = ayon_publish["ValidateFrameRange"] + if "exclude_product_types" in validate_frame_range: + validate_frame_range["exclude_families"] = ( + validate_frame_range.pop("exclude_product_types")) + + # Extract playblast capture settings + validate_rendern_settings = ayon_publish["ValidateRenderSettings"] + for key in ( + "arnold_render_attributes", + "vray_render_attributes", + "redshift_render_attributes", + "renderman_render_attributes", + ): + if key not in validate_rendern_settings: + continue + validate_rendern_settings[key] = [ + [item["type"], item["value"]] + for item in validate_rendern_settings[key] + ] + + plugin_path_attributes = ayon_publish["ValidatePluginPathAttributes"] + plugin_path_attributes["attribute"] = { + item["name"]: item["value"] + for item in plugin_path_attributes["attribute"] + } + + ayon_capture_preset = ayon_publish["ExtractPlayblast"]["capture_preset"] + display_options = ayon_capture_preset["DisplayOptions"] + for key in ("background", "backgroundBottom", "backgroundTop"): + display_options[key] = _convert_color(display_options[key]) + + for src_key, dst_key in ( + ("DisplayOptions", "Display Options"), + ("ViewportOptions", "Viewport Options"), + ("CameraOptions", "Camera Options"), + ): + ayon_capture_preset[dst_key] = ayon_capture_preset.pop(src_key) + + viewport_options = ayon_capture_preset["Viewport Options"] + viewport_options["pluginObjects"] = { + item["name"]: item["value"] + for item in viewport_options["pluginObjects"] + } + + # Extract Camera Alembic bake attributes + try: + 
bake_attributes = json.loads( + ayon_publish["ExtractCameraAlembic"]["bake_attributes"] + ) + except ValueError: + bake_attributes = [] + ayon_publish["ExtractCameraAlembic"]["bake_attributes"] = bake_attributes + + # --- Publish (END) --- + for renderer_settings in ayon_maya["RenderSettings"].values(): + if ( + not isinstance(renderer_settings, dict) + or "additional_options" not in renderer_settings + ): + continue + renderer_settings["additional_options"] = [ + [item["attribute"], item["value"]] + for item in renderer_settings["additional_options"] + ] + + # Workfile build + ayon_workfile_build = ayon_maya["workfile_build"] + for item in ayon_workfile_build["profiles"]: + for key in ("current_context", "linked_assets"): + for subitem in item[key]: + if "families" in subitem: + break + subitem["families"] = subitem.pop("product_types") + subitem["subset_name_filters"] = subitem.pop( + "product_name_filters") + + _convert_host_imageio(ayon_maya) + + ayon_maya_load = ayon_maya["load"] + load_colors = ayon_maya_load["colors"] + for key, color in tuple(load_colors.items()): + load_colors[key] = _convert_color(color) + + reference_loader = ayon_maya_load["reference_loader"] + reference_loader["namespace"] = ( + reference_loader["namespace"] + .replace("{product[name]}", "{subset}") + ) + + if ayon_maya_load.get("import_loader"): + import_loader = ayon_maya_load["import_loader"] + import_loader["namespace"] = ( + import_loader["namespace"] + .replace("{product[name]}", "{subset}") + ) + + output["maya"] = ayon_maya + + +def _convert_3dsmax_project_settings(ayon_settings, output): + if "max" not in ayon_settings: + return + + ayon_max = ayon_settings["max"] + _convert_host_imageio(ayon_max) + if "PointCloud" in ayon_max: + point_cloud_attribute = ayon_max["PointCloud"]["attribute"] + new_point_cloud_attribute = { + item["name"]: item["value"] + for item in point_cloud_attribute + } + ayon_max["PointCloud"]["attribute"] = new_point_cloud_attribute + + output["max"] = ayon_max + + +def _convert_nuke_knobs(knobs): + new_knobs = [] + for knob in knobs: + knob_type = knob["type"] + + if knob_type == "boolean": + knob_type = "bool" + + if knob_type != "bool": + value = knob[knob_type] + elif knob_type in knob: + value = knob[knob_type] + else: + value = knob["boolean"] + + new_knob = { + "type": knob_type, + "name": knob["name"], + } + new_knobs.append(new_knob) + + if knob_type == "formatable": + new_knob["template"] = value["template"] + new_knob["to_type"] = value["to_type"] + continue + + value_key = "value" + if knob_type == "expression": + value_key = "expression" + + elif knob_type == "color_gui": + value = _convert_color(value) + + elif knob_type == "vector_2d": + value = [value["x"], value["y"]] + + elif knob_type == "vector_3d": + value = [value["x"], value["y"], value["z"]] + + elif knob_type == "box": + value = [value["x"], value["y"], value["r"], value["t"]] + + new_knob[value_key] = value + return new_knobs + + +def _convert_nuke_project_settings(ayon_settings, output): + if "nuke" not in ayon_settings: + return + + ayon_nuke = ayon_settings["nuke"] + + # --- Dirmap --- + dirmap = ayon_nuke.pop("dirmap") + for src_key, dst_key in ( + ("source_path", "source-path"), + ("destination_path", "destination-path"), + ): + dirmap["paths"][dst_key] = dirmap["paths"].pop(src_key) + ayon_nuke["nuke-dirmap"] = dirmap + + # --- Filters --- + new_gui_filters = {} + for item in ayon_nuke.pop("filters"): + subvalue = {} + key = item["name"] + for subitem in item["value"]: + subvalue[subitem["name"]] = 
subitem["value"] + new_gui_filters[key] = subvalue + ayon_nuke["filters"] = new_gui_filters + + # --- Load --- + ayon_load = ayon_nuke["load"] + ayon_load["LoadClip"]["_representations"] = ( + ayon_load["LoadClip"].pop("representations_include") + ) + ayon_load["LoadImage"]["_representations"] = ( + ayon_load["LoadImage"].pop("representations_include") + ) + + # --- Create --- + ayon_create = ayon_nuke["create"] + for creator_name in ( + "CreateWritePrerender", + "CreateWriteImage", + "CreateWriteRender", + ): + create_plugin_settings = ayon_create[creator_name] + create_plugin_settings["temp_rendering_path_template"] = ( + create_plugin_settings["temp_rendering_path_template"] + .replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + .replace("{task[name]}", "{task}") + .replace("{folder[name]}", "{asset}") + ) + new_prenodes = {} + for prenode in create_plugin_settings["prenodes"]: + name = prenode.pop("name") + prenode["knobs"] = _convert_nuke_knobs(prenode["knobs"]) + new_prenodes[name] = prenode + + create_plugin_settings["prenodes"] = new_prenodes + + # --- Publish --- + ayon_publish = ayon_nuke["publish"] + slate_mapping = ayon_publish["ExtractSlateFrame"]["key_value_mapping"] + for key in tuple(slate_mapping.keys()): + value = slate_mapping[key] + slate_mapping[key] = [value["enabled"], value["template"]] + + ayon_publish["ValidateKnobs"]["knobs"] = json.loads( + ayon_publish["ValidateKnobs"]["knobs"] + ) + + new_review_data_outputs = {} + outputs_settings = [] + # Check deprecated ExtractReviewDataMov + # settings for backwards compatibility + deprecrated_review_settings = ayon_publish["ExtractReviewDataMov"] + current_review_settings = ( + ayon_publish.get("ExtractReviewIntermediates") + ) + if deprecrated_review_settings["enabled"]: + outputs_settings = deprecrated_review_settings["outputs"] + elif current_review_settings is None: + pass + elif current_review_settings["enabled"]: + outputs_settings = current_review_settings["outputs"] + + for item in outputs_settings: + item_filter = item["filter"] + if "product_names" in item_filter: + item_filter["subsets"] = item_filter.pop("product_names") + item_filter["families"] = item_filter.pop("product_types") + + reformat_nodes_config = item.get("reformat_nodes_config") or {} + reposition_nodes = reformat_nodes_config.get( + "reposition_nodes") or [] + + for reposition_node in reposition_nodes: + if "knobs" not in reposition_node: + continue + reposition_node["knobs"] = _convert_nuke_knobs( + reposition_node["knobs"] + ) + + name = item.pop("name") + new_review_data_outputs[name] = item + + if deprecrated_review_settings["enabled"]: + deprecrated_review_settings["outputs"] = new_review_data_outputs + elif current_review_settings["enabled"]: + current_review_settings["outputs"] = new_review_data_outputs + + collect_instance_data = ayon_publish["CollectInstanceData"] + if "sync_workfile_version_on_product_types" in collect_instance_data: + collect_instance_data["sync_workfile_version_on_families"] = ( + collect_instance_data.pop( + "sync_workfile_version_on_product_types")) + + # TODO 'ExtractThumbnail' does not have ideal schema in v3 + ayon_extract_thumbnail = ayon_publish["ExtractThumbnail"] + new_thumbnail_nodes = {} + for item in ayon_extract_thumbnail["nodes"]: + name = item["nodeclass"] + value = [] + for knob in _convert_nuke_knobs(item["knobs"]): + knob_name = knob["name"] + # This may crash + if knob["type"] == "expression": + knob_value = knob["expression"] + else: + knob_value = knob["value"] + 
value.append([knob_name, knob_value]) + new_thumbnail_nodes[name] = value + + ayon_extract_thumbnail["nodes"] = new_thumbnail_nodes + + if "reposition_nodes" in ayon_extract_thumbnail: + for item in ayon_extract_thumbnail["reposition_nodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + + # --- ImageIO --- + # NOTE 'monitorOutLut' is maybe not yet in v3 (ut should be) + _convert_host_imageio(ayon_nuke) + ayon_imageio = ayon_nuke["imageio"] + for item in ayon_imageio["nodes"]["requiredNodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + for item in ayon_imageio["nodes"]["overrideNodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + + output["nuke"] = ayon_nuke + + +def _convert_hiero_project_settings(ayon_settings, output): + if "hiero" not in ayon_settings: + return + + ayon_hiero = ayon_settings["hiero"] + _convert_host_imageio(ayon_hiero) + + new_gui_filters = {} + for item in ayon_hiero.pop("filters"): + subvalue = {} + key = item["name"] + for subitem in item["value"]: + subvalue[subitem["name"]] = subitem["value"] + new_gui_filters[key] = subvalue + ayon_hiero["filters"] = new_gui_filters + + ayon_load_clip = ayon_hiero["load"]["LoadClip"] + if "product_types" in ayon_load_clip: + ayon_load_clip["families"] = ayon_load_clip.pop("product_types") + + ayon_load_clip = ayon_hiero["load"]["LoadClip"] + ayon_load_clip["clip_name_template"] = ( + ayon_load_clip["clip_name_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + + output["hiero"] = ayon_hiero + + +def _convert_photoshop_project_settings(ayon_settings, output): + if "photoshop" not in ayon_settings: + return + + ayon_photoshop = ayon_settings["photoshop"] + _convert_host_imageio(ayon_photoshop) + + ayon_publish_photoshop = ayon_photoshop["publish"] + + ayon_colorcoded = ayon_publish_photoshop["CollectColorCodedInstances"] + if "flatten_product_type_template" in ayon_colorcoded: + ayon_colorcoded["flatten_subset_template"] = ( + ayon_colorcoded.pop("flatten_product_type_template")) + + collect_review = ayon_publish_photoshop["CollectReview"] + if "active" in collect_review: + collect_review["publish"] = collect_review.pop("active") + + output["photoshop"] = ayon_photoshop + + +def _convert_tvpaint_project_settings(ayon_settings, output): + if "tvpaint" not in ayon_settings: + return + ayon_tvpaint = ayon_settings["tvpaint"] + + _convert_host_imageio(ayon_tvpaint) + + filters = {} + for item in ayon_tvpaint["filters"]: + value = item["value"] + try: + value = json.loads(value) + + except ValueError: + value = {} + filters[item["name"]] = value + ayon_tvpaint["filters"] = filters + + ayon_publish_settings = ayon_tvpaint["publish"] + for plugin_name in ( + "ValidateProjectSettings", + "ValidateMarks", + "ValidateStartFrame", + "ValidateAssetName", + ): + ayon_value = ayon_publish_settings[plugin_name] + for src_key, dst_key in ( + ("action_enabled", "optional"), + ("action_enable", "active"), + ): + if src_key in ayon_value: + ayon_value[dst_key] = ayon_value.pop(src_key) + + extract_sequence_setting = ayon_publish_settings["ExtractSequence"] + extract_sequence_setting["review_bg"] = _convert_color( + extract_sequence_setting["review_bg"] + ) + + output["tvpaint"] = ayon_tvpaint + + +def _convert_traypublisher_project_settings(ayon_settings, output): + if "traypublisher" not in ayon_settings: + return + + ayon_traypublisher = ayon_settings["traypublisher"] + + _convert_host_imageio(ayon_traypublisher) + + ayon_editorial_simple = ( + 
ayon_traypublisher["editorial_creators"]["editorial_simple"] + ) + # Subset -> Product type conversion + if "product_type_presets" in ayon_editorial_simple: + family_presets = ayon_editorial_simple.pop("product_type_presets") + for item in family_presets: + item["family"] = item.pop("product_type") + ayon_editorial_simple["family_presets"] = family_presets + + if "shot_metadata_creator" in ayon_editorial_simple: + shot_metadata_creator = ayon_editorial_simple.pop( + "shot_metadata_creator" + ) + if isinstance(shot_metadata_creator["clip_name_tokenizer"], dict): + shot_metadata_creator["clip_name_tokenizer"] = [ + {"name": "_sequence_", "regex": "(sc\\d{3})"}, + {"name": "_shot_", "regex": "(sh\\d{3})"}, + ] + ayon_editorial_simple.update(shot_metadata_creator) + + ayon_editorial_simple["clip_name_tokenizer"] = { + item["name"]: item["regex"] + for item in ayon_editorial_simple["clip_name_tokenizer"] + } + + if "shot_subset_creator" in ayon_editorial_simple: + ayon_editorial_simple.update( + ayon_editorial_simple.pop("shot_subset_creator")) + for item in ayon_editorial_simple["shot_hierarchy"]["parents"]: + item["type"] = item.pop("parent_type") + + # Simple creators + ayon_simple_creators = ayon_traypublisher["simple_creators"] + for item in ayon_simple_creators: + if "product_type" not in item: + break + item["family"] = item.pop("product_type") + + shot_add_tasks = ayon_editorial_simple["shot_add_tasks"] + if isinstance(shot_add_tasks, dict): + shot_add_tasks = [] + new_shot_add_tasks = { + item["name"]: item["task_type"] + for item in shot_add_tasks + } + ayon_editorial_simple["shot_add_tasks"] = new_shot_add_tasks + + output["traypublisher"] = ayon_traypublisher + + +def _convert_webpublisher_project_settings(ayon_settings, output): + if "webpublisher" not in ayon_settings: + return + + ayon_webpublisher = ayon_settings["webpublisher"] + _convert_host_imageio(ayon_webpublisher) + + ayon_publish = ayon_webpublisher["publish"] + + ayon_collect_files = ayon_publish["CollectPublishedFiles"] + ayon_collect_files["task_type_to_family"] = { + item["name"]: item["value"] + for item in ayon_collect_files["task_type_to_family"] + } + + output["webpublisher"] = ayon_webpublisher + + +def _convert_deadline_project_settings(ayon_settings, output): + if "deadline" not in ayon_settings: + return + + ayon_deadline = ayon_settings["deadline"] + + for key in ("deadline_urls",): + ayon_deadline.pop(key) + + ayon_deadline_publish = ayon_deadline["publish"] + limit_groups = { + item["name"]: item["value"] + for item in ayon_deadline_publish["NukeSubmitDeadline"]["limit_groups"] + } + ayon_deadline_publish["NukeSubmitDeadline"]["limit_groups"] = limit_groups + + maya_submit = ayon_deadline_publish["MayaSubmitDeadline"] + for json_key in ("jobInfo", "pluginInfo"): + src_text = maya_submit.pop(json_key) + try: + value = json.loads(src_text) + except ValueError: + value = {} + maya_submit[json_key] = value + + nuke_submit = ayon_deadline_publish["NukeSubmitDeadline"] + nuke_submit["env_search_replace_values"] = { + item["name"]: item["value"] + for item in nuke_submit.pop("env_search_replace_values") + } + nuke_submit["limit_groups"] = { + item["name"]: item["value"] for item in nuke_submit.pop("limit_groups") + } + + process_subsetted_job = ayon_deadline_publish["ProcessSubmittedJobOnFarm"] + process_subsetted_job["aov_filter"] = { + item["name"]: item["value"] + for item in process_subsetted_job.pop("aov_filter") + } + + output["deadline"] = ayon_deadline + + +def 
_convert_royalrender_project_settings(ayon_settings, output): + if "royalrender" not in ayon_settings: + return + ayon_royalrender = ayon_settings["royalrender"] + rr_paths = ayon_royalrender.get("selected_rr_paths", []) + + output["royalrender"] = { + "publish": ayon_royalrender["publish"], + "rr_paths": rr_paths, + } + + +def _convert_kitsu_project_settings(ayon_settings, output): + if "kitsu" not in ayon_settings: + return + + ayon_kitsu_settings = ayon_settings["kitsu"] + ayon_kitsu_settings.pop("server") + + integrate_note = ayon_kitsu_settings["publish"]["IntegrateKitsuNote"] + status_change_conditions = integrate_note["status_change_conditions"] + if "product_type_requirements" in status_change_conditions: + status_change_conditions["family_requirements"] = ( + status_change_conditions.pop("product_type_requirements")) + + output["kitsu"] = ayon_kitsu_settings + + +def _convert_shotgrid_project_settings(ayon_settings, output): + if "shotgrid" not in ayon_settings: + return + + ayon_shotgrid = ayon_settings["shotgrid"] + # This means that a different variant of addon is used + if "leecher_backend_url" not in ayon_shotgrid: + return + + for key in { + "leecher_backend_url", + "filter_projects_by_login", + "shotgrid_settings", + "leecher_manager_url", + }: + ayon_shotgrid.pop(key) + + asset_field = ayon_shotgrid["fields"]["asset"] + asset_field["type"] = asset_field.pop("asset_type") + + task_field = ayon_shotgrid["fields"]["task"] + if "task" in task_field: + task_field["step"] = task_field.pop("task") + + output["shotgrid"] = ayon_settings["shotgrid"] + + +def _convert_slack_project_settings(ayon_settings, output): + if "slack" not in ayon_settings: + return + + ayon_slack = ayon_settings["slack"] + ayon_slack.pop("enabled", None) + for profile in ayon_slack["publish"]["CollectSlackFamilies"]["profiles"]: + profile["tasks"] = profile.pop("task_names") + profile["subsets"] = profile.pop("subset_names") + + output["slack"] = ayon_slack + + +def _convert_global_project_settings(ayon_settings, output, default_settings): + if "core" not in ayon_settings: + return + + ayon_core = ayon_settings["core"] + + _convert_host_imageio(ayon_core) + + for key in ( + "environments", + "studio_name", + "studio_code", + ): + ayon_core.pop(key, None) + + # Publish conversion + ayon_publish = ayon_core["publish"] + + ayon_collect_audio = ayon_publish["CollectAudio"] + if "audio_product_name" in ayon_collect_audio: + ayon_collect_audio["audio_subset_name"] = ( + ayon_collect_audio.pop("audio_product_name")) + + for profile in ayon_publish["ExtractReview"]["profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + new_outputs = {} + for output_def in profile.pop("outputs"): + name = output_def.pop("name") + new_outputs[name] = output_def + + output_def_filter = output_def["filter"] + if "product_names" in output_def_filter: + output_def_filter["subsets"] = ( + output_def_filter.pop("product_names")) + + for color_key in ("overscan_color", "bg_color"): + output_def[color_key] = _convert_color(output_def[color_key]) + + letter_box = output_def["letter_box"] + for color_key in ("fill_color", "line_color"): + letter_box[color_key] = _convert_color(letter_box[color_key]) + + if "output_width" in output_def: + output_def["width"] = output_def.pop("output_width") + + if "output_height" in output_def: + output_def["height"] = output_def.pop("output_height") + + profile["outputs"] = new_outputs + + # ExtractOIIOTranscode plugin + extract_oiio_transcode = 
ayon_publish["ExtractOIIOTranscode"] + extract_oiio_transcode_profiles = extract_oiio_transcode["profiles"] + for profile in extract_oiio_transcode_profiles: + new_outputs = {} + name_counter = {} + for profile_output in profile["outputs"]: + if "name" in profile_output: + name = profile_output.pop("name") + else: + # Backwards compatibility for setting without 'name' in model + name = profile_output["extension"] + if name in new_outputs: + name_counter[name] += 1 + name = "{}_{}".format(name, name_counter[name]) + else: + name_counter[name] = 0 + + new_outputs[name] = profile_output + profile["outputs"] = new_outputs + + # Extract Burnin plugin + extract_burnin = ayon_publish["ExtractBurnin"] + extract_burnin_options = extract_burnin["options"] + for color_key in ("font_color", "bg_color"): + extract_burnin_options[color_key] = _convert_color( + extract_burnin_options[color_key] + ) + + for profile in extract_burnin["profiles"]: + extract_burnin_defs = profile["burnins"] + if "product_names" in profile: + profile["subsets"] = profile.pop("product_names") + profile["families"] = profile.pop("product_types") + + for burnin_def in extract_burnin_defs: + for key in ( + "TOP_LEFT", + "TOP_CENTERED", + "TOP_RIGHT", + "BOTTOM_LEFT", + "BOTTOM_CENTERED", + "BOTTOM_RIGHT", + ): + burnin_def[key] = ( + burnin_def[key] + .replace("{product[name]}", "{subset}") + .replace("{Product[name]}", "{Subset}") + .replace("{PRODUCT[NAME]}", "{SUBSET}") + .replace("{product[type]}", "{family}") + .replace("{Product[type]}", "{Family}") + .replace("{PRODUCT[TYPE]}", "{FAMILY}") + .replace("{folder[name]}", "{asset}") + .replace("{Folder[name]}", "{Asset}") + .replace("{FOLDER[NAME]}", "{ASSET}") + ) + profile["burnins"] = { + extract_burnin_def.pop("name"): extract_burnin_def + for extract_burnin_def in extract_burnin_defs + } + + ayon_integrate_hero = ayon_publish["IntegrateHeroVersion"] + for profile in ayon_integrate_hero["template_name_profiles"]: + if "product_types" not in profile: + break + profile["families"] = profile.pop("product_types") + + if "IntegrateProductGroup" in ayon_publish: + subset_group = ayon_publish.pop("IntegrateProductGroup") + subset_group_profiles = subset_group.pop("product_grouping_profiles") + for profile in subset_group_profiles: + profile["families"] = profile.pop("product_types") + subset_group["subset_grouping_profiles"] = subset_group_profiles + ayon_publish["IntegrateSubsetGroup"] = subset_group + + # Cleanup plugin + ayon_cleanup = ayon_publish["CleanUp"] + if "patterns" in ayon_cleanup: + ayon_cleanup["paterns"] = ayon_cleanup.pop("patterns") + + # Project root settings - json string to dict + ayon_core["project_environments"] = json.loads( + ayon_core["project_environments"] + ) + ayon_core["project_folder_structure"] = json.dumps(json.loads( + ayon_core["project_folder_structure"] + )) + + # Tools settings + ayon_tools = ayon_core["tools"] + ayon_create_tool = ayon_tools["creator"] + if "product_name_profiles" in ayon_create_tool: + product_name_profiles = ayon_create_tool.pop("product_name_profiles") + for profile in product_name_profiles: + profile["families"] = profile.pop("product_types") + ayon_create_tool["subset_name_profiles"] = product_name_profiles + + for profile in ayon_create_tool["subset_name_profiles"]: + template = profile["template"] + profile["template"] = ( + template + .replace("{task[name]}", "{task}") + .replace("{Task[name]}", "{Task}") + .replace("{TASK[NAME]}", "{TASK}") + .replace("{product[type]}", "{family}") + .replace("{Product[type]}", 
"{Family}") + .replace("{PRODUCT[TYPE]}", "{FAMILY}") + .replace("{folder[name]}", "{asset}") + .replace("{Folder[name]}", "{Asset}") + .replace("{FOLDER[NAME]}", "{ASSET}") + ) + + product_smart_select_key = "families_smart_select" + if "product_types_smart_select" in ayon_create_tool: + product_smart_select_key = "product_types_smart_select" + + new_smart_select_families = { + item["name"]: item["task_names"] + for item in ayon_create_tool.pop(product_smart_select_key) + } + ayon_create_tool["families_smart_select"] = new_smart_select_families + + ayon_loader_tool = ayon_tools["loader"] + if "product_type_filter_profiles" in ayon_loader_tool: + product_type_filter_profiles = ( + ayon_loader_tool.pop("product_type_filter_profiles")) + for profile in product_type_filter_profiles: + profile["filter_families"] = profile.pop("filter_product_types") + + ayon_loader_tool["family_filter_profiles"] = ( + product_type_filter_profiles) + + ayon_publish_tool = ayon_tools["publish"] + for profile in ayon_publish_tool["hero_template_name_profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + + for profile in ayon_publish_tool["template_name_profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + + ayon_core["sync_server"] = ( + default_settings["global"]["sync_server"] + ) + output["global"] = ayon_core + + +def convert_project_settings(ayon_settings, default_settings): + # Missing settings + # - standalonepublisher + default_settings = copy.deepcopy(default_settings) + output = {} + exact_match = { + "aftereffects", + "harmony", + "houdini", + "resolve", + "unreal", + } + for key in exact_match: + if key in ayon_settings: + output[key] = ayon_settings[key] + _convert_host_imageio(output[key]) + + _convert_applications_project_settings(ayon_settings, output) + _convert_blender_project_settings(ayon_settings, output) + _convert_celaction_project_settings(ayon_settings, output) + _convert_flame_project_settings(ayon_settings, output) + _convert_fusion_project_settings(ayon_settings, output) + _convert_maya_project_settings(ayon_settings, output) + _convert_3dsmax_project_settings(ayon_settings, output) + _convert_nuke_project_settings(ayon_settings, output) + _convert_hiero_project_settings(ayon_settings, output) + _convert_photoshop_project_settings(ayon_settings, output) + _convert_tvpaint_project_settings(ayon_settings, output) + _convert_traypublisher_project_settings(ayon_settings, output) + _convert_webpublisher_project_settings(ayon_settings, output) + + _convert_deadline_project_settings(ayon_settings, output) + _convert_royalrender_project_settings(ayon_settings, output) + _convert_kitsu_project_settings(ayon_settings, output) + _convert_shotgrid_project_settings(ayon_settings, output) + _convert_slack_project_settings(ayon_settings, output) + + _convert_global_project_settings(ayon_settings, output, default_settings) + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + for key, value in default_settings.items(): + if key not in output: + output[key] = value + + return output + + +class CacheItem: + lifetime = 10 + + def __init__(self, value, outdate_time=None): + self._value = value + if outdate_time is None: + outdate_time = time.time() + self.lifetime + self._outdate_time = outdate_time + + @classmethod + def create_outdated(cls): + return cls({}, 0) + + def get_value(self): + return copy.deepcopy(self._value) + + def update_value(self, value): + self._value = 
value + self._outdate_time = time.time() + self.lifetime + + @property + def is_outdated(self): + return time.time() > self._outdate_time + + +class _AyonSettingsCache: + use_bundles = None + variant = None + addon_versions = CacheItem.create_outdated() + studio_settings = CacheItem.create_outdated() + cache_by_project_name = collections.defaultdict( + CacheItem.create_outdated) + + @classmethod + def _use_bundles(cls): + if _AyonSettingsCache.use_bundles is None: + major, minor, _, _, _ = ayon_api.get_server_version_tuple() + _AyonSettingsCache.use_bundles = major == 0 and minor >= 3 + return _AyonSettingsCache.use_bundles + + @classmethod + def _get_variant(cls): + if _AyonSettingsCache.variant is None: + from openpype.lib.openpype_version import is_staging_enabled + + _AyonSettingsCache.variant = ( + "staging" if is_staging_enabled() else "production" + ) + return _AyonSettingsCache.variant + + @classmethod + def _get_bundle_name(cls): + return os.environ["AYON_BUNDLE_NAME"] + + @classmethod + def get_value_by_project(cls, project_name): + cache_item = _AyonSettingsCache.cache_by_project_name[project_name] + if cache_item.is_outdated: + if cls._use_bundles(): + value = ayon_api.get_addons_settings( + bundle_name=cls._get_bundle_name(), + project_name=project_name + ) + else: + value = ayon_api.get_addons_settings(project_name) + cache_item.update_value(value) + return cache_item.get_value() + + @classmethod + def _get_addon_versions_from_bundle(cls): + expected_bundle = cls._get_bundle_name() + bundles = ayon_api.get_bundles()["bundles"] + bundle = next( + ( + bundle + for bundle in bundles + if bundle["name"] == expected_bundle + ), + None + ) + if bundle is not None: + return bundle["addons"] + return {} + + @classmethod + def get_addon_versions(cls): + cache_item = _AyonSettingsCache.addon_versions + if cache_item.is_outdated: + if cls._use_bundles(): + addons = cls._get_addon_versions_from_bundle() + else: + settings_data = ayon_api.get_addons_settings( + only_values=False, variant=cls._get_variant()) + addons = settings_data["versions"] + cache_item.update_value(addons) + + return cache_item.get_value() + + +def get_ayon_project_settings(default_values, project_name): + ayon_settings = _AyonSettingsCache.get_value_by_project(project_name) + return convert_project_settings(ayon_settings, default_values) + + +def get_ayon_system_settings(default_values): + addon_versions = _AyonSettingsCache.get_addon_versions() + ayon_settings = _AyonSettingsCache.get_value_by_project(None) + + return convert_system_settings( + ayon_settings, default_values, addon_versions + ) diff --git a/openpype/settings/defaults/project_settings/aftereffects.json b/openpype/settings/defaults/project_settings/aftereffects.json index 63f544e536..77ccb74410 100644 --- a/openpype/settings/defaults/project_settings/aftereffects.json +++ b/openpype/settings/defaults/project_settings/aftereffects.json @@ -12,7 +12,7 @@ }, "create": { "RenderCreator": { - "defaults": [ + "default_variants": [ "Main" ], "mark_for_review": true diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index eae5b239c8..f3eb31174f 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -4,6 +4,8 @@ "apply_on_opening": false, "base_file_unit_scale": 0.01 }, + "set_resolution_startup": true, + "set_frames_startup": true, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ 
-15,6 +17,14 @@ "rules": {} } }, + "RenderSettings": { + "default_render_image_folder": "renders/blender", + "aov_separator": "underscore", + "image_format": "exr", + "multilayer_exr": true, + "aov_list": [], + "custom_passes": [] + }, "workfile_builder": { "create_first_version": false, "custom_templates": [] @@ -25,6 +35,22 @@ "optional": true, "active": true }, + "ValidateFileSaved": { + "enabled": true, + "optional": false, + "active": true, + "exclude_families": [] + }, + "ValidateRenderCameraIsSet": { + "enabled": true, + "optional": false, + "active": true + }, + "ValidateDeadlinePublish": { + "enabled": true, + "optional": false, + "active": true + }, "ValidateMeshHasUvs": { "enabled": true, "optional": true, @@ -54,7 +80,8 @@ "camera", "rig", "action", - "layout" + "layout", + "blendScene" ] }, "ExtractFBX": { @@ -82,6 +109,11 @@ "optional": true, "active": true }, + "ExtractCameraABC": { + "enabled": true, + "optional": true, + "active": true + }, "ExtractLayout": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index 1b8c8397d7..2c5e0dc65d 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -52,7 +52,8 @@ "priority": 50, "chunk_size": 10, "concurrent_tasks": 1, - "group": "" + "group": "", + "plugin": "Fusion" }, "NukeSubmitDeadline": { "enabled": true, @@ -99,6 +100,15 @@ "deadline_chunk_size": 10, "deadline_job_delay": "00:00:00:00" }, + "BlenderSubmitDeadline": { + "enabled": true, + "optional": false, + "active": true, + "use_published": true, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, "ProcessSubmittedJobOnFarm": { "enabled": true, "deadline_department": "", @@ -112,6 +122,9 @@ "maya": [ ".*([Bb]eauty).*" ], + "blender": [ + ".*([Bb]eauty).*" + ], "aftereffects": [ ".*" ], diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json index b87c45666d..e2ca334b5f 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -1,9 +1,10 @@ { "events": { "sync_to_avalon": { - "statuses_name_change": [ - "ready", - "not ready" + "role_list": [ + "Pypeclub", + "Administrator", + "Project manager" ] }, "prepare_project": { diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 802b964375..06a595d1c5 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -1,10 +1,13 @@ { + "version_start_category": { + "profiles": [] + }, "imageio": { "activate_global_color_management": false, "ocio_config": { "filepath": [ - "{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfigs/aces_1.2/config.ocio", - "{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfigs/nuke-default/config.ocio" + "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio", + "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio" ] }, "file_rules": { @@ -53,7 +56,8 @@ }, "ValidateEditorialAssetName": { "enabled": true, - "optional": false + "optional": false, + "active": true }, "ValidateVersion": { "enabled": true, @@ -300,74 +304,6 @@ } ] }, - "IntegrateAssetNew": { - "subset_grouping_profiles": [ - { - "families": [], - "hosts": [], - "task_types": [], - "tasks": [], - "template": "" - } - ], - "template_name_profiles": [ - { - 
"families": [], - "hosts": [], - "task_types": [], - "tasks": [], - "template_name": "publish" - }, - { - "families": [ - "review", - "render", - "prerender" - ], - "hosts": [], - "task_types": [], - "tasks": [], - "template_name": "render" - }, - { - "families": [ - "simpleUnrealTexture" - ], - "hosts": [ - "standalonepublisher" - ], - "task_types": [], - "tasks": [], - "template_name": "simpleUnrealTexture" - }, - { - "families": [ - "staticMesh", - "skeletalMesh" - ], - "hosts": [ - "maya" - ], - "task_types": [], - "tasks": [], - "template_name": "maya2unreal" - }, - { - "families": [ - "online" - ], - "hosts": [ - "traypublisher" - ], - "task_types": [], - "tasks": [], - "template_name": "online" - } - ] - }, - "IntegrateAsset": { - "skip_host_families": [] - }, "IntegrateHeroVersion": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/harmony.json b/openpype/settings/defaults/project_settings/harmony.json index 02f51d1d2b..b424b43cc1 100644 --- a/openpype/settings/defaults/project_settings/harmony.json +++ b/openpype/settings/defaults/project_settings/harmony.json @@ -10,22 +10,6 @@ "rules": {} } }, - "load": { - "ImageSequenceLoader": { - "family": [ - "shot", - "render", - "image", - "plate", - "reference" - ], - "representations": [ - "jpeg", - "png", - "jpg" - ] - } - }, "publish": { "CollectPalettes": { "allowed_tasks": [ diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index a53f1ff202..4f57ee52c6 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -1,4 +1,16 @@ { + "general": { + "update_houdini_var_context": { + "enabled": true, + "houdini_vars":[ + { + "var": "JOB", + "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}", + "is_directory": true + } + ] + } + }, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ -14,48 +26,83 @@ "create": { "CreateArnoldAss": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "ext": ".ass" }, + "CreateStaticMesh": { + "enabled": true, + "default_variants": [ + "Main" + ], + "static_mesh_prefix": "S", + "collision_prefixes": [ + "UBX", + "UCP", + "USP", + "UCX" + ] + }, "CreateAlembicCamera": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateCompositeSequence": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreatePointCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRedshiftROP": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRemotePublish": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateVDBCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSD": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDModel": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "USDCreateShadingWorkspace": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDRender": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] } }, "publish": { @@ -71,10 +118,30 @@ "$JOB" ] }, + "ValidateReviewColorspace": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateContainers": { "enabled": true, "optional": true, "active": true + }, + "ValidateSubsetName": { + 
"enabled": true, + "optional": true, + "active": true + }, + "ValidateMeshIsStatic": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateUnrealStaticMeshName": { + "enabled": false, + "optional": true, + "active": true } } } diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index e3fc5f0723..7719a5e255 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -521,19 +521,19 @@ "enabled": true, "make_tx": true, "rs_tex": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRender": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateUnrealStaticMesh": { "enabled": true, - "defaults": [ + "default_variants": [ "", "_Main" ], @@ -547,7 +547,9 @@ }, "CreateUnrealSkeletalMesh": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "joint_hints": "jnt_org" }, "CreateMultiverseLook": { @@ -555,12 +557,11 @@ "publish_mip_map": true }, "CreateAnimation": { - "enabled": false, "write_color_sets": false, "write_face_sets": false, "include_parent_hierarchy": false, "include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -568,7 +569,7 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main", "Proxy", "Sculpt" @@ -579,7 +580,7 @@ "write_color_sets": false, "write_face_sets": false, "include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -587,20 +588,20 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateReview": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "useMayaTimeline": true }, "CreateAss": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "expandProcedurals": false, @@ -615,68 +616,68 @@ "maskOverride": false, "maskDriver": false, "maskFilter": false, - "maskColor_manager": false, - "maskOperator": false + "maskOperator": false, + "maskColor_manager": false }, "CreateVrayProxy": { "enabled": true, "vrmesh": true, "alembic": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsd": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdComp": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdOver": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateAssembly": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateCamera": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateLayout": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMayaScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRenderSetup": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Sim", "Cloth" @@ -684,20 +685,20 @@ }, "CreateSetDress": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Anim" ] }, "CreateVRayScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateYetiRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] } @@ -706,6 +707,9 @@ "CollectMayaRender": { "sync_workfile_version": false }, + "CollectFbxAnimation": { + "enabled": true + 
}, "CollectFbxCamera": { "enabled": false }, @@ -825,6 +829,11 @@ "redshift_render_attributes": [], "renderman_render_attributes": [] }, + "ValidateResolution": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateCurrentRenderLayerIsRenderable": { "enabled": true, "optional": false, @@ -1119,6 +1128,11 @@ "optional": true, "active": true }, + "ValidateAnimatedReferenceRig": { + "enabled": true, + "optional": false, + "active": true + }, "ValidateAnimationContent": { "enabled": true, "optional": false, @@ -1139,6 +1153,16 @@ "optional": false, "active": true }, + "ValidateSkeletonRigContents": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateSkeletonRigControllers": { + "enabled": false, + "optional": true, + "active": true + }, "ValidateSkinclusterDeformerSet": { "enabled": true, "optional": false, @@ -1149,6 +1173,21 @@ "optional": false, "allow_history_only": false }, + "ValidateSkeletonRigOutSetNodeIds": { + "enabled": false, + "optional": false, + "allow_history_only": false + }, + "ValidateSkeletonRigOutputIds": { + "enabled": false, + "optional": true, + "active": true + }, + "ValidateSkeletonTopGroupHierarchy": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateCameraAttributes": { "enabled": false, "optional": true, @@ -1337,6 +1376,12 @@ "active": true, "bake_attributes": [] }, + "ExtractCameraMayaScene": { + "enabled": true, + "optional": true, + "active": true, + "keep_image_planes": false + }, "ExtractGLB": { "enabled": true, "active": true, @@ -1464,6 +1509,10 @@ "namespace": "{asset_name}_{subset}_##_", "group_name": "_GRP", "display_handle": true + }, + "import_loader": { + "namespace": "{asset_name}_{subset}_##_", + "group_name": "_GRP" } }, "workfile_build": { diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 85e3c0d3c3..3b69ef54fd 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -28,11 +28,7 @@ "colorManagement": "Nuke", "OCIO_config": "nuke-default", "workingSpaceLUT": "linear", - "monitorLut": "sRGB", - "int8Lut": "sRGB", - "int16Lut": "sRGB", - "logLut": "Cineon", - "floatLut": "linear" + "monitorLut": "sRGB" }, "nodes": { "requiredNodes": [ @@ -345,7 +341,7 @@ "write" ] }, - "ValidateCorrectAssetName": { + "ValidateCorrectAssetContext": { "enabled": true, "optional": true, "active": true @@ -465,34 +461,60 @@ "viewer_process_override": "", "bake_viewer_process": true, "bake_viewer_input_process": true, - "reformat_node_add": false, - "reformat_node_config": [ - { - "type": "text", - "name": "type", - "value": "to format" - }, - { - "type": "text", - "name": "format", - "value": "HD_1080" - }, - { - "type": "text", - "name": "filter", - "value": "Lanczos6" - }, - { - "type": "bool", - "name": "black_outside", - "value": true - }, - { - "type": "bool", - "name": "pbb", - "value": false - } - ], + "reformat_nodes_config": { + "enabled": false, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "value": "to format" + }, + { + "type": "text", + "name": "format", + "value": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "value": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "value": true + }, + { + "type": "bool", + "name": "pbb", + "value": false + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + } + }, + "ExtractReviewIntermediates": { + 
"enabled": true, + "viewer_lut_raw": false, + "outputs": { + "baking": { + "filter": { + "task_types": [], + "families": [], + "subsets": [] + }, + "read_raw": false, + "viewer_process_override": "", + "bake_viewer_process": true, + "bake_viewer_input_process": true, "reformat_nodes_config": { "enabled": false, "reposition_nodes": [ diff --git a/openpype/settings/defaults/project_settings/royalrender.json b/openpype/settings/defaults/project_settings/royalrender.json index b72fed8474..14e36058aa 100644 --- a/openpype/settings/defaults/project_settings/royalrender.json +++ b/openpype/settings/defaults/project_settings/royalrender.json @@ -1,4 +1,7 @@ { + "rr_paths": [ + "default" + ], "publish": { "CollectSequencesFromJob": { "review": true diff --git a/openpype/settings/defaults/project_settings/substancepainter.json b/openpype/settings/defaults/project_settings/substancepainter.json index 4adeff98ef..2f9344d435 100644 --- a/openpype/settings/defaults/project_settings/substancepainter.json +++ b/openpype/settings/defaults/project_settings/substancepainter.json @@ -2,11 +2,11 @@ "imageio": { "activate_host_color_management": true, "ocio_config": { - "override_global_config": true, + "override_global_config": false, "filepath": [] }, "file_rules": { - "activate_host_rules": true, + "activate_host_rules": false, "rules": {} } }, diff --git a/openpype/settings/defaults/project_settings/traypublisher.json b/openpype/settings/defaults/project_settings/traypublisher.json index 4c2c2f1391..e13de11414 100644 --- a/openpype/settings/defaults/project_settings/traypublisher.json +++ b/openpype/settings/defaults/project_settings/traypublisher.json @@ -256,6 +256,23 @@ "allow_multiple_items": true, "allow_version_control": false, "extensions": [] + }, + { + "family": "audio", + "identifier": "", + "label": "Audio ", + "icon": "fa5s.file-audio", + "default_variants": [ + "Main" + ], + "description": "Audio product", + "detailed_description": "Audio files for review or final delivery", + "allow_sequences": false, + "allow_multiple_items": false, + "allow_version_control": false, + "extensions": [ + ".wav" + ] } ], "editorial_creators": { @@ -329,6 +346,11 @@ } }, "publish": { + "CollectSequenceFrameData": { + "enabled": true, + "optional": true, + "active": false + }, "ValidateFrameRange": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/tvpaint.json b/openpype/settings/defaults/project_settings/tvpaint.json index 1f4f468656..fdbd6d5d0f 100644 --- a/openpype/settings/defaults/project_settings/tvpaint.json +++ b/openpype/settings/defaults/project_settings/tvpaint.json @@ -60,11 +60,6 @@ 255, 255, 255 - ], - "families_to_review": [ - "review", - "renderlayer", - "renderscene" ] }, "ValidateProjectSettings": { diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index f2fc7d933a..6a0ddb398e 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -12,6 +12,26 @@ "LC_ALL": "C" }, "variants": { + "2024": { + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": { + "MAYA_VERSION": "2024" + } + }, "2023": { "use_python_2": false, "executables": { @@ -51,66 +71,65 @@ 
"environment": { "MAYA_VERSION": "2022" } - }, - "2020": { - "use_python_2": true, + } + } + }, + "mayapy": { + "enabled": true, + "label": "MayaPy", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": { + "MAYA_DISABLE_CLIC_IPM": "Yes", + "MAYA_DISABLE_CIP": "Yes", + "MAYA_DISABLE_CER": "Yes", + "PYMEL_SKIP_MEL_INIT": "Yes", + "LC_ALL": "C" + }, + "variants": { + "2024": { + "use_python_2": false, "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\mayapy.exe" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2020/bin/maya" + "/usr/autodesk/maya2024/bin/mayapy" ] }, "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": { - "MAYA_VERSION": "2020" - } - }, - "2019": { - "use_python_2": true, - "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" + "-I" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2019/bin/maya" + "-I" ] }, - "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": { - "MAYA_VERSION": "2019" - } + "environment": {} }, - "2018": { - "use_python_2": true, + "2023": { + "use_python_2": false, "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\mayapy.exe" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2018/bin/maya" + "/usr/autodesk/maya2023/bin/mayapy" ] }, "arguments": { - "windows": [], + "windows": [ + "-I" + ], "darwin": [], - "linux": [] + "linux": [ + "-I" + ] }, - "environment": { - "MAYA_VERSION": "2018" - } + "environment": {} } } }, diff --git a/openpype/settings/defaults/system_settings/general.json b/openpype/settings/defaults/system_settings/general.json index d2994d1a62..496c37cd4d 100644 --- a/openpype/settings/defaults/system_settings/general.json +++ b/openpype/settings/defaults/system_settings/general.json @@ -15,6 +15,11 @@ "darwin": [], "linux": [] }, + "local_openpype_path": { + "windows": "", + "darwin": "", + "linux": "" + }, "production_version": "", "staging_version": "", "version_check_interval": 5 diff --git a/openpype/settings/defaults/system_settings/modules.json b/openpype/settings/defaults/system_settings/modules.json index 1ddbfd2726..f524f01d45 100644 --- a/openpype/settings/defaults/system_settings/modules.json +++ b/openpype/settings/defaults/system_settings/modules.json @@ -185,9 +185,9 @@ "enabled": false, "rr_paths": { "default": { - "windows": "", - "darwin": "", - "linux": "" + "windows": "C:\\RR8", + "darwin": "/Volumes/share/RR8", + "linux": "/mnt/studio/RR8" } } }, diff --git a/openpype/settings/entities/__init__.py b/openpype/settings/entities/__init__.py index 5e3a76094e..00db2b33a7 100644 --- a/openpype/settings/entities/__init__.py +++ b/openpype/settings/entities/__init__.py @@ -107,7 +107,8 @@ from .enum_entity import ( TaskTypeEnumEntity, DeadlineUrlEnumEntity, AnatomyTemplatesEnumEntity, - ShotgridUrlEnumEntity + ShotgridUrlEnumEntity, + RoyalRenderRootEnumEntity ) from .list_entity import ListEntity @@ -170,6 +171,7 @@ __all__ = ( "TaskTypeEnumEntity", "DeadlineUrlEnumEntity", "ShotgridUrlEnumEntity", + "RoyalRenderRootEnumEntity", "AnatomyTemplatesEnumEntity", "ListEntity", diff --git a/openpype/settings/entities/enum_entity.py b/openpype/settings/entities/enum_entity.py index de3bd353eb..26ecd33551 100644 --- a/openpype/settings/entities/enum_entity.py +++ b/openpype/settings/entities/enum_entity.py @@ -1,3 +1,5 @@ +import abc +import six import 
copy from .input_entities import InputEntity from .exceptions import EntitySchemaError @@ -477,8 +479,8 @@ class TaskTypeEnumEntity(BaseEnumEntity): self.set(value_on_not_set) -class DeadlineUrlEnumEntity(BaseEnumEntity): - schema_types = ["deadline_url-enum"] +class DynamicEnumEntity(BaseEnumEntity): + schema_types = [] def _item_initialization(self): self.multiselection = self.schema_data.get("multiselection", True) @@ -496,22 +498,8 @@ class DeadlineUrlEnumEntity(BaseEnumEntity): # GUI attribute self.placeholder = self.schema_data.get("placeholder") - def _get_enum_values(self): - deadline_urls_entity = self.get_entity_from_path( - "system_settings/modules/deadline/deadline_urls" - ) - - valid_keys = set() - enum_items_list = [] - for server_name, url_entity in deadline_urls_entity.items(): - enum_items_list.append( - {server_name: "{}: {}".format(server_name, url_entity.value)} - ) - valid_keys.add(server_name) - return enum_items_list, valid_keys - def set_override_state(self, *args, **kwargs): - super(DeadlineUrlEnumEntity, self).set_override_state(*args, **kwargs) + super(DynamicEnumEntity, self).set_override_state(*args, **kwargs) self.enum_items, self.valid_keys = self._get_enum_values() if self.multiselection: @@ -528,22 +516,50 @@ class DeadlineUrlEnumEntity(BaseEnumEntity): elif self._current_value not in self.valid_keys: self._current_value = tuple(self.valid_keys)[0] + @abc.abstractmethod + def _get_enum_values(self): + pass -class ShotgridUrlEnumEntity(BaseEnumEntity): + +class DeadlineUrlEnumEntity(DynamicEnumEntity): + schema_types = ["deadline_url-enum"] + + def _get_enum_values(self): + deadline_urls_entity = self.get_entity_from_path( + "system_settings/modules/deadline/deadline_urls" + ) + + valid_keys = set() + enum_items_list = [] + for server_name, url_entity in deadline_urls_entity.items(): + enum_items_list.append( + {server_name: "{}: {}".format(server_name, url_entity.value)} + ) + valid_keys.add(server_name) + return enum_items_list, valid_keys + + +class RoyalRenderRootEnumEntity(DynamicEnumEntity): + schema_types = ["rr_root-enum"] + + def _get_enum_values(self): + rr_root_entity = self.get_entity_from_path( + "system_settings/modules/royalrender/rr_paths" + ) + + valid_keys = set() + enum_items_list = [] + for server_name, url_entity in rr_root_entity.items(): + enum_items_list.append( + {server_name: "{}: {}".format(server_name, url_entity.value)} + ) + valid_keys.add(server_name) + return enum_items_list, valid_keys + + +class ShotgridUrlEnumEntity(DynamicEnumEntity): schema_types = ["shotgrid_url-enum"] - def _item_initialization(self): - self.multiselection = False - - self.enum_items = [] - self.valid_keys = set() - - self.valid_value_types = (STRING_TYPE,) - self.value_on_not_set = "" - - # GUI attribute - self.placeholder = self.schema_data.get("placeholder") - def _get_enum_values(self): shotgrid_settings = self.get_entity_from_path( "system_settings/modules/shotgrid/shotgrid_settings" @@ -562,16 +578,6 @@ class ShotgridUrlEnumEntity(BaseEnumEntity): valid_keys.add(server_name) return enum_items_list, valid_keys - def set_override_state(self, *args, **kwargs): - super(ShotgridUrlEnumEntity, self).set_override_state(*args, **kwargs) - - self.enum_items, self.valid_keys = self._get_enum_values() - if not self.valid_keys: - self._current_value = "" - - elif self._current_value not in self.valid_keys: - self._current_value = tuple(self.valid_keys)[0] - class AnatomyTemplatesEnumEntity(BaseEnumEntity): schema_types = ["anatomy-templates-enum"] diff --git 
a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json index 35b8fede86..72f09a641d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json @@ -32,7 +32,7 @@ "children": [ { "type": "list", - "key": "defaults", + "key": "default_variants", "label": "Default Variants", "object_type": "text", "docstring": "Fill default variant(s) (like 'Main' or 'Default') used in subset name creation." diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index c549b577b2..535d9434a3 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -31,6 +31,16 @@ } ] }, + { + "key": "set_resolution_startup", + "type": "boolean", + "label": "Set Resolution on Startup" + }, + { + "key": "set_frames_startup", + "type": "boolean", + "label": "Set Start/End Frames and FPS on Startup" + }, { "key": "imageio", "type": "dict", @@ -44,6 +54,110 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "RenderSettings", + "label": "Render Settings", + "children": [ + { + "type": "text", + "key": "default_render_image_folder", + "label": "Default render image folder" + }, + { + "key": "aov_separator", + "label": "AOV Separator Character", + "type": "enum", + "multiselection": false, + "defaults": "underscore", + "enum_items": [ + {"dash": "- (dash)"}, + {"underscore": "_ (underscore)"}, + {"dot": ". (dot)"} + ] + }, + { + "key": "image_format", + "label": "Output Image Format", + "type": "enum", + "multiselection": false, + "defaults": "exr", + "enum_items": [ + {"exr": "OpenEXR"}, + {"bmp": "BMP"}, + {"rgb": "Iris"}, + {"png": "PNG"}, + {"jpg": "JPEG"}, + {"jp2": "JPEG 2000"}, + {"tga": "Targa"}, + {"tif": "TIFF"} + ] + }, + { + "key": "multilayer_exr", + "type": "boolean", + "label": "Multilayer (EXR)" + }, + { + "type": "label", + "label": "Note: Multilayer EXR is only used when output format type set to EXR." + }, + { + "key": "aov_list", + "label": "AOVs to create", + "type": "enum", + "multiselection": true, + "defaults": "empty", + "enum_items": [ + {"empty": "< empty >"}, + {"combined": "Combined"}, + {"z": "Z"}, + {"mist": "Mist"}, + {"normal": "Normal"}, + {"diffuse_light": "Diffuse Light"}, + {"diffuse_color": "Diffuse Color"}, + {"specular_light": "Specular Light"}, + {"specular_color": "Specular Color"}, + {"volume_light": "Volume Light"}, + {"emission": "Emission"}, + {"environment": "Environment"}, + {"shadow": "Shadow"}, + {"ao": "Ambient Occlusion"}, + {"denoising": "Denoising"}, + {"volume_direct": "Direct Volumetric Scattering"}, + {"volume_indirect": "Indirect Volumetric Scattering"} + ] + }, + { + "type": "label", + "label": "Add custom AOVs. They are added to the view layer and in the Compositing Nodetree,\nbut they need to be added manually to the Shader Nodetree." 
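An aside on the custom-passes note above: in Blender's Python API (2.93+), the view-layer half of this is a couple of lines per pass, while the shader-side "AOV Output" node indeed has to be created by hand. A minimal sketch, assuming a running Blender session; the pass name is hypothetical:

```python
import bpy

def add_custom_aov(name, aov_type="COLOR"):
    """Create a custom AOV on the active view layer if it is missing.

    `aov_type` mirrors the "type" enum in the settings above:
    "COLOR" or "VALUE".
    """
    view_layer = bpy.context.view_layer
    for aov in view_layer.aovs:
        if aov.name == name:
            return aov  # already present, nothing to do
    aov = view_layer.aovs.add()
    aov.name = name
    aov.type = aov_type
    return aov

# Hypothetical pass name; a matching "AOV Output" node still has to
# be added manually in each material's Shader Nodetree.
add_custom_aov("crypto_extra", "COLOR")
```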
+ }, + { + "type": "dict-modifiable", + "store_as_list": true, + "key": "custom_passes", + "label": "Custom Passes", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "key": "type", + "label": "Type", + "type": "enum", + "multiselection": false, + "default": "COLOR", + "enum_items": [ + {"COLOR": "Color"}, + {"VALUE": "Value"} + ] + } + ] + } + } + ] + }, { "type": "schema_template", "name": "template_workfile_options", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json index 6d59b5a92b..64db852c89 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json @@ -289,6 +289,15 @@ "type": "text", "key": "group", "label": "Group Name" + }, + { + "type": "enum", + "key": "plugin", + "label": "Deadline Plugin", + "enum_items": [ + {"Fusion": "Fusion"}, + {"FusionCmd": "FusionCmd"} + ] } ] }, @@ -531,6 +540,50 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "BlenderSubmitDeadline", + "label": "Blender Submit to Deadline", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "boolean", + "key": "use_published", + "label": "Use Published scene" + }, + { + "type": "number", + "key": "priority", + "label": "Priority" + }, + { + "type": "number", + "key": "chunk_size", + "label": "Frame per Task" + }, + { + "type": "text", + "key": "group", + "label": "Group Name" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json index 157a8d297e..d6efb118b9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json @@ -21,12 +21,9 @@ }, { "type": "list", - "key": "statuses_name_change", - "label": "Statuses", - "object_type": { - "type": "text", - "multiline": false - } + "key": "role_list", + "label": "Roles", + "object_type": "text" } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json index 953361935c..4094632c72 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json @@ -5,6 +5,61 @@ "label": "Global", "is_file": true, "children": [ + { + "type": "dict", + "key": "version_start_category", + "label": "Version Start", + "collapsible": true, + "collapsible_key": true, + "children": [ + { + "type": "list", + "collapsible": true, + "key": "profiles", + "label": "Profiles", + "object_type": { + "type": "dict", + "children": [ + { + "key": "host_names", + "label": "Host names", + "type": "hosts-enum", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": 
"text" + }, + { + "key": "subsets", + "label": "Subset names", + "type": "list", + "object_type": "text" + }, + { + "key": "version_start", + "label": "Version Start", + "type": "number", + "minimum": 0 + } + ] + } + } + ] + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json index 98a815f2d4..f081c48b23 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json @@ -18,34 +18,6 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "load", - "label": "Loader plugins", - "children": [ - { - "type": "dict", - "collapsible": true, - "key": "ImageSequenceLoader", - "label": "Load Image Sequence", - "children": [ - { - "type": "list", - "key": "family", - "label": "Families", - "object_type": "text" - }, - { - "type": "list", - "key": "representations", - "label": "Representations", - "object_type": "text" - } - ] - } - ] - }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json index 7f782e3647..d4d0565ec9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json @@ -5,6 +5,10 @@ "label": "Houdini", "is_file": true, "children": [ + { + "type": "schema", + "name": "schema_houdini_general" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json index 26c64e6219..6b516ddf4a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json @@ -284,6 +284,10 @@ "type": "schema_template", "name": "template_workfile_options" }, + { + "type": "label", + "label": "^ Settings and for Workfile Builder is deprecated and will be soon removed.
Please use Template Workfile Build Settings instead." + }, { "type": "schema", "name": "schema_templated_workfile_build" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json index cabb4747d5..f4bf2f51ba 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json @@ -5,6 +5,12 @@ "collapsible": true, "is_file": true, "children": [ + { + "type": "rr_root-enum", + "key": "rr_paths", + "label": "Royal Render Roots", + "multiselect": true + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json b/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json index 4faeca89f3..a5f1c57121 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json @@ -13,7 +13,8 @@ { "type": "shotgrid_url-enum", "key": "shotgrid_server", - "label": "Shotgrid Server" + "label": "Shotgrid Server", + "multiselection": false }, { "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json index e75e2887db..93e6325b5a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json @@ -349,6 +349,10 @@ "type": "schema_template", "name": "template_validate_plugin", "template_data": [ + { + "key": "CollectSequenceFrameData", + "label": "Collect Original Sequence Frame Data" + }, { "key": "ValidateFrameRange", "label": "Validate frame range" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json index 45fc13bdde..e9255f426e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json @@ -273,18 +273,6 @@ "key": "review_bg", "label": "Review BG color", "use_alpha": false - }, - { - "type": "enum", - "key": "families_to_review", - "label": "Families to review", - "multiselection": true, - "enum_items": [ - {"review": "review"}, - {"renderpass": "renderPass"}, - {"renderlayer": "renderLayer"}, - {"renderscene": "renderScene"} - ] } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 1037519f57..7f1a8a915b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -18,6 +18,39 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateFileSaved", + "label": "Validate File Saved", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "splitter" + }, + { + "key": 
"exclude_families", + "label": "Exclude Families", + "type": "list", + "object_type": "text" + } + ] + }, { "type": "collapsible-wrap", "label": "Model", @@ -46,6 +79,66 @@ } ] }, + { + "type": "collapsible-wrap", + "label": "Render", + "children": [ + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "type": "dict", + "collapsible": true, + "key": "ValidateRenderCameraIsSet", + "label": "Validate Render Camera Is Set", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateDeadlinePublish", + "label": "Validate Render Output for Deadline", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + } + ] + } + ] + } + ] + }, { "type": "splitter" }, @@ -105,7 +198,11 @@ }, { "key": "ExtractCamera", - "label": "Extract FBX Camera as FBX" + "label": "Extract Camera as FBX" + }, + { + "key": "ExtractCameraABC", + "label": "Extract Camera as ABC" }, { "key": "ExtractLayout", @@ -174,4 +271,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index 3164cfb62d..c7e91fd22d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -118,6 +118,11 @@ "type": "boolean", "key": "optional", "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" } ] }, @@ -888,142 +893,6 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "IntegrateAssetNew", - "label": "IntegrateAsset (Legacy)", - "is_group": true, - "children": [ - { - "type": "label", - "label": "NOTE: Subset grouping profiles settings were moved to Integrate Subset Group. Please move values there." - }, - { - "type": "list", - "key": "subset_grouping_profiles", - "label": "Subset grouping profiles (DEPRECATED)", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - }, - { - "type": "hosts-enum", - "key": "hosts", - "label": "Hosts", - "multiselection": true - }, - { - "key": "task_types", - "label": "Task types", - "type": "task-types-enum" - }, - { - "key": "tasks", - "label": "Task names", - "type": "list", - "object_type": "text" - }, - { - "type": "separator" - }, - { - "type": "text", - "key": "template", - "label": "Template" - } - ] - } - }, - { - "type": "label", - "label": "NOTE: Publish template profiles settings were moved to Tools/Publish/Template name profiles. Please move values there." 
- }, - { - "type": "list", - "key": "template_name_profiles", - "label": "Template name profiles (DEPRECATED)", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "type": "label", - "label": "" - }, - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - }, - { - "type": "hosts-enum", - "key": "hosts", - "label": "Hosts", - "multiselection": true - }, - { - "key": "task_types", - "label": "Task types", - "type": "task-types-enum" - }, - { - "key": "tasks", - "label": "Task names", - "type": "list", - "object_type": "text" - }, - { - "type": "separator" - }, - { - "type": "text", - "key": "template_name", - "label": "Template name" - } - ] - } - } - ] - }, - { - "type": "dict", - "collapsible": true, - "key": "IntegrateAsset", - "label": "Integrate Asset", - "is_group": true, - "children": [ - { - "type": "list", - "key": "skip_host_families", - "label": "Skip hosts and families", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "type": "hosts-enum", - "key": "host", - "label": "Host" - }, - { - "type": "list", - "key": "families", - "label": "Families", - "object_type": "text" - } - ] - } - } - ] - }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json index 85ec482e73..23fc7c9351 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json @@ -320,10 +320,6 @@ "key": "publish", "label": "Publish", "children": [ - { - "type": "label", - "label": "NOTE: For backwards compatibility can be value empty and in that case are used values from IntegrateAssetNew. This will change in future so please move all values here as soon as possible." 
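The `version_start` profiles added to the global schema above, and the `template_name_profiles` these deprecated blocks point to, both follow OpenPype's recurring "profiles" pattern: a list of filter dicts resolved to the first entry whose non-empty filters all match the current context. A simplified stand-in for that resolution logic (not the actual helper in `openpype.lib`):

```python
def filter_profiles_simplified(profiles, key_values):
    """Return the first profile whose non-empty filters all match.

    An empty filter list acts as a wildcard and matches any value --
    a simplified sketch of OpenPype's profile filtering.
    """
    for profile in profiles:
        for key, value in key_values.items():
            filter_values = profile.get(key) or []
            if filter_values and value not in filter_values:
                break
        else:
            return profile
    return None


profiles = [
    {"host_names": ["maya"], "task_types": [], "version_start": 0},
    {"host_names": [], "task_types": [], "version_start": 1},
]
matched = filter_profiles_simplified(
    profiles, {"host_names": "maya", "task_types": "Modeling"}
)
print(matched["version_start"])  # -> 0, the Maya-specific profile wins
```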
- }, { "type": "list", "key": "template_name_profiles", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json index 83e0cf789a..cd8c260124 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json @@ -18,8 +18,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -39,51 +39,82 @@ ] }, - { - "type": "schema_template", - "name": "template_create_plugin", - "template_data": [ - { - "key": "CreateAlembicCamera", - "label": "Create Alembic Camera" - }, - { - "key": "CreateCompositeSequence", - "label": "Create Composite (Image Sequence)" - }, - { - "key": "CreatePointCache", - "label": "Create Point Cache" - }, - { - "key": "CreateRedshiftROP", - "label": "Create Redshift ROP" - }, - { - "key": "CreateRemotePublish", - "label": "Create Remote Publish" - }, - { - "key": "CreateVDBCache", - "label": "Create VDB Cache" - }, - { - "key": "CreateUSD", - "label": "Create USD" - }, - { - "key": "CreateUSDModel", - "label": "Create USD Model" - }, - { - "key": "USDCreateShadingWorkspace", - "label": "Create USD Shading Workspace" - }, - { - "key": "CreateUSDRender", - "label": "Create USD Render" - } - ] - } + { + "type": "dict", + "collapsible": true, + "key": "CreateStaticMesh", + "label": "Create Static Mesh", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "list", + "key": "default_variants", + "label": "Default Variants", + "object_type": "text" + }, + { + "type": "text", + "key": "static_mesh_prefix", + "label": "Static Mesh Prefix" + }, + { + "type": "list", + "key": "collision_prefixes", + "label": "Collision Mesh Prefixes", + "object_type": "text" + } + ] + }, + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateAlembicCamera", + "label": "Create Alembic Camera" + }, + { + "key": "CreateCompositeSequence", + "label": "Create Composite (Image Sequence)" + }, + { + "key": "CreatePointCache", + "label": "Create Point Cache" + }, + { + "key": "CreateRedshiftROP", + "label": "Create Redshift ROP" + }, + { + "key": "CreateRemotePublish", + "label": "Create Remote Publish" + }, + { + "key": "CreateVDBCache", + "label": "Create VDB Cache" + }, + { + "key": "CreateUSD", + "label": "Create USD" + }, + { + "key": "CreateUSDModel", + "label": "Create USD Model" + }, + { + "key": "USDCreateShadingWorkspace", + "label": "Create USD Shading Workspace" + }, + { + "key": "CreateUSDRender", + "label": "Create USD Render" + } + ] + } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json new file mode 100644 index 0000000000..de1a0396ec --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json @@ -0,0 +1,53 @@ +{ + "type": "dict", + "key": "general", + "label": "General", + "collapsible": true, + "is_group": true, + "children": [ + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "update_houdini_var_context", + "label": "Update Houdini Vars on context 
change", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "Sync vars with context changes.
If a value is treated as a directory on update it will be ensured the folder exists" + }, + { + "type": "list", + "key": "houdini_vars", + "label": "Houdini Vars", + "collapsible": false, + "object_type": { + "type": "dict", + "children": [ + { + "type": "text", + "key": "var", + "label": "Var" + }, + { + "type": "text", + "key": "value", + "label": "Value" + }, + { + "type": "boolean", + "key": "is_directory", + "label": "Treat as directory" + } + ] + } + } + ] + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json index aa6eaf5164..d5f70b0312 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json @@ -40,11 +40,27 @@ "type": "schema_template", "name": "template_publish_plugin", "template_data": [ + { + "key": "ValidateReviewColorspace", + "label": "Validate Review Colorspace" + }, { "key": "ValidateContainers", "label": "ValidateContainers" + }, + { + "key": "ValidateSubsetName", + "label": "Validate Subset Name" + }, + { + "key": "ValidateMeshIsStatic", + "label": "Validate Mesh is Static" + }, + { + "key": "ValidateUnrealStaticMeshName", + "label": "Validate Unreal Static Mesh Name" } ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index a8b76a0331..b56e381c1d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -28,15 +28,21 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] }, - { - "type": "schema", - "name": "schema_maya_create_render" + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateRender", + "label": "Create Render" + } + ] }, { "type": "dict", @@ -52,8 +58,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -84,8 +90,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -120,12 +126,10 @@ "collapsible": true, "key": "CreateAnimation", "label": "Create Animation", - "checkbox_key": "enabled", "children": [ { - "type": "boolean", - "key": "enabled", - "label": "Enabled" + "type": "label", + "label": "This plugin is not optional due to implicit creation through loading the \"rig\" family.\nThis family is also hidden from creation due to complexity in setup." 
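Returning to the new `schema_houdini_general.json` above: a rough sketch of how a context-change hook might consume the `houdini_vars` entries. The settings access path and function name are assumptions for illustration; only the `var`/`value`/`is_directory` shape and the create-folder-on-update behavior come from the schema:

```python
import os

import hou  # only available inside a Houdini session


def apply_houdini_vars(project_settings):
    """Apply vars shaped like the update_houdini_var_context schema."""
    config = (
        project_settings["houdini"]["general"]["update_houdini_var_context"]
    )
    if not config["enabled"]:
        return
    for item in config["houdini_vars"]:
        value = os.path.expandvars(item["value"])
        # Per the schema label: values treated as directories are
        # created on update when the folder does not exist yet.
        if item["is_directory"] and not os.path.isdir(value):
            os.makedirs(value)
        hou.putenv(item["var"], value)
```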
}, { "type": "boolean", @@ -149,8 +153,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -179,8 +183,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -214,8 +218,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -244,8 +248,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -264,8 +268,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -289,8 +293,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -318,52 +322,52 @@ { "type": "boolean", "key": "maskOptions", - "label": "Mask Options" + "label": "Export Options" }, { "type": "boolean", "key": "maskCamera", - "label": "Mask Camera" + "label": "Export Cameras" }, { "type": "boolean", "key": "maskLight", - "label": "Mask Light" + "label": "Export Lights" }, { "type": "boolean", "key": "maskShape", - "label": "Mask Shape" + "label": "Export Shapes" }, { "type": "boolean", "key": "maskShader", - "label": "Mask Shader" + "label": "Export Shaders" }, { "type": "boolean", "key": "maskOverride", - "label": "Mask Override" + "label": "Export Override Nodes" }, { "type": "boolean", "key": "maskDriver", - "label": "Mask Driver" + "label": "Export Drivers" }, { "type": "boolean", "key": "maskFilter", - "label": "Mask Filter" - }, - { - "type": "boolean", - "key": "maskColor_manager", - "label": "Mask Color Manager" + "label": "Export Filters" }, { "type": "boolean", "key": "maskOperator", - "label": "Mask Operator" + "label": "Export Operators" + }, + { + "type": "boolean", + "key": "maskColor_manager", + "label": "Export Color Managers" } ] }, @@ -391,8 +395,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json deleted file mode 100644 index 68ad7ad63d..0000000000 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "type": "dict", - "collapsible": true, - "key": "CreateRender", - "label": "Create Render", - "checkbox_key": "enabled", - "children": [ - { - "type": "boolean", - "key": "enabled", - "label": "Enabled" - }, - { - "type": "list", - "key": "defaults", - "label": "Default Subsets", - "object_type": "text" - } - ] -} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json index 4b6b97ab4e..e73d39c06d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json @@ -121,6 +121,28 @@ "label": "Display Handle On Load 
References" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "import_loader", + "label": "Import Loader", + "children": [ + { + "type": "text", + "label": "Namespace", + "key": "namespace" + }, + { + "type": "text", + "label": "Group name", + "key": "group_name" + }, + { + "type": "label", + "label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 07c8d8715b..d2e7c51e24 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -21,6 +21,20 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "CollectFbxAnimation", + "label": "Collect Fbx Animation", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + } + ] + }, { "type": "dict", "collapsible": true, @@ -103,7 +117,7 @@ }, { "key": "exclude_families", - "label": "Families", + "label": "Exclude Families", "type": "list", "object_type": "text" } @@ -417,6 +431,10 @@ "type": "schema_template", "name": "template_publish_plugin", "template_data": [ + { + "key": "ValidateResolution", + "label": "Validate Resolution Settings" + }, { "key": "ValidateCurrentRenderLayerIsRenderable", "label": "Validate Current Render Layer Has Renderable Camera" @@ -793,6 +811,10 @@ "key": "ValidateRigControllers", "label": "Validate Rig Controllers" }, + { + "key": "ValidateAnimatedReferenceRig", + "label": "Validate Animated Reference Rig" + }, { "key": "ValidateAnimationContent", "label": "Validate Animation Content" @@ -809,9 +831,51 @@ "key": "ValidateSkeletalMeshHierarchy", "label": "Validate Skeletal Mesh Top Node" }, - { + { + "key": "ValidateSkeletonRigContents", + "label": "Validate Skeleton Rig Contents" + }, + { + "key": "ValidateSkeletonRigControllers", + "label": "Validate Skeleton Rig Controllers" + }, + { "key": "ValidateSkinclusterDeformerSet", "label": "Validate Skincluster Deformer Relationships" + }, + { + "key": "ValidateSkeletonRigOutputIds", + "label": "Validate Skeleton Rig Output Ids" + }, + { + "key": "ValidateSkeletonTopGroupHierarchy", + "label": "Validate Skeleton Top Group Hierarchy" + } + ] + }, + + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ValidateRigOutSetNodeIds", + "label": "Validate Rig Out Set Node Ids", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "allow_history_only", + "label": "Allow history only" } ] }, @@ -819,8 +883,8 @@ "type": "dict", "collapsible": true, "checkbox_key": "enabled", - "key": "ValidateRigOutSetNodeIds", - "label": "Validate Rig Out Set Node Ids", + "key": "ValidateSkeletonRigOutSetNodeIds", + "label": "Validate Skeleton Rig Out Set Node Ids", "is_group": true, "children": [ { @@ -978,6 +1042,35 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractCameraMayaScene", + "label": "Extract camera to Maya scene", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": 
"optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "boolean", + "key": "keep_image_planes", + "label": "Export Image planes" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json index 636dfa114c..fc4e750e3b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json @@ -12,7 +12,7 @@ { "type": "text", "key": "default_render_image_folder", - "label": "Default render image folder" + "label": "Default render image folder. This setting can be\noverwritten by custom staging directory profile;\n\"project_settings/global/tools/publish\n/custom_staging_dir_profiles\"." }, { "type": "boolean", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json index d4cd332ef8..af826fcf46 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json @@ -106,26 +106,6 @@ "type": "text", "key": "monitorLut", "label": "monitor" - }, - { - "type": "text", - "key": "int8Lut", - "label": "8-bit files" - }, - { - "type": "text", - "key": "int16Lut", - "label": "16-bit files" - }, - { - "type": "text", - "key": "logLut", - "label": "log files" - }, - { - "type": "text", - "key": "floatLut", - "label": "float files" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 3019c9b1b5..9e012e560f 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -61,7 +61,7 @@ "name": "template_publish_plugin", "template_data": [ { - "key": "ValidateCorrectAssetName", + "key": "ValidateCorrectAssetContext", "label": "Validate Correct Asset Name" } ] @@ -309,24 +309,149 @@ "type": "separator" }, { - "type": "label", - "label": "Currently we are supporting also multiple reposition nodes.
Older single reformat node is still supported
and if it is activated then preference will
be on it. If you want to use multiple reformat
nodes then you need to disable single reformat
node and enable multiple Reformat nodes here."
+                        "key": "reformat_nodes_config",
+                        "type": "dict",
+                        "label": "Reformat Nodes",
+                        "collapsible": true,
+                        "checkbox_key": "enabled",
+                        "children": [
+                            {
+                                "type": "boolean",
+                                "key": "enabled",
+                                "label": "Enabled"
+                            },
+                            {
+                                "type": "label",
+                                "label": "Only reposition knobs are supported.<br>
You can add multiple reformat nodes
and set their knobs. The order of reformat
nodes is important: the first reformat node
will be applied first and the last reformat
node will be applied last." + }, + { + "key": "reposition_nodes", + "type": "list", + "label": "Reposition nodes", + "object_type": { + "type": "dict", + "children": [ + { + "key": "node_class", + "label": "Node class", + "type": "text" + }, + { + "type": "schema_template", + "name": "template_nuke_knob_inputs", + "template_data": [ + { + "label": "Node knobs", + "key": "knobs" + } + ] + } + ] + } + } + ] + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "extension", + "label": "Write node file type" + }, + { + "key": "add_custom_tags", + "label": "Add custom tags", + "type": "list", + "object_type": "text" + } + ] + } + } + + ] + }, + { + "type": "label", + "label": "^ Settings and for ExtractReviewDataMov is deprecated and will be soon removed.
Please use ExtractReviewIntermediates instead." + }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ExtractReviewIntermediates", + "label": "ExtractReviewIntermediates", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "viewer_lut_raw", + "label": "Viewer LUT raw" + }, + { + "key": "outputs", + "label": "Output Definitions", + "type": "dict-modifiable", + "highlight_content": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "dict", + "collapsible": false, + "key": "filter", + "label": "Filtering", + "children": [ + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subsets", + "type": "list", + "object_type": "text" + } + ] + }, + { + "type": "separator" }, { "type": "boolean", - "key": "reformat_node_add", - "label": "Add Reformat Node", + "key": "read_raw", + "label": "Read colorspace RAW", "default": false }, { - "type": "schema_template", - "name": "template_nuke_knob_inputs", - "template_data": [ - { - "label": "Reformat Node Knobs", - "key": "reformat_node_config" - } - ] + "type": "text", + "key": "viewer_process_override", + "label": "Viewer Process colorspace profile override" + }, + { + "type": "boolean", + "key": "bake_viewer_process", + "label": "Bake Viewer Process" + }, + { + "type": "boolean", + "key": "bake_viewer_input_process", + "label": "Bake Viewer Input Process (LUTs)" + }, + { + "type": "separator" }, { "key": "reformat_nodes_config", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json index 14d15e7840..3d2ed9f3d4 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json @@ -13,8 +13,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json index c9dee8681a..51c78ce8f0 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json @@ -213,7 +213,7 @@ }, { "type": "number", - "key": "y", + "key": "z", "default": 1, "decimal": 4, "maximum": 99999999 @@ -238,29 +238,75 @@ "object_types": [ { "type": "number", - "key": "x", + "key": "r", "default": 1, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "x", + "key": "g", "default": 1, "decimal": 4, "maximum": 99999999 }, + { + "type": "number", + "key": "b", + "default": 1, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "a", + "default": 1, + "decimal": 4, + "maximum": 99999999 + } + ] + } + ] + }, + { + "key": "box", + "label": "Box", + "children": [ + { + "type": "text", + "key": "name", + "label": "Name" + }, + { + "type": "list-strict", + "key": "value", + "label": "Value", + "object_types": [ + { + "type": "number", + "key": "x", + "default": 0, + "decimal": 4, + 
"maximum": 99999999 + }, { "type": "number", "key": "y", - "default": 1, + "default": 0, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "y", - "default": 1, + "key": "r", + "default": 1920, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "t", + "default": 1080, "decimal": 4, "maximum": 99999999 } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json index 8be48e669d..3a34858f4e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json @@ -13,6 +13,12 @@ }, { "use_range_limit": "Use range limit" + }, + { + "ordered": "Defined order" + }, + { + "channels": "Channels override" } ] } diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json new file mode 100644 index 0000000000..bbdc7e13b0 --- /dev/null +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json @@ -0,0 +1,39 @@ +{ + "type": "dict", + "key": "mayapy", + "label": "Autodesk MayaPy", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "schema_template", + "name": "template_host_unchangables" + }, + { + "key": "environment", + "label": "Environment", + "type": "raw-json" + }, + { + "type": "dict-modifiable", + "key": "variants", + "collapsible_key": true, + "use_label_wrap": false, + "object_type": { + "type": "dict", + "collapsible": true, + "children": [ + { + "type": "schema_template", + "name": "template_host_variant_items" + } + ] + } + } + ] +} diff --git a/openpype/settings/entities/schemas/system_schema/schema_applications.json b/openpype/settings/entities/schemas/system_schema/schema_applications.json index abea37a9ab..7965c344ae 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_applications.json +++ b/openpype/settings/entities/schemas/system_schema/schema_applications.json @@ -9,6 +9,10 @@ "type": "schema", "name": "schema_maya" }, + { + "type": "schema", + "name": "schema_mayapy" + }, { "type": "schema", "name": "schema_3dsmax" diff --git a/openpype/settings/entities/schemas/system_schema/schema_general.json b/openpype/settings/entities/schemas/system_schema/schema_general.json index d6c22fe54c..2609441061 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_general.json +++ b/openpype/settings/entities/schemas/system_schema/schema_general.json @@ -128,8 +128,12 @@ { "type": "collapsible-wrap", "label": "OpenPype deployment control", - "collapsible": false, + "collapsible": true, "children": [ + { + "type": "label", + "label": "Define location accessible by artist machine to check for zip updates with Openpype code." + }, { "type": "path", "key": "openpype_path", @@ -138,6 +142,18 @@ "multipath": true, "require_restart": true }, + { + "type": "label", + "label": "Define custom location for artist machine where to unzip versions of Openpype code. By default it is in user app data folder." 
+ }, + { + "type": "path", + "key": "local_openpype_path", + "label": "Custom Local Versions Folder", + "multiplatform": true, + "multipath": false, + "require_restart": true + }, { "type": "splitter" }, diff --git a/openpype/settings/handlers.py b/openpype/settings/handlers.py index a1f3331ccc..671cabfbc2 100644 --- a/openpype/settings/handlers.py +++ b/openpype/settings/handlers.py @@ -7,10 +7,14 @@ from abc import ABCMeta, abstractmethod import six import openpype.version -from openpype.client.mongo import OpenPypeMongoConnection -from openpype.client.entities import get_project_connection, get_project +from openpype.client.mongo import ( + OpenPypeMongoConnection, + get_project_connection, +) +from openpype.client.entities import get_project from openpype.lib.pype_info import get_workstation_info + from .constants import ( GLOBAL_SETTINGS_KEY, SYSTEM_SETTINGS_KEY, @@ -185,6 +189,7 @@ class SettingsStateInfo: class SettingsHandler(object): global_keys = { "openpype_path", + "local_openpype_path", "admin_password", "log_to_server", "disk_mapping", @@ -1798,10 +1803,7 @@ class MongoLocalSettingsHandler(LocalSettingsHandler): def __init__(self, local_site_id=None): # Get mongo connection - from openpype.lib import ( - OpenPypeMongoConnection, - get_local_site_id - ) + from openpype.lib import get_local_site_id if local_site_id is None: local_site_id = get_local_site_id() diff --git a/openpype/settings/lib.py b/openpype/settings/lib.py index 73554df236..ce62dde43f 100644 --- a/openpype/settings/lib.py +++ b/openpype/settings/lib.py @@ -4,6 +4,9 @@ import functools import logging import platform import copy + +from openpype import AYON_SERVER_ENABLED + from .exceptions import ( SaveWarningExc ) @@ -18,6 +21,11 @@ from .constants import ( DEFAULT_PROJECT_KEY ) +from .ayon_settings import ( + get_ayon_project_settings, + get_ayon_system_settings +) + log = logging.getLogger(__name__) # Py2 + Py3 json decode exception @@ -40,36 +48,17 @@ _SETTINGS_HANDLER = None _LOCAL_SETTINGS_HANDLER = None -def require_handler(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - global _SETTINGS_HANDLER - if _SETTINGS_HANDLER is None: - _SETTINGS_HANDLER = create_settings_handler() - return func(*args, **kwargs) - return wrapper - - -def require_local_handler(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - global _LOCAL_SETTINGS_HANDLER - if _LOCAL_SETTINGS_HANDLER is None: - _LOCAL_SETTINGS_HANDLER = create_local_settings_handler() - return func(*args, **kwargs) - return wrapper - - -def create_settings_handler(): - from .handlers import MongoSettingsHandler - # Handler can't be created in global space on initialization but only when - # needed. Plus here may be logic: Which handler is used (in future). 
- return MongoSettingsHandler() - - -def create_local_settings_handler(): - from .handlers import MongoLocalSettingsHandler - return MongoLocalSettingsHandler() +def clear_metadata_from_settings(values): + """Remove all metadata keys from loaded settings.""" + if isinstance(values, dict): + for key in tuple(values.keys()): + if key in METADATA_KEYS: + values.pop(key) + else: + clear_metadata_from_settings(values[key]) + elif isinstance(values, list): + for item in values: + clear_metadata_from_settings(item) def calculate_changes(old_value, new_value): @@ -91,6 +80,42 @@ def calculate_changes(old_value, new_value): return changes +def create_settings_handler(): + if AYON_SERVER_ENABLED: + raise RuntimeError("Mongo settings handler was triggered in AYON mode") + from .handlers import MongoSettingsHandler + # Handler can't be created in global space on initialization but only when + # needed. Plus here may be logic: Which handler is used (in future). + return MongoSettingsHandler() + + +def create_local_settings_handler(): + if AYON_SERVER_ENABLED: + raise RuntimeError("Mongo settings handler was triggered in AYON mode") + from .handlers import MongoLocalSettingsHandler + return MongoLocalSettingsHandler() + + +def require_handler(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + global _SETTINGS_HANDLER + if _SETTINGS_HANDLER is None: + _SETTINGS_HANDLER = create_settings_handler() + return func(*args, **kwargs) + return wrapper + + +def require_local_handler(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + global _LOCAL_SETTINGS_HANDLER + if _LOCAL_SETTINGS_HANDLER is None: + _LOCAL_SETTINGS_HANDLER = create_local_settings_handler() + return func(*args, **kwargs) + return wrapper + + @require_handler def get_system_last_saved_info(): return _SETTINGS_HANDLER.get_system_last_saved_info() @@ -494,10 +519,17 @@ def save_local_settings(data): @require_local_handler -def get_local_settings(): +def _get_local_settings(): return _LOCAL_SETTINGS_HANDLER.get_local_settings() +def get_local_settings(): + if not AYON_SERVER_ENABLED: + return _get_local_settings() + # TODO implement ayon implementation + return {} + + def load_openpype_default_settings(): """Load openpype default settings.""" return load_jsons_from_dir(DEFAULTS_DIR) @@ -890,7 +922,7 @@ def apply_local_settings_on_project_settings( sync_server_config["remote_site"] = remote_site -def get_system_settings(clear_metadata=True, exclude_locals=None): +def _get_system_settings(clear_metadata=True, exclude_locals=None): """System settings with applied studio overrides.""" default_values = get_default_settings()[SYSTEM_SETTINGS_KEY] studio_values = get_studio_system_settings_overrides() @@ -992,7 +1024,7 @@ def get_anatomy_settings( return result -def get_project_settings( +def _get_project_settings( project_name, clear_metadata=True, exclude_locals=None ): """Project settings with applied studio and project overrides.""" @@ -1043,7 +1075,7 @@ def get_current_project_settings(): @require_handler -def get_global_settings(): +def _get_global_settings(): default_settings = load_openpype_default_settings() default_values = default_settings["system_settings"]["general"] studio_values = _SETTINGS_HANDLER.get_global_settings() @@ -1053,7 +1085,14 @@ def get_global_settings(): } -def get_general_environments(): +def get_global_settings(): + if not AYON_SERVER_ENABLED: + return _get_global_settings() + default_settings = load_openpype_default_settings() + return default_settings["system_settings"]["general"] + + +def 
_get_general_environments(): """Get general environments. Function is implemented to be able load general environments without using @@ -1082,14 +1121,24 @@ def get_general_environments(): return environments -def clear_metadata_from_settings(values): - """Remove all metadata keys from loaded settings.""" - if isinstance(values, dict): - for key in tuple(values.keys()): - if key in METADATA_KEYS: - values.pop(key) - else: - clear_metadata_from_settings(values[key]) - elif isinstance(values, list): - for item in values: - clear_metadata_from_settings(item) +def get_general_environments(): + if not AYON_SERVER_ENABLED: + return _get_general_environments() + value = get_system_settings() + return value["general"]["environment"] + + +def get_system_settings(*args, **kwargs): + if not AYON_SERVER_ENABLED: + return _get_system_settings(*args, **kwargs) + + default_settings = get_default_settings()[SYSTEM_SETTINGS_KEY] + return get_ayon_system_settings(default_settings) + + +def get_project_settings(project_name, *args, **kwargs): + if not AYON_SERVER_ENABLED: + return _get_project_settings(project_name, *args, **kwargs) + + default_settings = get_default_settings()[PROJECT_SETTINGS_KEY] + return get_ayon_project_settings(default_settings, project_name) diff --git a/openpype/style/style.css b/openpype/style/style.css index 5ce55aa658..ca368f84f8 100644 --- a/openpype/style/style.css +++ b/openpype/style/style.css @@ -1427,6 +1427,10 @@ CreateNextPageOverlay { background: rgba(0, 0, 0, 127); } +#OverlayFrameLabel { + font-size: 15pt; +} + #BreadcrumbsPathInput { padding: 2px; font-size: 9pt; diff --git a/openpype/tests/lib.py b/openpype/tests/lib.py index 1fa5fb8054..c7d4423aba 100644 --- a/openpype/tests/lib.py +++ b/openpype/tests/lib.py @@ -5,7 +5,6 @@ import tempfile import contextlib import pyblish -import pyblish.cli import pyblish.plugin from pyblish.vendor import six diff --git a/openpype/tests/test_lib_restructuralization.py b/openpype/tests/test_lib_restructuralization.py index 669706d470..a91d65f7a8 100644 --- a/openpype/tests/test_lib_restructuralization.py +++ b/openpype/tests/test_lib_restructuralization.py @@ -18,8 +18,6 @@ def test_backward_compatibility(printer): from openpype.lib import get_ffprobe_streams - from openpype.hosts.fusion.lib import switch_item - from openpype.lib import source_hash from openpype.lib import run_subprocess diff --git a/openpype/tools/adobe_webserver/app.py b/openpype/tools/adobe_webserver/app.py index 3911baf7ac..49d61d3883 100644 --- a/openpype/tools/adobe_webserver/app.py +++ b/openpype/tools/adobe_webserver/app.py @@ -16,7 +16,7 @@ from wsrpc_aiohttp import ( WSRPCClient ) -from openpype.pipeline import legacy_io +from openpype.pipeline import get_global_context log = logging.getLogger(__name__) @@ -80,9 +80,10 @@ class WebServerTool: loop=asyncio.get_event_loop()) await client.connect() - project = legacy_io.Session["AVALON_PROJECT"] - asset = legacy_io.Session["AVALON_ASSET"] - task = legacy_io.Session["AVALON_TASK"] + context = get_global_context() + project = context["project_name"] + asset = context["asset_name"] + task = context["task_name"] log.info("Sending context change to {}-{}-{}".format(project, asset, task)) diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py index d46c238da1..d9c55f4a64 100644 --- a/openpype/tools/attribute_defs/widgets.py +++ b/openpype/tools/attribute_defs/widgets.py @@ -19,6 +19,7 @@ from openpype.tools.utils import ( CustomTextComboBox, FocusSpinBox, 
FocusDoubleSpinBox, + MultiSelectionComboBox, ) from openpype.widgets.nice_checkbox import NiceCheckbox @@ -343,6 +344,7 @@ class TextAttrWidget(_BaseAttrDefWidget): return self._input_widget.text() def set_value(self, value, multivalue=False): + block_signals = False if multivalue: set_value = set(value) if None in set_value: @@ -352,13 +354,18 @@ class TextAttrWidget(_BaseAttrDefWidget): if len(set_value) == 1: value = tuple(set_value)[0] else: + block_signals = True value = "< Multiselection >" if value != self.current_value(): + if block_signals: + self._input_widget.blockSignals(True) if self.multiline: self._input_widget.setPlainText(value) else: self._input_widget.setText(value) + if block_signals: + self._input_widget.blockSignals(False) class BoolAttrWidget(_BaseAttrDefWidget): @@ -391,7 +398,9 @@ class BoolAttrWidget(_BaseAttrDefWidget): set_value.add(self.attr_def.default) if len(set_value) > 1: + self._input_widget.blockSignals(True) self._input_widget.setCheckState(QtCore.Qt.PartiallyChecked) + self._input_widget.blockSignals(False) return value = tuple(set_value)[0] @@ -404,10 +413,19 @@ class EnumAttrWidget(_BaseAttrDefWidget): self._multivalue = False super(EnumAttrWidget, self).__init__(*args, **kwargs) + @property + def multiselection(self): + return self.attr_def.multiselection + def _ui_init(self): - input_widget = CustomTextComboBox(self) - combo_delegate = QtWidgets.QStyledItemDelegate(input_widget) - input_widget.setItemDelegate(combo_delegate) + if self.multiselection: + input_widget = MultiSelectionComboBox(self) + + else: + input_widget = CustomTextComboBox(self) + combo_delegate = QtWidgets.QStyledItemDelegate(input_widget) + input_widget.setItemDelegate(combo_delegate) + self._combo_delegate = combo_delegate if self.attr_def.tooltip: input_widget.setToolTip(self.attr_def.tooltip) @@ -419,9 +437,11 @@ class EnumAttrWidget(_BaseAttrDefWidget): if idx >= 0: input_widget.setCurrentIndex(idx) - input_widget.currentIndexChanged.connect(self._on_value_change) + if self.multiselection: + input_widget.value_changed.connect(self._on_value_change) + else: + input_widget.currentIndexChanged.connect(self._on_value_change) - self._combo_delegate = combo_delegate self._input_widget = input_widget self.main_layout.addWidget(input_widget, 0) @@ -434,17 +454,40 @@ class EnumAttrWidget(_BaseAttrDefWidget): self.value_changed.emit(new_value, self.attr_def.id) def current_value(self): + if self.multiselection: + return self._input_widget.value() idx = self._input_widget.currentIndex() return self._input_widget.itemData(idx) + def _multiselection_multivalue_prep(self, values): + final = None + multivalue = False + for value in values: + value = set(value) + if final is None: + final = value + elif multivalue or final != value: + final |= value + multivalue = True + return list(final), multivalue + def set_value(self, value, multivalue=False): if multivalue: - set_value = set(value) - if len(set_value) == 1: - multivalue = False - value = tuple(set_value)[0] + if self.multiselection: + value, multivalue = self._multiselection_multivalue_prep( + value) + else: + set_value = set(value) + if len(set_value) == 1: + multivalue = False + value = tuple(set_value)[0] - if not multivalue: + if self.multiselection: + self._input_widget.blockSignals(True) + self._input_widget.set_value(value) + self._input_widget.blockSignals(False) + + elif not multivalue: idx = self._input_widget.findData(value) cur_idx = self._input_widget.currentIndex() if idx != cur_idx and idx >= 0: diff --git 
a/openpype/tools/ayon_launcher/abstract.py b/openpype/tools/ayon_launcher/abstract.py
new file mode 100644
index 0000000000..95fe2b2c8d
--- /dev/null
+++ b/openpype/tools/ayon_launcher/abstract.py
@@ -0,0 +1,307 @@
+from abc import ABCMeta, abstractmethod
+
+import six
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractLauncherCommon(object):
+    @abstractmethod
+    def register_event_callback(self, topic, callback):
+        """Register event callback.
+
+        Listen for events with given topic.
+
+        Args:
+            topic (str): Name of topic.
+            callback (Callable): Callback that will be called when event
+                is triggered.
+        """
+
+        pass
+
+
+class AbstractLauncherBackend(AbstractLauncherCommon):
+    @abstractmethod
+    def emit_event(self, topic, data=None, source=None):
+        """Emit event.
+
+        Args:
+            topic (str): Event topic used for callbacks filtering.
+            data (Optional[dict[str, Any]]): Event data.
+            source (Optional[str]): Event source.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_project_settings(self, project_name):
+        """Project settings for given project.
+
+        Args:
+            project_name (Union[str, None]): Project name.
+
+        Returns:
+            dict[str, Any]: Project settings.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_project_entity(self, project_name):
+        """Get project entity by name.
+
+        Args:
+            project_name (str): Project name.
+
+        Returns:
+            dict[str, Any]: Project entity data.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_folder_entity(self, project_name, folder_id):
+        """Get folder entity by id.
+
+        Args:
+            project_name (str): Project name.
+            folder_id (str): Folder id.
+
+        Returns:
+            dict[str, Any]: Folder entity data.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_task_entity(self, project_name, task_id):
+        """Get task entity by id.
+
+        Args:
+            project_name (str): Project name.
+            task_id (str): Task id.
+
+        Returns:
+            dict[str, Any]: Task entity data.
+        """
+
+        pass
+
+
+class AbstractLauncherFrontEnd(AbstractLauncherCommon):
+    # Entity items for UI
+    @abstractmethod
+    def get_project_items(self, sender=None):
+        """Project items for all projects.
+
+        This function may trigger events 'projects.refresh.started' and
+        'projects.refresh.finished' which will contain 'sender' value in data.
+        That may help to avoid re-refresh of project items in UI elements.
+
+        Args:
+            sender (str): Who requested project items.
+
+        Returns:
+            list[ProjectItem]: Minimum possible information needed
+                for visualisation of the project list.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_folder_items(self, project_name, sender=None):
+        """Folder items to visualize project hierarchy.
+
+        This function may trigger events 'folders.refresh.started' and
+        'folders.refresh.finished' which will contain 'sender' value in data.
+        That may help to avoid re-refresh of folder items in UI elements.
+
+        Args:
+            project_name (str): Project name.
+            sender (str): Who requested folder items.
+
+        Returns:
+            list[FolderItem]: Minimum possible information needed
+                for visualisation of folder hierarchy.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_task_items(self, project_name, folder_id, sender=None):
+        """Task items.
+
+        This function may trigger events 'tasks.refresh.started' and
+        'tasks.refresh.finished' which will contain 'sender' value in data.
+        That may help to avoid re-refresh of task items in UI elements.
+
+        Args:
+            project_name (str): Project name.
+            folder_id (str): Folder id for which tasks are requested.
+            sender (str): Who requested task items.
+
+        Returns:
+            list[TaskItem]: Minimum possible information needed
+                for visualisation of tasks.
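+
+        Example:
+            Illustrative usage only, the context values and sender name
+            are made up:
+
+                items = controller.get_task_items(
+                    "my_project", folder_id, sender="tasks_widget"
+                )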
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_project_name(self):
+        """Selected project name.
+
+        Returns:
+            Union[str, None]: Selected project name.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_folder_id(self):
+        """Selected folder id.
+
+        Returns:
+            Union[str, None]: Selected folder id.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_task_id(self):
+        """Selected task id.
+
+        Returns:
+            Union[str, None]: Selected task id.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_task_name(self):
+        """Selected task name.
+
+        Returns:
+            Union[str, None]: Selected task name.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_context(self):
+        """Get whole selected context.
+
+        Example:
+            {
+                "project_name": self.get_selected_project_name(),
+                "folder_id": self.get_selected_folder_id(),
+                "task_id": self.get_selected_task_id(),
+                "task_name": self.get_selected_task_name(),
+            }
+
+        Returns:
+            dict[str, Union[str, None]]: Selected context.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_project(self, project_name):
+        """Change selected project.
+
+        Args:
+            project_name (Union[str, None]): Project name or None if no
+                project is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_folder(self, folder_id):
+        """Change selected folder.
+
+        Args:
+            folder_id (Union[str, None]): Folder id or None if no folder
+                is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_task(self, task_id, task_name):
+        """Change selected task.
+
+        Args:
+            task_id (Union[str, None]): Task id or None if no task
+                is selected.
+            task_name (Union[str, None]): Task name or None if no task
+                is selected.
+        """
+
+        pass
+
+    # Actions
+    @abstractmethod
+    def get_action_items(self, project_name, folder_id, task_id):
+        """Get action items for given context.
+
+        Args:
+            project_name (Union[str, None]): Project name.
+            folder_id (Union[str, None]): Folder id.
+            task_id (Union[str, None]): Task id.
+
+        Returns:
+            list[ActionItem]: List of action items that should be shown
+                for given context.
+        """
+
+        pass
+
+    @abstractmethod
+    def trigger_action(self, project_name, folder_id, task_id, action_id):
+        """Trigger action on given context.
+
+        Args:
+            project_name (Union[str, None]): Project name.
+            folder_id (Union[str, None]): Folder id.
+            task_id (Union[str, None]): Task id.
+            action_id (str): Action identifier.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_application_force_not_open_workfile(
+        self, project_name, folder_id, task_id, action_ids, enabled
+    ):
+        """Set the force-not-open-last-workfile flag for application actions.
+
+        Args:
+            project_name (Union[str, None]): Project name.
+            folder_id (Union[str, None]): Folder id.
+            task_id (Union[str, None]): Task id.
+            action_ids (Iterable[str]): Action identifiers.
+            enabled (bool): New value of force not open workfile.
+        """
+
+        pass
+
+    @abstractmethod
+    def refresh(self):
+        """Refresh everything: models, UI, etc.
+
+        Triggers 'controller.refresh.started' event at the beginning and
+        'controller.refresh.finished' at the end.
+        """
+
+        pass
+
+    @abstractmethod
+    def refresh_actions(self):
+        """Refresh actions and all related data.
+
+        Triggers 'controller.refresh.actions.started' event at the beginning
+        and 'controller.refresh.actions.finished' at the end.
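+
+        Example:
+            Sketch of a UI reacting to the refresh, the callback name is
+            illustrative:
+
+                controller.register_event_callback(
+                    "controller.refresh.actions.finished", on_refresh)
+                controller.refresh_actions()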
+        """
+
+        pass
diff --git a/openpype/tools/ayon_launcher/control.py b/openpype/tools/ayon_launcher/control.py
new file mode 100644
index 0000000000..36c0536422
--- /dev/null
+++ b/openpype/tools/ayon_launcher/control.py
@@ -0,0 +1,161 @@
+from openpype.lib import Logger
+from openpype.lib.events import QueuedEventSystem
+from openpype.settings import get_project_settings
+from openpype.tools.ayon_utils.models import ProjectsModel, HierarchyModel

+from .abstract import AbstractLauncherFrontEnd, AbstractLauncherBackend
+from .models import LauncherSelectionModel, ActionsModel
+
+
+class BaseLauncherController(
+    AbstractLauncherFrontEnd, AbstractLauncherBackend
+):
+    def __init__(self):
+        self._project_settings = {}
+        self._event_system = None
+        self._log = None
+
+        self._selection_model = LauncherSelectionModel(self)
+        self._projects_model = ProjectsModel(self)
+        self._hierarchy_model = HierarchyModel(self)
+        self._actions_model = ActionsModel(self)
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = Logger.get_logger(self.__class__.__name__)
+        return self._log
+
+    @property
+    def event_system(self):
+        """Inner event system for launcher tool controller.
+
+        Is used for communication with UI. Event system is created on demand.
+
+        Returns:
+            QueuedEventSystem: Event system which can trigger callbacks
+                for topics.
+        """
+
+        if self._event_system is None:
+            self._event_system = QueuedEventSystem()
+        return self._event_system
+
+    # ---------------------------------
+    # Implementation of abstract methods
+    # ---------------------------------
+    # Events system
+    def emit_event(self, topic, data=None, source=None):
+        """Use implemented event system to trigger event."""
+
+        if data is None:
+            data = {}
+        self.event_system.emit(topic, data, source)
+
+    def register_event_callback(self, topic, callback):
+        self.event_system.add_callback(topic, callback)
+
+    # Entity items for UI
+    def get_project_items(self, sender=None):
+        return self._projects_model.get_project_items(sender)
+
+    def get_folder_items(self, project_name, sender=None):
+        return self._hierarchy_model.get_folder_items(project_name, sender)
+
+    def get_task_items(self, project_name, folder_id, sender=None):
+        return self._hierarchy_model.get_task_items(
+            project_name, folder_id, sender)
+
+    # Project settings for application actions
+    def get_project_settings(self, project_name):
+        if project_name in self._project_settings:
+            return self._project_settings[project_name]
+        settings = get_project_settings(project_name)
+        self._project_settings[project_name] = settings
+        return settings
+
+    # Entity getters for backend
+    def get_project_entity(self, project_name):
+        return self._projects_model.get_project_entity(project_name)
+
+    def get_folder_entity(self, project_name, folder_id):
+        return self._hierarchy_model.get_folder_entity(
+            project_name, folder_id)
+
+    def get_task_entity(self, project_name, task_id):
+        return self._hierarchy_model.get_task_entity(project_name, task_id)
+
+    # Selection methods
+    def get_selected_project_name(self):
+        return self._selection_model.get_selected_project_name()
+
+    def set_selected_project(self, project_name):
+        self._selection_model.set_selected_project(project_name)
+
+    def get_selected_folder_id(self):
+        return self._selection_model.get_selected_folder_id()
+
+    def set_selected_folder(self, folder_id):
+        self._selection_model.set_selected_folder(folder_id)
+
+    def get_selected_task_id(self):
+        return self._selection_model.get_selected_task_id()
+
+    def get_selected_task_name(self):
+        
return self._selection_model.get_selected_task_name() + + def set_selected_task(self, task_id, task_name): + self._selection_model.set_selected_task(task_id, task_name) + + def get_selected_context(self): + return { + "project_name": self.get_selected_project_name(), + "folder_id": self.get_selected_folder_id(), + "task_id": self.get_selected_task_id(), + "task_name": self.get_selected_task_name(), + } + + # Actions + def get_action_items(self, project_name, folder_id, task_id): + return self._actions_model.get_action_items( + project_name, folder_id, task_id) + + def set_application_force_not_open_workfile( + self, project_name, folder_id, task_id, action_ids, enabled + ): + self._actions_model.set_application_force_not_open_workfile( + project_name, folder_id, task_id, action_ids, enabled + ) + + def trigger_action(self, project_name, folder_id, task_id, identifier): + self._actions_model.trigger_action( + project_name, folder_id, task_id, identifier) + + # General methods + def refresh(self): + self._emit_event("controller.refresh.started") + + self._project_settings = {} + + self._projects_model.reset() + self._hierarchy_model.reset() + + self._actions_model.refresh() + self._projects_model.refresh() + + self._emit_event("controller.refresh.finished") + + def refresh_actions(self): + self._emit_event("controller.refresh.actions.started") + + # Refresh project settings (used for actions discovery) + self._project_settings = {} + # Refresh projects - they define applications + self._projects_model.reset() + # Refresh actions + self._actions_model.refresh() + + self._emit_event("controller.refresh.actions.finished") + + def _emit_event(self, topic, data=None): + self.emit_event(topic, data, "controller") diff --git a/openpype/tools/ayon_launcher/models/__init__.py b/openpype/tools/ayon_launcher/models/__init__.py new file mode 100644 index 0000000000..1bc60c85f0 --- /dev/null +++ b/openpype/tools/ayon_launcher/models/__init__.py @@ -0,0 +1,8 @@ +from .actions import ActionsModel +from .selection import LauncherSelectionModel + + +__all__ = ( + "ActionsModel", + "LauncherSelectionModel", +) diff --git a/openpype/tools/ayon_launcher/models/actions.py b/openpype/tools/ayon_launcher/models/actions.py new file mode 100644 index 0000000000..93ec115734 --- /dev/null +++ b/openpype/tools/ayon_launcher/models/actions.py @@ -0,0 +1,509 @@ +import os + +from openpype import resources +from openpype.lib import Logger, OpenPypeSettingsRegistry +from openpype.pipeline.actions import ( + discover_launcher_actions, + LauncherAction, +) + + +# class Action: +# def __init__(self, label, icon=None, identifier=None): +# self._label = label +# self._icon = icon +# self._callbacks = [] +# self._identifier = identifier or uuid.uuid4().hex +# self._checked = True +# self._checkable = False +# +# def set_checked(self, checked): +# self._checked = checked +# +# def set_checkable(self, checkable): +# self._checkable = checkable +# +# def set_label(self, label): +# self._label = label +# +# def add_callback(self, callback): +# self._callbacks = callback +# +# +# class Menu: +# def __init__(self, label, icon=None): +# self.label = label +# self.icon = icon +# self._actions = [] +# +# def add_action(self, action): +# self._actions.append(action) + + +class ApplicationAction(LauncherAction): + """Action to launch an application. + + Application action based on 'ApplicationManager' system. + + Handling of applications in launcher is not ideal and should be completely + redone from scratch. 
This is just a temporary solution to keep backwards
+    compatibility with the OpenPype launcher.
+
+    Todos:
+        Move handling of errors to frontend.
+    """
+
+    # Application object
+    application = None
+    # Action attributes
+    name = None
+    label = None
+    label_variant = None
+    group = None
+    icon = None
+    color = None
+    order = 0
+    data = {}
+    project_settings = {}
+    project_entities = {}
+
+    _log = None
+    required_session_keys = (
+        "AVALON_PROJECT",
+        "AVALON_ASSET",
+        "AVALON_TASK"
+    )
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = Logger.get_logger(self.__class__.__name__)
+        return self._log
+
+    def is_compatible(self, session):
+        for key in self.required_session_keys:
+            if not session.get(key):
+                return False
+
+        project_name = session["AVALON_PROJECT"]
+        project_entity = self.project_entities[project_name]
+        apps = project_entity["attrib"].get("applications")
+        if not apps or self.application.full_name not in apps:
+            return False
+
+        project_settings = self.project_settings[project_name]
+        only_available = project_settings["applications"]["only_available"]
+        if only_available and not self.application.find_executable():
+            return False
+        return True
+
+    def _show_message_box(self, title, message, details=None):
+        from qtpy import QtWidgets, QtGui
+        from openpype import style
+
+        dialog = QtWidgets.QMessageBox()
+        icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
+        dialog.setWindowIcon(icon)
+        dialog.setStyleSheet(style.load_stylesheet())
+        dialog.setWindowTitle(title)
+        dialog.setText(message)
+        if details:
+            dialog.setDetailedText(details)
+        dialog.exec_()
+
+    def process(self, session, **kwargs):
+        """Process the full application action."""
+
+        from openpype.lib import (
+            ApplictionExecutableNotFound,
+            ApplicationLaunchFailed,
+        )
+
+        project_name = session["AVALON_PROJECT"]
+        asset_name = session["AVALON_ASSET"]
+        task_name = session["AVALON_TASK"]
+        try:
+            self.application.launch(
+                project_name=project_name,
+                asset_name=asset_name,
+                task_name=task_name,
+                **self.data
+            )
+
+        except ApplictionExecutableNotFound as exc:
+            details = exc.details
+            msg = exc.msg
+            log_msg = str(msg)
+            if details:
+                log_msg += "\n" + details
+            self.log.warning(log_msg)
+            self._show_message_box(
+                "Application executable not found", msg, details
+            )
+
+        except ApplicationLaunchFailed as exc:
+            msg = str(exc)
+            self.log.warning(msg, exc_info=True)
+            self._show_message_box("Application launch failed", msg)
+
+
+class ActionItem:
+    """Item representing single action to trigger.
+
+    Todos:
+        Get rid of application specific logic.
+
+    Args:
+        identifier (str): Unique identifier of action item.
+        label (str): Action label.
+        variant_label (Union[str, None]): Variant label; the full label is
+            the label and variant label joined with a space. Actions with
+            the same 'label' and a set 'variant_label' are grouped under
+            a single action.
+        icon (dict[str, str]): Icon definition.
+        order (int): Action ordering.
+        is_application (bool): Whether the action is an application action.
+        force_not_open_workfile (bool): Whether to skip opening the last
+            workfile. Application specific.
+        full_label (Optional[str]): Full label, if not set it is generated
+            from 'label' and 'variant_label'.
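+
+    Example:
+        Illustrative item only, all values are made up:
+
+            item = ActionItem(
+                identifier="application.maya/2023",
+                label="Maya",
+                variant_label="2023",
+                icon={"type": "awesome-font", "name": "fa.cube",
+                      "color": "white"},
+                order=0,
+                is_application=True,
+                force_not_open_workfile=False,
+            )
+            item.full_label  # -> "Maya 2023"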
+    """
+
+    def __init__(
+        self,
+        identifier,
+        label,
+        variant_label,
+        icon,
+        order,
+        is_application,
+        force_not_open_workfile,
+        full_label=None
+    ):
+        self.identifier = identifier
+        self.label = label
+        self.variant_label = variant_label
+        self.icon = icon
+        self.order = order
+        self.is_application = is_application
+        self.force_not_open_workfile = force_not_open_workfile
+        self._full_label = full_label
+
+    def copy(self):
+        return self.from_data(self.to_data())
+
+    @property
+    def full_label(self):
+        if self._full_label is None:
+            if self.variant_label:
+                self._full_label = " ".join([self.label, self.variant_label])
+            else:
+                self._full_label = self.label
+        return self._full_label
+
+    def to_data(self):
+        return {
+            "identifier": self.identifier,
+            "label": self.label,
+            "variant_label": self.variant_label,
+            "icon": self.icon,
+            "order": self.order,
+            "is_application": self.is_application,
+            "force_not_open_workfile": self.force_not_open_workfile,
+            "full_label": self._full_label,
+        }
+
+    @classmethod
+    def from_data(cls, data):
+        return cls(**data)
+
+
+def get_action_icon(action):
+    """Get action icon info.
+
+    Args:
+        action (LauncherAction): Action instance.
+
+    Returns:
+        dict[str, str]: Icon info.
+    """
+
+    icon = action.icon
+    if not icon:
+        return {
+            "type": "awesome-font",
+            "name": "fa.cube",
+            "color": "white"
+        }
+
+    if isinstance(icon, dict):
+        return icon
+
+    icon_path = resources.get_resource(icon)
+    if not os.path.exists(icon_path):
+        try:
+            icon_path = icon.format(resources.RESOURCES_DIR)
+        except Exception:
+            pass
+
+    if os.path.exists(icon_path):
+        return {
+            "type": "path",
+            "path": icon_path,
+        }
+
+    return {
+        "type": "awesome-font",
+        "name": icon,
+        "color": action.color or "white"
+    }
+
+
+class ActionsModel:
+    """Actions model.
+
+    Args:
+        controller (AbstractLauncherBackend): Controller instance.
+    """
+
+    _not_open_workfile_reg_key = "force_not_open_workfile"
+
+    def __init__(self, controller):
+        self._controller = controller
+
+        self._log = None
+
+        self._discovered_actions = None
+        self._actions = None
+        self._action_items = {}
+
+        self._launcher_tool_reg = OpenPypeSettingsRegistry("launcher_tool")
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = Logger.get_logger(self.__class__.__name__)
+        return self._log
+
+    def refresh(self):
+        self._discovered_actions = None
+        self._actions = None
+        self._action_items = {}
+
+        self._controller.emit_event("actions.refresh.started")
+        self._get_action_objects()
+        self._controller.emit_event("actions.refresh.finished")
+
+    def get_action_items(self, project_name, folder_id, task_id):
+        """Get actions for project.
+
+        Args:
+            project_name (Union[str, None]): Project name.
+            folder_id (Union[str, None]): Folder id.
+            task_id (Union[str, None]): Task id.
+
+        Returns:
+            list[ActionItem]: List of actions.
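+
+        Example:
+            Sketch of iterating compatible actions, the context values
+            are hypothetical:
+
+                for item in model.get_action_items(
+                    "my_project", folder_id, task_id
+                ):
+                    print(item.full_label)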
+ """ + + not_open_workfile_actions = self._get_no_last_workfile_for_context( + project_name, folder_id, task_id) + session = self._prepare_session(project_name, folder_id, task_id) + output = [] + action_items = self._get_action_items(project_name) + for identifier, action in self._get_action_objects().items(): + if not action.is_compatible(session): + continue + + action_item = action_items[identifier] + # Handling of 'force_not_open_workfile' for applications + if action_item.is_application: + action_item = action_item.copy() + action_item.force_not_open_workfile = ( + not_open_workfile_actions.get(identifier, False) + ) + + output.append(action_item) + return output + + def set_application_force_not_open_workfile( + self, project_name, folder_id, task_id, action_ids, enabled + ): + no_workfile_reg_data = self._get_no_last_workfile_reg_data() + project_data = no_workfile_reg_data.setdefault(project_name, {}) + folder_data = project_data.setdefault(folder_id, {}) + task_data = folder_data.setdefault(task_id, {}) + for action_id in action_ids: + task_data[action_id] = enabled + self._launcher_tool_reg.set_item( + self._not_open_workfile_reg_key, no_workfile_reg_data + ) + + def trigger_action(self, project_name, folder_id, task_id, identifier): + session = self._prepare_session(project_name, folder_id, task_id) + failed = False + error_message = None + action_label = identifier + action_items = self._get_action_items(project_name) + try: + action = self._actions[identifier] + action_item = action_items[identifier] + action_label = action_item.full_label + self._controller.emit_event( + "action.trigger.started", + { + "identifier": identifier, + "full_label": action_label, + } + ) + if isinstance(action, ApplicationAction): + per_action = self._get_no_last_workfile_for_context( + project_name, folder_id, task_id + ) + force_not_open_workfile = per_action.get(identifier, False) + if force_not_open_workfile: + action.data["start_last_workfile"] = False + else: + action.data.pop("start_last_workfile", None) + action.process(session) + except Exception as exc: + self.log.warning("Action trigger failed.", exc_info=True) + failed = True + error_message = str(exc) + + self._controller.emit_event( + "action.trigger.finished", + { + "identifier": identifier, + "failed": failed, + "error_message": error_message, + "full_label": action_label, + } + ) + + def _get_no_last_workfile_reg_data(self): + try: + no_workfile_reg_data = self._launcher_tool_reg.get_item( + self._not_open_workfile_reg_key) + except ValueError: + no_workfile_reg_data = {} + self._launcher_tool_reg.set_item( + self._not_open_workfile_reg_key, no_workfile_reg_data) + return no_workfile_reg_data + + def _get_no_last_workfile_for_context( + self, project_name, folder_id, task_id + ): + not_open_workfile_reg_data = self._get_no_last_workfile_reg_data() + return ( + not_open_workfile_reg_data + .get(project_name, {}) + .get(folder_id, {}) + .get(task_id, {}) + ) + + def _prepare_session(self, project_name, folder_id, task_id): + folder_name = None + if folder_id: + folder = self._controller.get_folder_entity( + project_name, folder_id) + if folder: + folder_name = folder["name"] + + task_name = None + if task_id: + task = self._controller.get_task_entity(project_name, task_id) + if task: + task_name = task["name"] + + return { + "AVALON_PROJECT": project_name, + "AVALON_ASSET": folder_name, + "AVALON_TASK": task_name, + } + + def _get_discovered_action_classes(self): + if self._discovered_actions is None: + self._discovered_actions = ( 
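+                # Cached on first access; generic launcher actions are
+                # combined with dynamically created application actions.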
+ discover_launcher_actions() + + self._get_applications_action_classes() + ) + return self._discovered_actions + + def _get_action_objects(self): + if self._actions is None: + actions = {} + for cls in self._get_discovered_action_classes(): + obj = cls() + identifier = getattr(obj, "identifier", None) + if identifier is None: + identifier = cls.__name__ + actions[identifier] = obj + self._actions = actions + return self._actions + + def _get_action_items(self, project_name): + action_items = self._action_items.get(project_name) + if action_items is not None: + return action_items + + project_entity = None + if project_name: + project_entity = self._controller.get_project_entity(project_name) + project_settings = self._controller.get_project_settings(project_name) + + action_items = {} + for identifier, action in self._get_action_objects().items(): + is_application = isinstance(action, ApplicationAction) + if is_application: + action.project_entities[project_name] = project_entity + action.project_settings[project_name] = project_settings + label = action.label or identifier + variant_label = getattr(action, "label_variant", None) + icon = get_action_icon(action) + item = ActionItem( + identifier, + label, + variant_label, + icon, + action.order, + is_application, + False + ) + action_items[identifier] = item + self._action_items[project_name] = action_items + return action_items + + def _get_applications_action_classes(self): + from openpype.lib.applications import ( + CUSTOM_LAUNCH_APP_GROUPS, + ApplicationManager, + ) + + actions = [] + + manager = ApplicationManager() + for full_name, application in manager.applications.items(): + if ( + application.group.name in CUSTOM_LAUNCH_APP_GROUPS + or not application.enabled + ): + continue + + action = type( + "app_{}".format(full_name), + (ApplicationAction,), + { + "identifier": "application.{}".format(full_name), + "application": application, + "name": application.name, + "label": application.group.label, + "label_variant": application.label, + "group": None, + "icon": application.icon, + "color": getattr(application, "color", None), + "order": getattr(application, "order", None) or 0, + "data": {} + } + ) + actions.append(action) + return actions diff --git a/openpype/tools/ayon_launcher/models/selection.py b/openpype/tools/ayon_launcher/models/selection.py new file mode 100644 index 0000000000..b156d2084c --- /dev/null +++ b/openpype/tools/ayon_launcher/models/selection.py @@ -0,0 +1,72 @@ +class LauncherSelectionModel(object): + """Model handling selection changes. 
+ + Triggering events: + - "selection.project.changed" + - "selection.folder.changed" + - "selection.task.changed" + """ + + event_source = "launcher.selection.model" + + def __init__(self, controller): + self._controller = controller + + self._project_name = None + self._folder_id = None + self._task_name = None + self._task_id = None + + def get_selected_project_name(self): + return self._project_name + + def set_selected_project(self, project_name): + if project_name == self._project_name: + return + + self._project_name = project_name + self._controller.emit_event( + "selection.project.changed", + {"project_name": project_name}, + self.event_source + ) + + def get_selected_folder_id(self): + return self._folder_id + + def set_selected_folder(self, folder_id): + if folder_id == self._folder_id: + return + + self._folder_id = folder_id + self._controller.emit_event( + "selection.folder.changed", + { + "project_name": self._project_name, + "folder_id": folder_id, + }, + self.event_source + ) + + def get_selected_task_name(self): + return self._task_name + + def get_selected_task_id(self): + return self._task_id + + def set_selected_task(self, task_id, task_name): + if task_id == self._task_id: + return + + self._task_name = task_name + self._task_id = task_id + self._controller.emit_event( + "selection.task.changed", + { + "project_name": self._project_name, + "folder_id": self._folder_id, + "task_name": task_name, + "task_id": task_id, + }, + self.event_source + ) diff --git a/openpype/tools/ayon_launcher/ui/__init__.py b/openpype/tools/ayon_launcher/ui/__init__.py new file mode 100644 index 0000000000..da30c84656 --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/__init__.py @@ -0,0 +1,6 @@ +from .window import LauncherWindow + + +__all__ = ( + "LauncherWindow", +) diff --git a/openpype/tools/ayon_launcher/ui/actions_widget.py b/openpype/tools/ayon_launcher/ui/actions_widget.py new file mode 100644 index 0000000000..2a1a06695d --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/actions_widget.py @@ -0,0 +1,476 @@ +import time +import collections + +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.flickcharm import FlickCharm +from openpype.tools.ayon_utils.widgets import get_qt_icon + +from .resources import get_options_image_path + +ANIMATION_LEN = 7 + +ACTION_ID_ROLE = QtCore.Qt.UserRole + 1 +ACTION_IS_APPLICATION_ROLE = QtCore.Qt.UserRole + 2 +ACTION_IS_GROUP_ROLE = QtCore.Qt.UserRole + 3 +ACTION_SORT_ROLE = QtCore.Qt.UserRole + 4 +ANIMATION_START_ROLE = QtCore.Qt.UserRole + 5 +ANIMATION_STATE_ROLE = QtCore.Qt.UserRole + 6 +FORCE_NOT_OPEN_WORKFILE_ROLE = QtCore.Qt.UserRole + 7 + + +def _variant_label_sort_getter(action_item): + """Get variant label value for sorting. + + Make sure the output value is a string. + + Args: + action_item (ActionItem): Action item. + + Returns: + str: Variant label or empty string. + """ + + return action_item.variant_label or "" + + +class ActionsQtModel(QtGui.QStandardItemModel): + """Qt model for actions. + + Args: + controller (AbstractLauncherFrontEnd): Controller instance. 
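+
+    Example:
+        Illustrative read of the custom item roles:
+
+            index = model.index(0, 0)
+            action_id = index.data(ACTION_ID_ROLE)
+            is_group = index.data(ACTION_IS_GROUP_ROLE)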
+ """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(ActionsQtModel, self).__init__() + + controller.register_event_callback( + "selection.project.changed", + self._on_selection_project_changed, + ) + controller.register_event_callback( + "selection.folder.changed", + self._on_selection_folder_changed, + ) + controller.register_event_callback( + "selection.task.changed", + self._on_selection_task_changed, + ) + + self._controller = controller + + self._items_by_id = {} + self._action_items_by_id = {} + self._groups_by_id = {} + + self._selected_project_name = None + self._selected_folder_id = None + self._selected_task_id = None + + def get_selected_project_name(self): + return self._selected_project_name + + def get_selected_folder_id(self): + return self._selected_folder_id + + def get_selected_task_id(self): + return self._selected_task_id + + def get_group_items(self, action_id): + return self._groups_by_id[action_id] + + def get_item_by_id(self, action_id): + return self._items_by_id.get(action_id) + + def get_action_item_by_id(self, action_id): + return self._action_items_by_id.get(action_id) + + def _clear_items(self): + self._items_by_id = {} + self._action_items_by_id = {} + self._groups_by_id = {} + root = self.invisibleRootItem() + root.removeRows(0, root.rowCount()) + + def refresh(self): + items = self._controller.get_action_items( + self._selected_project_name, + self._selected_folder_id, + self._selected_task_id, + ) + if not items: + self._clear_items() + self.refreshed.emit() + return + + root_item = self.invisibleRootItem() + + all_action_items_info = [] + items_by_label = collections.defaultdict(list) + for item in items: + if not item.variant_label: + all_action_items_info.append((item, False)) + else: + items_by_label[item.label].append(item) + + groups_by_id = {} + for action_items in items_by_label.values(): + action_items.sort(key=_variant_label_sort_getter, reverse=True) + first_item = next(iter(action_items)) + all_action_items_info.append((first_item, len(action_items) > 1)) + groups_by_id[first_item.identifier] = action_items + + new_items = [] + items_by_id = {} + action_items_by_id = {} + for action_item_info in all_action_items_info: + action_item, is_group = action_item_info + icon = get_qt_icon(action_item.icon) + if is_group: + label = action_item.label + else: + label = action_item.full_label + + item = self._items_by_id.get(action_item.identifier) + if item is None: + item = QtGui.QStandardItem() + item.setData(action_item.identifier, ACTION_ID_ROLE) + new_items.append(item) + + item.setFlags(QtCore.Qt.ItemIsEnabled) + item.setData(label, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(is_group, ACTION_IS_GROUP_ROLE) + item.setData(action_item.order, ACTION_SORT_ROLE) + item.setData( + action_item.is_application, ACTION_IS_APPLICATION_ROLE) + item.setData( + action_item.force_not_open_workfile, + FORCE_NOT_OPEN_WORKFILE_ROLE) + items_by_id[action_item.identifier] = item + action_items_by_id[action_item.identifier] = action_item + + if new_items: + root_item.appendRows(new_items) + + to_remove = set(self._items_by_id.keys()) - set(items_by_id.keys()) + for identifier in to_remove: + item = self._items_by_id.pop(identifier) + self._action_items_by_id.pop(identifier) + root_item.removeRow(item.row()) + + self._groups_by_id = groups_by_id + self._items_by_id = items_by_id + self._action_items_by_id = action_items_by_id + self.refreshed.emit() + + def _on_selection_project_changed(self, event): 
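+        # Project change invalidates the folder and task selection,
+        # so both are cleared before action items are re-queried.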
+ self._selected_project_name = event["project_name"] + self._selected_folder_id = None + self._selected_task_id = None + self.refresh() + + def _on_selection_folder_changed(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_id = event["folder_id"] + self._selected_task_id = None + self.refresh() + + def _on_selection_task_changed(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_id = event["folder_id"] + self._selected_task_id = event["task_id"] + self.refresh() + + +class ActionDelegate(QtWidgets.QStyledItemDelegate): + _cached_extender = {} + + def __init__(self, *args, **kwargs): + super(ActionDelegate, self).__init__(*args, **kwargs) + self._anim_start_color = QtGui.QColor(178, 255, 246) + self._anim_end_color = QtGui.QColor(5, 44, 50) + + def _draw_animation(self, painter, option, index): + grid_size = option.widget.gridSize() + x_offset = int( + (grid_size.width() / 2) + - (option.rect.width() / 2) + ) + item_x = option.rect.x() - x_offset + rect_offset = grid_size.width() / 20 + size = grid_size.width() - (rect_offset * 2) + anim_rect = QtCore.QRect( + item_x + rect_offset, + option.rect.y() + rect_offset, + size, + size + ) + + painter.save() + + painter.setBrush(QtCore.Qt.transparent) + + gradient = QtGui.QConicalGradient() + gradient.setCenter(QtCore.QPointF(anim_rect.center())) + gradient.setColorAt(0, self._anim_start_color) + gradient.setColorAt(1, self._anim_end_color) + + time_diff = time.time() - index.data(ANIMATION_START_ROLE) + + # Repeat 4 times + part_anim = 2.5 + part_time = time_diff % part_anim + offset = (part_time / part_anim) * 360 + angle = (offset + 90) % 360 + + gradient.setAngle(-angle) + + pen = QtGui.QPen(QtGui.QBrush(gradient), rect_offset) + pen.setCapStyle(QtCore.Qt.RoundCap) + painter.setPen(pen) + painter.drawArc( + anim_rect, + -16 * (angle + 10), + -16 * offset + ) + + painter.restore() + + @classmethod + def _get_extender_pixmap(cls, size): + pix = cls._cached_extender.get(size) + if pix is not None: + return pix + pix = QtGui.QPixmap(get_options_image_path()).scaled( + size, size, + QtCore.Qt.KeepAspectRatio, + QtCore.Qt.SmoothTransformation + ) + cls._cached_extender[size] = pix + return pix + + def paint(self, painter, option, index): + painter.setRenderHints( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + + if index.data(ANIMATION_STATE_ROLE): + self._draw_animation(painter, option, index) + + super(ActionDelegate, self).paint(painter, option, index) + + if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE): + rect = QtCore.QRectF( + option.rect.x(), option.rect.height(), 5, 5) + painter.setPen(QtCore.Qt.NoPen) + painter.setBrush(QtGui.QColor(200, 0, 0)) + painter.drawEllipse(rect) + + if not index.data(ACTION_IS_GROUP_ROLE): + return + + grid_size = option.widget.gridSize() + x_offset = int( + (grid_size.width() / 2) + - (option.rect.width() / 2) + ) + item_x = option.rect.x() - x_offset + + tenth_size = int(grid_size.width() / 10) + extender_size = int(tenth_size * 2.4) + + extender_x = item_x + tenth_size + extender_y = option.rect.y() + tenth_size + + pix = self._get_extender_pixmap(extender_size) + painter.drawPixmap(extender_x, extender_y, pix) + + +class ActionsWidget(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(ActionsWidget, self).__init__(parent) + + self._controller = controller + + view = QtWidgets.QListView(self) + view.setProperty("mode", "icon") + view.setObjectName("IconView") + 
view.setViewMode(QtWidgets.QListView.IconMode)
+        view.setResizeMode(QtWidgets.QListView.Adjust)
+        view.setSelectionMode(QtWidgets.QListView.NoSelection)
+        view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+        view.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
+        view.setWrapping(True)
+        view.setGridSize(QtCore.QSize(70, 75))
+        view.setIconSize(QtCore.QSize(30, 30))
+        view.setSpacing(0)
+        view.setWordWrap(True)
+
+        # Make view flickable
+        flick = FlickCharm(parent=view)
+        flick.activateOn(view)
+
+        model = ActionsQtModel(controller)
+
+        proxy_model = QtCore.QSortFilterProxyModel()
+        proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+        proxy_model.setSortRole(ACTION_SORT_ROLE)
+
+        proxy_model.setSourceModel(model)
+        view.setModel(proxy_model)
+
+        delegate = ActionDelegate(self)
+        view.setItemDelegate(delegate)
+
+        layout = QtWidgets.QHBoxLayout(self)
+        layout.setContentsMargins(0, 0, 0, 0)
+        layout.addWidget(view)
+
+        animation_timer = QtCore.QTimer()
+        animation_timer.setInterval(40)
+        animation_timer.timeout.connect(self._on_animation)
+
+        view.clicked.connect(self._on_clicked)
+        view.customContextMenuRequested.connect(self._on_context_menu)
+        model.refreshed.connect(self._on_model_refresh)
+
+        self._animated_items = set()
+        self._animation_timer = animation_timer
+
+        self._context_menu = None
+
+        self._flick = flick
+        self._view = view
+        self._model = model
+        self._proxy_model = proxy_model
+
+        self._set_row_height(1)
+
+    def refresh(self):
+        self._model.refresh()
+
+    def _set_row_height(self, rows):
+        self.setMinimumHeight(rows * 75)
+
+    def _on_model_refresh(self):
+        self._proxy_model.sort(0)
+
+    def _on_animation(self):
+        time_now = time.time()
+        for action_id in tuple(self._animated_items):
+            item = self._model.get_item_by_id(action_id)
+            if item is None:
+                self._animated_items.discard(action_id)
+                continue
+
+            start_time = item.data(ANIMATION_START_ROLE)
+            if start_time is None or (time_now - start_time) > ANIMATION_LEN:
+                item.setData(0, ANIMATION_STATE_ROLE)
+                self._animated_items.discard(action_id)
+
+        if not self._animated_items:
+            self._animation_timer.stop()
+
+        self.update()
+
+    def _start_animation(self, index):
+        # Offset refresh timeout
+        model_index = self._proxy_model.mapToSource(index)
+        if not model_index.isValid():
+            return
+        action_id = model_index.data(ACTION_ID_ROLE)
+        self._model.setData(model_index, time.time(), ANIMATION_START_ROLE)
+        self._model.setData(model_index, 1, ANIMATION_STATE_ROLE)
+        self._animated_items.add(action_id)
+        self._animation_timer.start()
+
+    def _on_context_menu(self, point):
+        """Create menu to force skipping of opening the last workfile."""
+        index = self._view.indexAt(point)
+        if not index.isValid():
+            return
+
+        if not index.data(ACTION_IS_APPLICATION_ROLE):
+            return
+
+        menu = QtWidgets.QMenu(self._view)
+        checkbox = QtWidgets.QCheckBox(
+            "Skip opening last workfile.", menu)
+        if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE):
+            checkbox.setChecked(True)
+
+        action_id = index.data(ACTION_ID_ROLE)
+        is_group = index.data(ACTION_IS_GROUP_ROLE)
+        if is_group:
+            action_items = self._model.get_group_items(action_id)
+        else:
+            action_items = [self._model.get_action_item_by_id(action_id)]
+        action_ids = {action_item.identifier for action_item in action_items}
+        checkbox.stateChanged.connect(
+            lambda: self._on_checkbox_changed(
+                action_ids, checkbox.isChecked()
+            )
+        )
+        action = QtWidgets.QWidgetAction(menu)
+        action.setDefaultWidget(checkbox)
+
+        menu.addAction(action)
+
+        self._context_menu = menu
+        global_point = 
self.mapToGlobal(point) + menu.exec_(global_point) + self._context_menu = None + + def _on_checkbox_changed(self, action_ids, is_checked): + if self._context_menu is not None: + self._context_menu.close() + + project_name = self._model.get_selected_project_name() + folder_id = self._model.get_selected_folder_id() + task_id = self._model.get_selected_task_id() + self._controller.set_application_force_not_open_workfile( + project_name, folder_id, task_id, action_ids, is_checked) + self._model.refresh() + + def _on_clicked(self, index): + if not index or not index.isValid(): + return + + is_group = index.data(ACTION_IS_GROUP_ROLE) + action_id = index.data(ACTION_ID_ROLE) + + project_name = self._model.get_selected_project_name() + folder_id = self._model.get_selected_folder_id() + task_id = self._model.get_selected_task_id() + + if not is_group: + self._controller.trigger_action( + project_name, folder_id, task_id, action_id + ) + self._start_animation(index) + return + + action_items = self._model.get_group_items(action_id) + + menu = QtWidgets.QMenu(self) + actions_mapping = {} + + for action_item in action_items: + menu_action = QtWidgets.QAction(action_item.full_label) + menu.addAction(menu_action) + actions_mapping[menu_action] = action_item + + result = menu.exec_(QtGui.QCursor.pos()) + if not result: + return + + action_item = actions_mapping[result] + + self._controller.trigger_action( + project_name, folder_id, task_id, action_item.identifier + ) + self._start_animation(index) diff --git a/openpype/tools/ayon_launcher/ui/hierarchy_page.py b/openpype/tools/ayon_launcher/ui/hierarchy_page.py new file mode 100644 index 0000000000..d56d43fdec --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/hierarchy_page.py @@ -0,0 +1,106 @@ +import qtawesome +from qtpy import QtWidgets, QtCore + +from openpype.tools.utils import ( + PlaceholderLineEdit, + SquareButton, + RefreshButton, +) +from openpype.tools.ayon_utils.widgets import ( + ProjectsCombobox, + FoldersWidget, + TasksWidget, +) + + +class HierarchyPage(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(HierarchyPage, self).__init__(parent) + + # Header + header_widget = QtWidgets.QWidget(self) + + btn_back_icon = qtawesome.icon("fa.angle-left", color="white") + btn_back = SquareButton(header_widget) + btn_back.setIcon(btn_back_icon) + + projects_combobox = ProjectsCombobox(controller, header_widget) + + refresh_btn = RefreshButton(header_widget) + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(btn_back, 0) + header_layout.addWidget(projects_combobox, 1) + header_layout.addWidget(refresh_btn, 0) + + # Body - Folders + Tasks selection + content_body = QtWidgets.QSplitter(self) + content_body.setContentsMargins(0, 0, 0, 0) + content_body.setSizePolicy( + QtWidgets.QSizePolicy.Expanding, + QtWidgets.QSizePolicy.Expanding + ) + content_body.setOrientation(QtCore.Qt.Horizontal) + + # - Folders widget with filter + folders_wrapper = QtWidgets.QWidget(content_body) + + folders_filter_text = PlaceholderLineEdit(folders_wrapper) + folders_filter_text.setPlaceholderText("Filter folders...") + + folders_widget = FoldersWidget(controller, folders_wrapper) + + folders_wrapper_layout = QtWidgets.QVBoxLayout(folders_wrapper) + folders_wrapper_layout.setContentsMargins(0, 0, 0, 0) + folders_wrapper_layout.addWidget(folders_filter_text, 0) + folders_wrapper_layout.addWidget(folders_widget, 1) + + # - Tasks widget + tasks_widget = TasksWidget(controller, 
content_body)
+
+        content_body.addWidget(folders_wrapper)
+        content_body.addWidget(tasks_widget)
+        content_body.setStretchFactor(0, 100)
+        content_body.setStretchFactor(1, 65)
+
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0)
+        main_layout.addWidget(header_widget, 0)
+        main_layout.addWidget(content_body, 1)
+
+        btn_back.clicked.connect(self._on_back_clicked)
+        refresh_btn.clicked.connect(self._on_refresh_clicked)
+        folders_filter_text.textChanged.connect(self._on_filter_text_changed)
+
+        self._is_visible = False
+        self._controller = controller
+
+        self._btn_back = btn_back
+        self._projects_combobox = projects_combobox
+        self._folders_widget = folders_widget
+        self._tasks_widget = tasks_widget
+
+        # Post init
+        projects_combobox.set_listen_to_selection_change(self._is_visible)
+
+    def set_page_visible(self, visible, project_name=None):
+        if self._is_visible == visible:
+            return
+        self._is_visible = visible
+        self._projects_combobox.set_listen_to_selection_change(visible)
+        if visible and project_name:
+            self._projects_combobox.set_selection(project_name)
+
+    def refresh(self):
+        self._folders_widget.refresh()
+        self._tasks_widget.refresh()
+
+    def _on_back_clicked(self):
+        self._controller.set_selected_project(None)
+
+    def _on_refresh_clicked(self):
+        self._controller.refresh()
+
+    def _on_filter_text_changed(self, text):
+        self._folders_widget.set_name_filter(text)
diff --git a/openpype/tools/ayon_launcher/ui/projects_widget.py b/openpype/tools/ayon_launcher/ui/projects_widget.py
new file mode 100644
index 0000000000..7dbaec5147
--- /dev/null
+++ b/openpype/tools/ayon_launcher/ui/projects_widget.py
@@ -0,0 +1,148 @@
+from qtpy import QtWidgets, QtCore
+
+from openpype.tools.flickcharm import FlickCharm
+from openpype.tools.utils import PlaceholderLineEdit, RefreshButton
+from openpype.tools.ayon_utils.widgets import (
+    ProjectsModel,
+    ProjectSortFilterProxy,
+)
+from openpype.tools.ayon_utils.models import PROJECTS_MODEL_SENDER
+
+
+class ProjectIconView(QtWidgets.QListView):
+    """Styled ListView that allows toggling between icon and list mode.
+
+    Toggling between the two modes is done with a right mouse click.
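+
+    Example:
+        Sketch of constructing the view in icon mode:
+
+            view = ProjectIconView(parent, mode=ProjectIconView.IconMode)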
+    """
+
+    IconMode = 0
+    ListMode = 1
+
+    def __init__(self, parent=None, mode=ListMode):
+        super(ProjectIconView, self).__init__(parent=parent)
+
+        # Workaround for scrolling being super slow or fast when
+        # toggling between the two visual modes
+        self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
+        self.setObjectName("IconView")
+
+        self._mode = None
+        self.set_mode(mode)
+
+    def set_mode(self, mode):
+        if mode == self._mode:
+            return
+
+        self._mode = mode
+
+        if mode == self.IconMode:
+            self.setViewMode(QtWidgets.QListView.IconMode)
+            self.setResizeMode(QtWidgets.QListView.Adjust)
+            self.setWrapping(True)
+            self.setWordWrap(True)
+            self.setGridSize(QtCore.QSize(151, 90))
+            self.setIconSize(QtCore.QSize(50, 50))
+            self.setSpacing(0)
+            self.setAlternatingRowColors(False)
+
+            self.setProperty("mode", "icon")
+            self.style().polish(self)
+
+            self.verticalScrollBar().setSingleStep(30)
+
+        elif mode == self.ListMode:
+            self.setProperty("mode", "list")
+            self.style().polish(self)
+
+            self.setViewMode(QtWidgets.QListView.ListMode)
+            self.setResizeMode(QtWidgets.QListView.Adjust)
+            self.setWrapping(False)
+            self.setWordWrap(False)
+            self.setIconSize(QtCore.QSize(20, 20))
+            self.setGridSize(QtCore.QSize(100, 25))
+            self.setSpacing(0)
+            self.setAlternatingRowColors(False)
+
+            self.verticalScrollBar().setSingleStep(34)
+
+    def mousePressEvent(self, event):
+        if event.button() == QtCore.Qt.RightButton:
+            self.set_mode(int(not self._mode))
+        return super(ProjectIconView, self).mousePressEvent(event)
+
+
+class ProjectsWidget(QtWidgets.QWidget):
+    """Projects Page"""
+
+    refreshed = QtCore.Signal()
+
+    def __init__(self, controller, parent=None):
+        super(ProjectsWidget, self).__init__(parent=parent)
+
+        header_widget = QtWidgets.QWidget(self)
+
+        projects_filter_text = PlaceholderLineEdit(header_widget)
+        projects_filter_text.setPlaceholderText("Filter projects...")
+
+        refresh_btn = RefreshButton(header_widget)
+
+        header_layout = QtWidgets.QHBoxLayout(header_widget)
+        header_layout.setContentsMargins(0, 0, 0, 0)
+        header_layout.addWidget(projects_filter_text, 1)
+        header_layout.addWidget(refresh_btn, 0)
+
+        projects_view = ProjectIconView(parent=self)
+        projects_view.setSelectionMode(QtWidgets.QListView.NoSelection)
+        flick = FlickCharm(parent=self)
+        flick.activateOn(projects_view)
+        projects_model = ProjectsModel(controller)
+        projects_proxy_model = ProjectSortFilterProxy()
+        projects_proxy_model.setSourceModel(projects_model)
+
+        projects_view.setModel(projects_proxy_model)
+
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0)
+        main_layout.addWidget(header_widget, 0)
+        main_layout.addWidget(projects_view, 1)
+
+        projects_view.clicked.connect(self._on_view_clicked)
+        projects_model.refreshed.connect(self.refreshed)
+        projects_filter_text.textChanged.connect(
+            self._on_project_filter_change)
+        refresh_btn.clicked.connect(self._on_refresh_clicked)
+
+        controller.register_event_callback(
+            "projects.refresh.finished",
+            self._on_projects_refresh_finished
+        )
+
+        self._controller = controller
+
+        self._projects_view = projects_view
+        self._projects_model = projects_model
+        self._projects_proxy_model = projects_proxy_model
+
+    def has_content(self):
+        """Model has at least one project.
+
+        Returns:
+            bool: True if there is any content in the model.
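+
+        Used by the launcher window to decide whether the hierarchy
+        page may be shown after a refresh.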
+ """ + + return self._projects_model.has_content() + + def _on_view_clicked(self, index): + if index.isValid(): + project_name = index.data(QtCore.Qt.DisplayRole) + self._controller.set_selected_project(project_name) + + def _on_project_filter_change(self, text): + self._projects_proxy_model.setFilterFixedString(text) + + def _on_refresh_clicked(self): + self._controller.refresh() + + def _on_projects_refresh_finished(self, event): + if event["sender"] != PROJECTS_MODEL_SENDER: + self._projects_model.refresh() diff --git a/openpype/tools/ayon_launcher/ui/resources/__init__.py b/openpype/tools/ayon_launcher/ui/resources/__init__.py new file mode 100644 index 0000000000..27c59af2ba --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/resources/__init__.py @@ -0,0 +1,7 @@ +import os + +RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) + + +def get_options_image_path(): + return os.path.join(RESOURCES_DIR, "options.png") diff --git a/openpype/tools/ayon_launcher/ui/resources/options.png b/openpype/tools/ayon_launcher/ui/resources/options.png new file mode 100644 index 0000000000..a9617d0d19 Binary files /dev/null and b/openpype/tools/ayon_launcher/ui/resources/options.png differ diff --git a/openpype/tools/ayon_launcher/ui/window.py b/openpype/tools/ayon_launcher/ui/window.py new file mode 100644 index 0000000000..ffc74a2fdc --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/window.py @@ -0,0 +1,312 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype import style +from openpype import resources + +from openpype.tools.ayon_launcher.control import BaseLauncherController + +from .projects_widget import ProjectsWidget +from .hierarchy_page import HierarchyPage +from .actions_widget import ActionsWidget + + +class LauncherWindow(QtWidgets.QWidget): + """Launcher interface""" + message_interval = 5000 + refresh_interval = 10000 + page_side_anim_interval = 250 + + def __init__(self, controller=None, parent=None): + super(LauncherWindow, self).__init__(parent) + + if controller is None: + controller = BaseLauncherController() + + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + self.setWindowIcon(icon) + self.setWindowTitle("Launcher") + self.setFocusPolicy(QtCore.Qt.StrongFocus) + self.setAttribute(QtCore.Qt.WA_DeleteOnClose, False) + + self.setStyleSheet(style.load_stylesheet()) + + # Allow minimize + self.setWindowFlags( + QtCore.Qt.Window + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.WindowTitleHint + | QtCore.Qt.WindowMinimizeButtonHint + | QtCore.Qt.WindowCloseButtonHint + ) + + self._controller = controller + + # Main content - Pages & Actions + content_body = QtWidgets.QSplitter(self) + + # Pages + pages_widget = QtWidgets.QWidget(content_body) + + # - First page - Projects + projects_page = ProjectsWidget(controller, pages_widget) + + # - Second page - Hierarchy (folders & tasks) + hierarchy_page = HierarchyPage(controller, pages_widget) + + pages_layout = QtWidgets.QHBoxLayout(pages_widget) + pages_layout.setContentsMargins(0, 0, 0, 0) + pages_layout.addWidget(projects_page, 1) + pages_layout.addWidget(hierarchy_page, 1) + + # Actions + actions_widget = ActionsWidget(controller, content_body) + + # Vertically split Pages and Actions + content_body.setContentsMargins(0, 0, 0, 0) + content_body.setSizePolicy( + QtWidgets.QSizePolicy.Expanding, + QtWidgets.QSizePolicy.Expanding + ) + content_body.setOrientation(QtCore.Qt.Vertical) + content_body.addWidget(pages_widget) + content_body.addWidget(actions_widget) + + # Set useful default sizes and set stretch + # 
for the pages so that is the only one that + # stretches on UI resize. + content_body.setStretchFactor(0, 10) + content_body.setSizes([580, 160]) + + # Footer + footer_widget = QtWidgets.QWidget(self) + + # - Message label + message_label = QtWidgets.QLabel(footer_widget) + + # action_history = ActionHistory(footer_widget) + # action_history.setStatusTip("Show Action History") + + footer_layout = QtWidgets.QHBoxLayout(footer_widget) + footer_layout.setContentsMargins(0, 0, 0, 0) + footer_layout.addWidget(message_label, 1) + # footer_layout.addWidget(action_history, 0) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(content_body, 1) + layout.addWidget(footer_widget, 0) + + message_timer = QtCore.QTimer() + message_timer.setInterval(self.message_interval) + message_timer.setSingleShot(True) + + actions_refresh_timer = QtCore.QTimer() + actions_refresh_timer.setInterval(self.refresh_interval) + + page_slide_anim = QtCore.QVariantAnimation(self) + page_slide_anim.setDuration(self.page_side_anim_interval) + page_slide_anim.setStartValue(0.0) + page_slide_anim.setEndValue(1.0) + page_slide_anim.setEasingCurve(QtCore.QEasingCurve.OutQuad) + + projects_page.refreshed.connect(self._on_projects_refresh) + message_timer.timeout.connect(self._on_message_timeout) + actions_refresh_timer.timeout.connect( + self._on_actions_refresh_timeout) + page_slide_anim.valueChanged.connect( + self._on_page_slide_value_changed) + page_slide_anim.finished.connect(self._on_page_slide_finished) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_change, + ) + controller.register_event_callback( + "action.trigger.started", + self._on_action_trigger_started, + ) + controller.register_event_callback( + "action.trigger.finished", + self._on_action_trigger_finished, + ) + + self._controller = controller + + self._is_on_projects_page = True + self._window_is_active = False + self._refresh_on_activate = False + self._selected_project_name = None + + self._pages_widget = pages_widget + self._pages_layout = pages_layout + self._projects_page = projects_page + self._hierarchy_page = hierarchy_page + self._actions_widget = actions_widget + + self._message_label = message_label + # self._action_history = action_history + + self._message_timer = message_timer + self._actions_refresh_timer = actions_refresh_timer + self._page_slide_anim = page_slide_anim + + hierarchy_page.setVisible(not self._is_on_projects_page) + self.resize(520, 740) + + def showEvent(self, event): + super(LauncherWindow, self).showEvent(event) + self._window_is_active = True + if not self._actions_refresh_timer.isActive(): + self._actions_refresh_timer.start() + self._controller.refresh() + + def closeEvent(self, event): + super(LauncherWindow, self).closeEvent(event) + self._window_is_active = False + self._actions_refresh_timer.stop() + + def changeEvent(self, event): + if event.type() in ( + QtCore.QEvent.Type.WindowStateChange, + QtCore.QEvent.ActivationChange, + ): + is_active = self.isActiveWindow() and not self.isMinimized() + self._window_is_active = is_active + if is_active and self._refresh_on_activate: + self._refresh_on_activate = False + self._on_actions_refresh_timeout() + self._actions_refresh_timer.start() + + super(LauncherWindow, self).changeEvent(event) + + def _on_actions_refresh_timeout(self): + # Stop timer if widget is not visible + if self._window_is_active: + self._controller.refresh_actions() + else: + self._refresh_on_activate = True + + def _echo(self, message): + 
self._message_label.setText(str(message)) + self._message_timer.start() + + def _on_message_timeout(self): + self._message_label.setText("") + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self._selected_project_name = project_name + if not project_name: + self._go_to_projects_page() + + elif self._is_on_projects_page: + self._go_to_hierarchy_page(project_name) + + def _on_projects_refresh(self): + # There is nothing to do, we're on projects page + if self._is_on_projects_page: + return + + # No projects were found -> go back to projects page + if not self._projects_page.has_content(): + self._go_to_projects_page() + return + + self._hierarchy_page.refresh() + self._actions_widget.refresh() + + def _on_action_trigger_started(self, event): + self._echo("Running action: {}".format(event["full_label"])) + + def _on_action_trigger_finished(self, event): + if not event["failed"]: + return + self._echo("Failed: {}".format(event["error_message"])) + + def _is_page_slide_anim_running(self): + return ( + self._page_slide_anim.state() == QtCore.QAbstractAnimation.Running + ) + + def _go_to_projects_page(self): + if self._is_on_projects_page: + return + self._is_on_projects_page = True + self._hierarchy_page.set_page_visible(False) + + self._start_page_slide_animation() + + def _go_to_hierarchy_page(self, project_name): + if not self._is_on_projects_page: + return + self._is_on_projects_page = False + self._hierarchy_page.set_page_visible(True, project_name) + + self._start_page_slide_animation() + + def _start_page_slide_animation(self): + if self._is_on_projects_page: + direction = QtCore.QAbstractAnimation.Backward + else: + direction = QtCore.QAbstractAnimation.Forward + self._page_slide_anim.setDirection(direction) + if self._is_page_slide_anim_running(): + return + + layout_spacing = self._pages_layout.spacing() + if self._is_on_projects_page: + hierarchy_geo = self._hierarchy_page.geometry() + projects_geo = QtCore.QRect(hierarchy_geo) + projects_geo.moveRight( + hierarchy_geo.left() - (layout_spacing + 1)) + + self._projects_page.setVisible(True) + + else: + projects_geo = self._projects_page.geometry() + hierarchy_geo = QtCore.QRect(projects_geo) + hierarchy_geo.moveLeft(projects_geo.right() + layout_spacing) + self._hierarchy_page.setVisible(True) + + while self._pages_layout.count(): + self._pages_layout.takeAt(0) + + self._projects_page.setGeometry(projects_geo) + self._hierarchy_page.setGeometry(hierarchy_geo) + + self._page_slide_anim.start() + + def _on_page_slide_value_changed(self, value): + layout_spacing = self._pages_layout.spacing() + content_width = self._pages_widget.width() - layout_spacing + content_height = self._pages_widget.height() + + # Visible widths of other widgets + hierarchy_width = int(content_width * value) + + hierarchy_geo = QtCore.QRect( + content_width - hierarchy_width, 0, content_width, content_height + ) + projects_geo = QtCore.QRect(hierarchy_geo) + projects_geo.moveRight(hierarchy_geo.left() - (layout_spacing + 1)) + + self._projects_page.setGeometry(projects_geo) + self._hierarchy_page.setGeometry(hierarchy_geo) + + def _on_page_slide_finished(self): + self._pages_layout.addWidget(self._projects_page, 1) + self._pages_layout.addWidget(self._hierarchy_page, 1) + self._projects_page.setVisible(self._is_on_projects_page) + self._hierarchy_page.setVisible(not self._is_on_projects_page) + + # def _on_history_action(self, history_data): + # action, session = history_data + # app = QtWidgets.QApplication.instance() + 
# modifiers = app.keyboardModifiers() + # + # is_control_down = QtCore.Qt.ControlModifier & modifiers + # if is_control_down: + # # Revert to that "session" location + # self.set_session(session) + # else: + # # User is holding control, rerun the action + # self.run_action(action, session=session) diff --git a/openpype/tools/ayon_loader/__init__.py b/openpype/tools/ayon_loader/__init__.py new file mode 100644 index 0000000000..09ecf65f3a --- /dev/null +++ b/openpype/tools/ayon_loader/__init__.py @@ -0,0 +1,6 @@ +from .control import LoaderController + + +__all__ = ( + "LoaderController", +) diff --git a/openpype/tools/ayon_loader/abstract.py b/openpype/tools/ayon_loader/abstract.py new file mode 100644 index 0000000000..45042395d9 --- /dev/null +++ b/openpype/tools/ayon_loader/abstract.py @@ -0,0 +1,851 @@ +from abc import ABCMeta, abstractmethod +import six + +from openpype.lib.attribute_definitions import ( + AbstractAttrDef, + serialize_attr_defs, + deserialize_attr_defs, +) + + +class ProductTypeItem: + """Item representing product type. + + Args: + name (str): Product type name. + icon (dict[str, Any]): Product type icon definition. + checked (bool): Is product type checked for filtering. + """ + + def __init__(self, name, icon, checked): + self.name = name + self.icon = icon + self.checked = checked + + def to_data(self): + return { + "name": self.name, + "icon": self.icon, + "checked": self.checked, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class ProductItem: + """Product item with it versions. + + Args: + product_id (str): Product id. + product_type (str): Product type. + product_name (str): Product name. + product_icon (dict[str, Any]): Product icon definition. + product_type_icon (dict[str, Any]): Product type icon definition. + product_in_scene (bool): Is product in scene (only when used in DCC). + group_name (str): Group name. + folder_id (str): Folder id. + folder_label (str): Folder label. + version_items (dict[str, VersionItem]): Version items by id. + """ + + def __init__( + self, + product_id, + product_type, + product_name, + product_icon, + product_type_icon, + product_in_scene, + group_name, + folder_id, + folder_label, + version_items, + ): + self.product_id = product_id + self.product_type = product_type + self.product_name = product_name + self.product_icon = product_icon + self.product_type_icon = product_type_icon + self.product_in_scene = product_in_scene + self.group_name = group_name + self.folder_id = folder_id + self.folder_label = folder_label + self.version_items = version_items + + def to_data(self): + return { + "product_id": self.product_id, + "product_type": self.product_type, + "product_name": self.product_name, + "product_icon": self.product_icon, + "product_type_icon": self.product_type_icon, + "product_in_scene": self.product_in_scene, + "group_name": self.group_name, + "folder_id": self.folder_id, + "folder_label": self.folder_label, + "version_items": { + version_id: version_item.to_data() + for version_id, version_item in self.version_items.items() + }, + } + + @classmethod + def from_data(cls, data): + version_items = { + version_id: VersionItem.from_data(version) + for version_id, version in data["version_items"].items() + } + data["version_items"] = version_items + return cls(**data) + + +class VersionItem: + """Version item. + + Object have implemented comparison operators to be sortable. + + Args: + version_id (str): Version id. + version (int): Version. Can be negative when is hero version. 
+ is_hero (bool): Is hero version. + product_id (str): Product id. + thumbnail_id (Union[str, None]): Thumbnail id. + published_time (Union[str, None]): Published time in format + '%Y%m%dT%H%M%SZ'. + author (Union[str, None]): Author. + frame_range (Union[str, None]): Frame range. + duration (Union[int, None]): Duration. + handles (Union[str, None]): Handles. + step (Union[int, None]): Step. + comment (Union[str, None]): Comment. + source (Union[str, None]): Source. + """ + + def __init__( + self, + version_id, + version, + is_hero, + product_id, + thumbnail_id, + published_time, + author, + frame_range, + duration, + handles, + step, + comment, + source + ): + self.version_id = version_id + self.product_id = product_id + self.thumbnail_id = thumbnail_id + self.version = version + self.is_hero = is_hero + self.published_time = published_time + self.author = author + self.frame_range = frame_range + self.duration = duration + self.handles = handles + self.step = step + self.comment = comment + self.source = source + + def __eq__(self, other): + if not isinstance(other, VersionItem): + return False + return ( + self.is_hero == other.is_hero + and self.version == other.version + and self.version_id == other.version_id + and self.product_id == other.product_id + ) + + def __ne__(self, other): + return not self.__eq__(other) + + def __gt__(self, other): + if not isinstance(other, VersionItem): + return False + if ( + other.version == self.version + and self.is_hero + ): + return True + return other.version < self.version + + def to_data(self): + return { + "version_id": self.version_id, + "product_id": self.product_id, + "thumbnail_id": self.thumbnail_id, + "version": self.version, + "is_hero": self.is_hero, + "published_time": self.published_time, + "author": self.author, + "frame_range": self.frame_range, + "duration": self.duration, + "handles": self.handles, + "step": self.step, + "comment": self.comment, + "source": self.source, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class RepreItem: + """Representation item. + + Args: + representation_id (str): Representation id. + representation_name (str): Representation name. + representation_icon (dict[str, Any]): Representation icon definition. + product_name (str): Product name. + folder_label (str): Folder label. + """ + + def __init__( + self, + representation_id, + representation_name, + representation_icon, + product_name, + folder_label, + ): + self.representation_id = representation_id + self.representation_name = representation_name + self.representation_icon = representation_icon + self.product_name = product_name + self.folder_label = folder_label + + def to_data(self): + return { + "representation_id": self.representation_id, + "representation_name": self.representation_name, + "representation_icon": self.representation_icon, + "product_name": self.product_name, + "folder_label": self.folder_label, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class ActionItem: + """Action item that can be triggered. + + Action item is defined for a specific context. To trigger the action + use 'identifier' and context, it necessary also use 'options'. + + Args: + identifier (str): Action identifier. + label (str): Action label. + icon (dict[str, Any]): Action icon definition. + tooltip (str): Action tooltip. + options (Union[list[AbstractAttrDef], list[qargparse.QArgument]]): + Action options. Note: 'qargparse' is considered as deprecated. + order (int): Action order. 
+        project_name (str): Project name.
+        folder_ids (list[str]): Folder ids.
+        product_ids (list[str]): Product ids.
+        version_ids (list[str]): Version ids.
+        representation_ids (list[str]): Representation ids.
+    """
+
+    def __init__(
+        self,
+        identifier,
+        label,
+        icon,
+        tooltip,
+        options,
+        order,
+        project_name,
+        folder_ids,
+        product_ids,
+        version_ids,
+        representation_ids,
+    ):
+        self.identifier = identifier
+        self.label = label
+        self.icon = icon
+        self.tooltip = tooltip
+        self.options = options
+        self.order = order
+        self.project_name = project_name
+        self.folder_ids = folder_ids
+        self.product_ids = product_ids
+        self.version_ids = version_ids
+        self.representation_ids = representation_ids
+
+    def _options_to_data(self):
+        options = self.options
+        if not options:
+            return options
+        if isinstance(options[0], AbstractAttrDef):
+            return serialize_attr_defs(options)
+        # NOTE: Data conversion is not used by default in loader tool. But for
+        # future development of detached UI tools it would be better to be
+        # prepared for it.
+        raise NotImplementedError(
+            "{}.to_data is not implemented. Use Attribute definitions"
+            " from 'openpype.lib' instead of 'qargparse'.".format(
+                self.__class__.__name__
+            )
+        )
+
+    def to_data(self):
+        options = self._options_to_data()
+        return {
+            "identifier": self.identifier,
+            "label": self.label,
+            "icon": self.icon,
+            "tooltip": self.tooltip,
+            "options": options,
+            "order": self.order,
+            "project_name": self.project_name,
+            "folder_ids": self.folder_ids,
+            "product_ids": self.product_ids,
+            "version_ids": self.version_ids,
+            "representation_ids": self.representation_ids,
+        }
+
+    @classmethod
+    def from_data(cls, data):
+        options = data["options"]
+        if options:
+            options = deserialize_attr_defs(options)
+        data["options"] = options
+        return cls(**data)
+
+
+@six.add_metaclass(ABCMeta)
+class _BaseLoaderController(object):
+    """Base loader controller abstraction.
+
+    Abstract base class that is required for both frontend and backend.
+    """
+
+    @abstractmethod
+    def get_current_context(self):
+        """Current context is a context of the current scene.
+
+        Example output:
+            {
+                "project_name": "MyProject",
+                "folder_id": "0011223344-5566778-99",
+                "task_name": "Compositing",
+            }
+
+        Returns:
+            dict[str, Union[str, None]]: Context data.
+        """
+
+        pass
+
+    @abstractmethod
+    def reset(self):
+        """Reset all cached data to reload everything.
+
+        Triggers events "controller.reset.started" and
+        "controller.reset.finished".
+        """
+
+        pass
+
+    # Model wrappers
+    @abstractmethod
+    def get_folder_items(self, project_name, sender=None):
+        """Folder items for a project.
+
+        Args:
+            project_name (str): Project name.
+            sender (Optional[str]): Sender who requested the items.
+
+        Returns:
+            list[FolderItem]: Folder items for the project.
+        """
+
+        pass
+
+    # Expected selection helpers
+    @abstractmethod
+    def get_expected_selection_data(self):
+        """Full expected selection information.
+
+        Expected selection is a selection that may not be selected in the
+        UI yet, e.g. because of refreshing. This data tells the UI what
+        should be selected once the refresh finishes.
+
+        Returns:
+            dict[str, Any]: Expected selection data.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_expected_selection(self, project_name, folder_id):
+        """Set expected selection.
+
+        Args:
+            project_name (str): Name of project to be selected.
+            folder_id (str): Id of folder to be selected.
+        """
+
+        pass
+
+
+class BackendLoaderController(_BaseLoaderController):
+    """Backend loader controller abstraction.
+ + What backend logic requires from a controller for proper logic. + """ + + @abstractmethod + def emit_event(self, topic, data=None, source=None): + """Emit event with a certain topic, data and source. + + The event should be sent to both frontend and backend. + + Args: + topic (str): Event topic name. + data (Optional[dict[str, Any]]): Event data. + source (Optional[str]): Event source. + """ + + pass + + @abstractmethod + def get_loaded_product_ids(self): + """Return set of loaded product ids. + + Returns: + set[str]: Set of loaded product ids. + """ + + pass + + +class FrontendLoaderController(_BaseLoaderController): + @abstractmethod + def register_event_callback(self, topic, callback): + """Register callback for an event topic. + + Args: + topic (str): Event topic name. + callback (func): Callback triggered when the event is emitted. + """ + + pass + + # Expected selection helpers + @abstractmethod + def expected_project_selected(self, project_name): + """Expected project was selected in frontend. + + Args: + project_name (str): Project name. + """ + + pass + + @abstractmethod + def expected_folder_selected(self, folder_id): + """Expected folder was selected in frontend. + + Args: + folder_id (str): Folder id. + """ + + pass + + # Model wrapper calls + @abstractmethod + def get_project_items(self, sender=None): + """Items for all projects available on server. + + Triggers event topics "projects.refresh.started" and + "projects.refresh.finished" with data: + { + "sender": sender + } + + Notes: + Filtering of projects is done in UI. + + Args: + sender (Optional[str]): Sender who requested the items. + + Returns: + list[ProjectItem]: List of project items. + """ + + pass + + @abstractmethod + def get_product_items(self, project_name, folder_ids, sender=None): + """Product items for folder ids. + + Triggers event topics "products.refresh.started" and + "products.refresh.finished" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender + } + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + sender (Optional[str]): Sender who requested the items. + + Returns: + list[ProductItem]: List of product items. + """ + + pass + + @abstractmethod + def get_product_item(self, project_name, product_id): + """Receive single product item. + + Args: + project_name (str): Project name. + product_id (str): Product id. + + Returns: + Union[ProductItem, None]: Product info or None if not found. + """ + + pass + + @abstractmethod + def get_product_type_items(self, project_name): + """Product type items for a project. + + Product types have defined if are checked for filtering or not. + + Returns: + list[ProductTypeItem]: List of product type items for a project. + """ + + pass + + @abstractmethod + def get_representation_items( + self, project_name, version_ids, sender=None + ): + """Representation items for version ids. + + Triggers event topics "model.representations.refresh.started" and + "model.representations.refresh.finished" with data: + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender + } + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + sender (Optional[str]): Sender who requested the items. + + Returns: + list[RepreItem]: List of representation items. + """ + + pass + + @abstractmethod + def get_version_thumbnail_ids(self, project_name, version_ids): + """Get thumbnail ids for version ids. + + Args: + project_name (str): Project name. 
+ version_ids (Iterable[str]): Version ids. + + Returns: + dict[str, Union[str, Any]]: Thumbnail id by version id. + """ + + pass + + @abstractmethod + def get_folder_thumbnail_ids(self, project_name, folder_ids): + """Get thumbnail ids for folder ids. + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Union[str, Any]]: Thumbnail id by folder id. + """ + + pass + + @abstractmethod + def get_thumbnail_path(self, project_name, thumbnail_id): + """Get thumbnail path for thumbnail id. + + This method should get a path to a thumbnail based on thumbnail id. + Which probably means to download the thumbnail from server and store + it locally. + + Args: + project_name (str): Project name. + thumbnail_id (str): Thumbnail id. + + Returns: + Union[str, None]: Thumbnail path or None if not found. + """ + + pass + + # Selection model wrapper calls + @abstractmethod + def get_selected_project_name(self): + """Get selected project name. + + The information is based on last selection from UI. + + Returns: + Union[str, None]: Selected project name. + """ + + pass + + @abstractmethod + def get_selected_folder_ids(self): + """Get selected folder ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected folder ids. + """ + + pass + + @abstractmethod + def get_selected_version_ids(self): + """Get selected version ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected version ids. + """ + + pass + + @abstractmethod + def get_selected_representation_ids(self): + """Get selected representation ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected representation ids. + """ + + pass + + @abstractmethod + def set_selected_project(self, project_name): + """Set selected project. + + Project selection changed in UI. Method triggers event with topic + "selection.project.changed" with data: + { + "project_name": self._project_name + } + + Args: + project_name (Union[str, None]): Selected project name. + """ + + pass + + @abstractmethod + def set_selected_folders(self, folder_ids): + """Set selected folders. + + Folder selection changed in UI. Method triggers event with topic + "selection.folders.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids + } + + Args: + folder_ids (Iterable[str]): Selected folder ids. + """ + + pass + + @abstractmethod + def set_selected_versions(self, version_ids): + """Set selected versions. + + Version selection changed in UI. Method triggers event with topic + "selection.versions.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "version_ids": version_ids + } + + Args: + version_ids (Iterable[str]): Selected version ids. + """ + + pass + + @abstractmethod + def set_selected_representations(self, repre_ids): + """Set selected representations. + + Representation selection changed in UI. Method triggers event with + topic "selection.representations.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "version_ids": version_ids, + "representation_ids": representation_ids + } + + Args: + repre_ids (Iterable[str]): Selected representation ids. + """ + + pass + + # Load action items + @abstractmethod + def get_versions_action_items(self, project_name, version_ids): + """Action items for versions selection. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. 
+ + Returns: + list[ActionItem]: List of action items. + """ + + pass + + @abstractmethod + def get_representations_action_items( + self, project_name, representation_ids + ): + """Action items for representations selection. + + Args: + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + + Returns: + list[ActionItem]: List of action items. + """ + + pass + + @abstractmethod + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + """Trigger action item. + + Triggers event "load.started" with data: + { + "identifier": identifier, + "id": , + } + + And triggers "load.finished" with data: + { + "identifier": identifier, + "id": , + "error_info": [...], + } + + Args: + identifier (str): Action identifier. + options (dict[str, Any]): Action option values from UI. + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + representation_ids (Iterable[str]): Representation ids. + """ + + pass + + @abstractmethod + def change_products_group(self, project_name, product_ids, group_name): + """Change group of products. + + Triggers event "products.group.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name, + } + + Args: + project_name (str): Project name. + product_ids (Iterable[str]): Product ids. + group_name (str): New group name. + """ + + pass + + @abstractmethod + def fill_root_in_source(self, source): + """Fill root in source path. + + Args: + source (Union[str, None]): Source of a published version. Usually + rootless workfile path. + """ + + pass + + # NOTE: Methods 'is_loaded_products_supported' and + # 'is_standard_projects_filter_enabled' are both based on being in host + # or not. Maybe we could implement only single method 'is_in_host'? + @abstractmethod + def is_loaded_products_supported(self): + """Is capable to get information about loaded products. + + Returns: + bool: True if it is supported. + """ + + pass + + @abstractmethod + def is_standard_projects_filter_enabled(self): + """Is standard projects filter enabled. + + This is used for filtering out when loader tool is used in a host. In + that case only current project and library projects should be shown. + + Returns: + bool: Frontend should filter out non-library projects, except + current context project. 
+ """ + + pass diff --git a/openpype/tools/ayon_loader/control.py b/openpype/tools/ayon_loader/control.py new file mode 100644 index 0000000000..2b779f5c2e --- /dev/null +++ b/openpype/tools/ayon_loader/control.py @@ -0,0 +1,343 @@ +import logging + +import ayon_api + +from openpype.lib.events import QueuedEventSystem +from openpype.pipeline import Anatomy, get_current_context +from openpype.host import ILoadHost +from openpype.tools.ayon_utils.models import ( + ProjectsModel, + HierarchyModel, + NestedCacheItem, + CacheItem, + ThumbnailsModel, +) + +from .abstract import BackendLoaderController, FrontendLoaderController +from .models import SelectionModel, ProductsModel, LoaderActionsModel + + +class ExpectedSelection: + def __init__(self, controller): + self._project_name = None + self._folder_id = None + + self._project_selected = True + self._folder_selected = True + + self._controller = controller + + def _emit_change(self): + self._controller.emit_event( + "expected_selection_changed", + self.get_expected_selection_data(), + ) + + def set_expected_selection(self, project_name, folder_id): + self._project_name = project_name + self._folder_id = folder_id + + self._project_selected = False + self._folder_selected = False + self._emit_change() + + def get_expected_selection_data(self): + project_current = False + folder_current = False + if not self._project_selected: + project_current = True + elif not self._folder_selected: + folder_current = True + return { + "project": { + "name": self._project_name, + "current": project_current, + "selected": self._project_selected, + }, + "folder": { + "id": self._folder_id, + "current": folder_current, + "selected": self._folder_selected, + }, + } + + def is_expected_project_selected(self, project_name): + return project_name == self._project_name and self._project_selected + + def is_expected_folder_selected(self, folder_id): + return folder_id == self._folder_id and self._folder_selected + + def expected_project_selected(self, project_name): + if project_name != self._project_name: + return False + self._project_selected = True + self._emit_change() + return True + + def expected_folder_selected(self, folder_id): + if folder_id != self._folder_id: + return False + self._folder_selected = True + self._emit_change() + return True + + +class LoaderController(BackendLoaderController, FrontendLoaderController): + """ + + Args: + host (Optional[AbstractHost]): Host object. Defaults to None. 
+ """ + + def __init__(self, host=None): + self._log = None + self._host = host + + self._event_system = self._create_event_system() + + self._project_anatomy_cache = NestedCacheItem( + levels=1, lifetime=60) + self._loaded_products_cache = CacheItem( + default_factory=set, lifetime=60) + + self._selection_model = SelectionModel(self) + self._expected_selection = ExpectedSelection(self) + self._projects_model = ProjectsModel(self) + self._hierarchy_model = HierarchyModel(self) + self._products_model = ProductsModel(self) + self._loader_actions_model = LoaderActionsModel(self) + self._thumbnails_model = ThumbnailsModel() + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + # --------------------------------- + # Implementation of abstract methods + # --------------------------------- + # Events system + def emit_event(self, topic, data=None, source=None): + """Use implemented event system to trigger event.""" + + if data is None: + data = {} + self._event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self._event_system.add_callback(topic, callback) + + def reset(self): + self._emit_event("controller.reset.started") + + project_name = self.get_selected_project_name() + folder_ids = self.get_selected_folder_ids() + + self._project_anatomy_cache.reset() + self._loaded_products_cache.reset() + + self._products_model.reset() + self._hierarchy_model.reset() + self._loader_actions_model.reset() + self._projects_model.reset() + self._thumbnails_model.reset() + + self._projects_model.refresh() + + if not project_name and not folder_ids: + context = self.get_current_context() + project_name = context["project_name"] + folder_id = context["folder_id"] + self.set_expected_selection(project_name, folder_id) + + self._emit_event("controller.reset.finished") + + # Expected selection helpers + def get_expected_selection_data(self): + return self._expected_selection.get_expected_selection_data() + + def set_expected_selection(self, project_name, folder_id): + self._expected_selection.set_expected_selection( + project_name, folder_id + ) + + def expected_project_selected(self, project_name): + self._expected_selection.expected_project_selected(project_name) + + def expected_folder_selected(self, folder_id): + self._expected_selection.expected_folder_selected(folder_id) + + # Entity model wrappers + def get_project_items(self, sender=None): + return self._projects_model.get_project_items(sender) + + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_product_items(self, project_name, folder_ids, sender=None): + return self._products_model.get_product_items( + project_name, folder_ids, sender) + + def get_product_item(self, project_name, product_id): + return self._products_model.get_product_item( + project_name, product_id + ) + + def get_product_type_items(self, project_name): + return self._products_model.get_product_type_items(project_name) + + def get_representation_items( + self, project_name, version_ids, sender=None + ): + return self._products_model.get_repre_items( + project_name, version_ids, sender + ) + + def get_folder_thumbnail_ids(self, project_name, folder_ids): + return self._thumbnails_model.get_folder_thumbnail_ids( + project_name, folder_ids) + + def get_version_thumbnail_ids(self, project_name, version_ids): + return self._thumbnails_model.get_version_thumbnail_ids( + 
project_name, version_ids) + + def get_thumbnail_path(self, project_name, thumbnail_id): + return self._thumbnails_model.get_thumbnail_path( + project_name, thumbnail_id + ) + + def change_products_group(self, project_name, product_ids, group_name): + self._products_model.change_products_group( + project_name, product_ids, group_name + ) + + def get_versions_action_items(self, project_name, version_ids): + return self._loader_actions_model.get_versions_action_items( + project_name, version_ids) + + def get_representations_action_items( + self, project_name, representation_ids): + return self._loader_actions_model.get_representations_action_items( + project_name, representation_ids) + + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + self._loader_actions_model.trigger_action_item( + identifier, + options, + project_name, + version_ids, + representation_ids + ) + + # Selection model wrappers + def get_selected_project_name(self): + return self._selection_model.get_selected_project_name() + + def set_selected_project(self, project_name): + self._selection_model.set_selected_project(project_name) + + # Selection model wrappers + def get_selected_folder_ids(self): + return self._selection_model.get_selected_folder_ids() + + def set_selected_folders(self, folder_ids): + self._selection_model.set_selected_folders(folder_ids) + + def get_selected_version_ids(self): + return self._selection_model.get_selected_version_ids() + + def set_selected_versions(self, version_ids): + self._selection_model.set_selected_versions(version_ids) + + def get_selected_representation_ids(self): + return self._selection_model.get_selected_representation_ids() + + def set_selected_representations(self, repre_ids): + self._selection_model.set_selected_representations(repre_ids) + + def fill_root_in_source(self, source): + project_name = self.get_selected_project_name() + anatomy = self._get_project_anatomy(project_name) + if anatomy is None: + return source + + try: + return anatomy.fill_root(source) + except Exception: + return source + + def get_current_context(self): + if self._host is None: + return { + "project_name": None, + "folder_id": None, + "task_name": None, + } + if hasattr(self._host, "get_current_context"): + context = self._host.get_current_context() + else: + context = get_current_context() + folder_id = None + project_name = context.get("project_name") + asset_name = context.get("asset_name") + if project_name and asset_name: + folder = ayon_api.get_folder_by_name( + project_name, asset_name, fields=["id"] + ) + if folder: + folder_id = folder["id"] + return { + "project_name": project_name, + "folder_id": folder_id, + "task_name": context.get("task_name"), + } + + def get_loaded_product_ids(self): + if self._host is None: + return set() + + context = self.get_current_context() + project_name = context["project_name"] + if not project_name: + return set() + + if not self._loaded_products_cache.is_valid: + if isinstance(self._host, ILoadHost): + containers = self._host.get_containers() + else: + containers = self._host.ls() + repre_ids = {c.get("representation") for c in containers} + repre_ids.discard(None) + product_ids = self._products_model.get_product_ids_by_repre_ids( + project_name, repre_ids + ) + self._loaded_products_cache.update_data(product_ids) + return self._loaded_products_cache.get_data() + + def is_loaded_products_supported(self): + return self._host is not None + + def is_standard_projects_filter_enabled(self): + 
return self._host is not None + + def _get_project_anatomy(self, project_name): + if not project_name: + return None + cache = self._project_anatomy_cache[project_name] + if not cache.is_valid: + cache.update_data(Anatomy(project_name)) + return cache.get_data() + + def _create_event_system(self): + return QueuedEventSystem() + + def _emit_event(self, topic, data=None): + self._event_system.emit(topic, data or {}, "controller") diff --git a/openpype/tools/ayon_loader/models/__init__.py b/openpype/tools/ayon_loader/models/__init__.py new file mode 100644 index 0000000000..6adfe71d86 --- /dev/null +++ b/openpype/tools/ayon_loader/models/__init__.py @@ -0,0 +1,10 @@ +from .selection import SelectionModel +from .products import ProductsModel +from .actions import LoaderActionsModel + + +__all__ = ( + "SelectionModel", + "ProductsModel", + "LoaderActionsModel", +) diff --git a/openpype/tools/ayon_loader/models/actions.py b/openpype/tools/ayon_loader/models/actions.py new file mode 100644 index 0000000000..3edb04e9eb --- /dev/null +++ b/openpype/tools/ayon_loader/models/actions.py @@ -0,0 +1,870 @@ +import sys +import traceback +import inspect +import copy +import collections +import uuid + +from openpype.client import ( + get_project, + get_assets, + get_subsets, + get_versions, + get_representations, +) +from openpype.pipeline.load import ( + discover_loader_plugins, + SubsetLoaderPlugin, + filter_repre_contexts_by_loader, + get_loader_identifier, + load_with_repre_context, + load_with_subset_context, + load_with_subset_contexts, + LoadError, + IncompatibleLoaderError, +) +from openpype.tools.ayon_utils.models import NestedCacheItem +from openpype.tools.ayon_loader.abstract import ActionItem + +ACTIONS_MODEL_SENDER = "actions.model" +NOT_SET = object() + + +class LoaderActionsModel: + """Model for loader actions. + + This is probably only part of models that requires to use codebase from + 'openpype.client' because of backwards compatibility with loaders logic + which are expecting mongo documents. + + TODOs: + Deprecate 'qargparse' usage in loaders and implement conversion + of 'ActionItem' to data (and 'from_data'). + Use controller to get entities (documents) -> possible only when + loaders are able to handle AYON vs. OpenPype logic. + Add missing site sync logic, and if possible remove it from loaders. + Implement loader actions to replace load plugins. + Ask loader actions to return action items instead of guessing them. + """ + + # Cache loader plugins for some time + # NOTE Set to '0' for development + loaders_cache_lifetime = 30 + + def __init__(self, controller): + self._controller = controller + self._current_context_project = NOT_SET + self._loaders_by_identifier = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + self._product_loaders = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + self._repre_loaders = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + + def reset(self): + """Reset the model with all cached items.""" + + self._current_context_project = NOT_SET + self._loaders_by_identifier.reset() + self._product_loaders.reset() + self._repre_loaders.reset() + + def get_versions_action_items(self, project_name, version_ids): + """Get action items for given version ids. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + + Returns: + list[ActionItem]: List of action items. 
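+
+        Example:
+            A hypothetical call (ids are placeholder values):
+
+                items = model.get_versions_action_items(
+                    "my_project", {"version-id"})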
+ """ + + ( + version_context_by_id, + repre_context_by_id + ) = self._contexts_for_versions( + project_name, + version_ids + ) + return self._get_action_items_for_contexts( + project_name, + version_context_by_id, + repre_context_by_id + ) + + def get_representations_action_items( + self, project_name, representation_ids + ): + """Get action items for given representation ids. + + Args: + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + + Returns: + list[ActionItem]: List of action items. + """ + + ( + product_context_by_id, + repre_context_by_id + ) = self._contexts_for_representations( + project_name, + representation_ids + ) + return self._get_action_items_for_contexts( + project_name, + product_context_by_id, + repre_context_by_id + ) + + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + """Trigger action by identifier. + + Triggers the action by identifier for given contexts. + + Triggers events "load.started" and "load.finished". Finished event + also contains "error_info" key with error information if any + happened. + + Args: + identifier (str): Loader identifier. + options (dict[str, Any]): Loader option values. + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + representation_ids (Iterable[str]): Representation ids. + """ + + event_data = { + "identifier": identifier, + "id": uuid.uuid4().hex, + } + self._controller.emit_event( + "load.started", + event_data, + ACTIONS_MODEL_SENDER, + ) + loader = self._get_loader_by_identifier(project_name, identifier) + if representation_ids is not None: + error_info = self._trigger_representation_loader( + loader, + options, + project_name, + representation_ids, + ) + elif version_ids is not None: + error_info = self._trigger_version_loader( + loader, + options, + project_name, + version_ids, + ) + else: + raise NotImplementedError( + "Invalid arguments to trigger action item") + + event_data["error_info"] = error_info + self._controller.emit_event( + "load.finished", + event_data, + ACTIONS_MODEL_SENDER, + ) + + def _get_current_context_project(self): + """Get current context project name. + + The value is based on controller (host) and cached. + + Returns: + Union[str, None]: Current context project. + """ + + if self._current_context_project is NOT_SET: + context = self._controller.get_current_context() + self._current_context_project = context["project_name"] + return self._current_context_project + + def _get_action_label(self, loader, representation=None): + """Pull label info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + representation (Optional[dict[str, Any]]): Representation data. + + Returns: + str: Action label. + """ + + label = getattr(loader, "label", None) + if label is None: + label = loader.__name__ + if representation: + # Add the representation as suffix + label = "{} ({})".format(label, representation["name"]) + return label + + def _get_action_icon(self, loader): + """Pull icon info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + + Returns: + Union[dict[str, Any], None]: Icon definition based on + loader plugin. + """ + + # Support font-awesome icons using the `.icon` and `.color` + # attributes on plug-ins. 
+ icon = getattr(loader, "icon", None) + if icon is not None and not isinstance(icon, dict): + icon = { + "type": "awesome-font", + "name": icon, + "color": getattr(loader, "color", None) or "white" + } + return icon + + def _get_action_tooltip(self, loader): + """Pull tooltip info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + + Returns: + str: Action tooltip. + """ + + # Add tooltip and statustip from Loader docstring + return inspect.getdoc(loader) + + def _filter_loaders_by_tool_name(self, project_name, loaders): + """Filter loaders by tool name. + + Tool names are based on OpenPype tools loader tool and library + loader tool. The new tool merged both into one tool and the difference + is based only on current project name. + + Args: + project_name (str): Project name. + loaders (list[LoaderPlugin]): List of loader plugins. + + Returns: + list[LoaderPlugin]: Filtered list of loader plugins. + """ + + # Keep filtering by tool name + # - if current context project name is same as project name we do + # expect the tool is used as OpenPype loader tool, otherwise + # as library loader tool. + if project_name == self._get_current_context_project(): + tool_name = "loader" + else: + tool_name = "library_loader" + filtered_loaders = [] + for loader in loaders: + tool_names = getattr(loader, "tool_names", None) + if ( + tool_names is None + or "*" in tool_names + or tool_name in tool_names + ): + filtered_loaders.append(loader) + return filtered_loaders + + def _create_loader_action_item( + self, + loader, + contexts, + project_name, + folder_ids=None, + product_ids=None, + version_ids=None, + representation_ids=None, + repre_name=None, + ): + label = self._get_action_label(loader) + if repre_name: + label = "{} ({})".format(label, repre_name) + return ActionItem( + get_loader_identifier(loader), + label=label, + icon=self._get_action_icon(loader), + tooltip=self._get_action_tooltip(loader), + options=loader.get_options(contexts), + order=loader.order, + project_name=project_name, + folder_ids=folder_ids, + product_ids=product_ids, + version_ids=version_ids, + representation_ids=representation_ids, + ) + + def _get_loaders(self, project_name): + """Loaders with loaded settings for a project. + + Questions: + Project name is required because of settings. Should we actually + pass in current project name instead of project name where + we want to show loaders for? + + Returns: + tuple[list[SubsetLoaderPlugin], list[LoaderPlugin]]: Discovered + loader plugins. + """ + + loaders_by_identifier_c = self._loaders_by_identifier[project_name] + product_loaders_c = self._product_loaders[project_name] + repre_loaders_c = self._repre_loaders[project_name] + if loaders_by_identifier_c.is_valid: + return product_loaders_c.get_data(), repre_loaders_c.get_data() + + # Get all representation->loader combinations available for the + # index under the cursor, so we can list the user the options. 
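+        # Discovery applies project settings to the plugins; results are
+        # narrowed by tool name, split by plugin type and cached below.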
+        available_loaders = self._filter_loaders_by_tool_name(
+            project_name, discover_loader_plugins(project_name)
+        )
+
+        repre_loaders = []
+        product_loaders = []
+        loaders_by_identifier = {}
+        for loader_cls in available_loaders:
+            if not loader_cls.enabled:
+                continue
+
+            identifier = get_loader_identifier(loader_cls)
+            loaders_by_identifier[identifier] = loader_cls
+            if issubclass(loader_cls, SubsetLoaderPlugin):
+                product_loaders.append(loader_cls)
+            else:
+                repre_loaders.append(loader_cls)
+
+        loaders_by_identifier_c.update_data(loaders_by_identifier)
+        product_loaders_c.update_data(product_loaders)
+        repre_loaders_c.update_data(repre_loaders)
+        return product_loaders, repre_loaders
+
+    def _get_loader_by_identifier(self, project_name, identifier):
+        if not self._loaders_by_identifier[project_name].is_valid:
+            self._get_loaders(project_name)
+        loaders_by_identifier_c = self._loaders_by_identifier[project_name]
+        loaders_by_identifier = loaders_by_identifier_c.get_data()
+        return loaders_by_identifier.get(identifier)
+
+    def _actions_sorter(self, action_item):
+        """Sort the loaders by their order and then their label.
+
+        Returns:
+            tuple[int, str]: Sort keys.
+        """
+
+        return action_item.order, action_item.label
+
+    def _get_version_docs(self, project_name, version_ids):
+        """Get version documents for given version ids.
+
+        This function also handles hero versions and copies data from
+        the source versions to them.
+
+        Todos:
+            Remove this function when this is completely rewritten to
+                use AYON calls.
+        """
+
+        version_docs = list(get_versions(
+            project_name, version_ids=version_ids, hero=True
+        ))
+        hero_versions_by_src_id = collections.defaultdict(list)
+        src_hero_version = set()
+        for version_doc in version_docs:
+            if version_doc["type"] != "hero_version":
+                continue
+            # Hero version documents point to their source version
+            # through 'version_id'.
+            version_id = version_doc["version_id"]
+            src_hero_version.add(version_id)
+            hero_versions_by_src_id[version_id].append(version_doc)
+
+        src_versions = []
+        if src_hero_version:
+            src_versions = get_versions(
+                project_name, version_ids=src_hero_version)
+        for src_version in src_versions:
+            src_version_id = src_version["_id"]
+            for hero_version in hero_versions_by_src_id[src_version_id]:
+                hero_version["data"] = copy.deepcopy(src_version["data"])
+
+        return version_docs
+
+    def _contexts_for_versions(self, project_name, version_ids):
+        """Get contexts for given version ids.
+
+        Prepare version contexts for 'SubsetLoaderPlugin' and representation
+        contexts for 'LoaderPlugin' for all children representations of
+        given versions.
+
+        This method is very similar to '_contexts_for_representations' but the
+        queries of documents are called in a different order.
+
+        Args:
+            project_name (str): Project name.
+            version_ids (Iterable[str]): Version ids.
+
+        Returns:
+            tuple[dict[str, dict[str, Any]], dict[str, dict[str, Any]]]:
+                Version contexts by version id and representation contexts
+                by representation id.
+ """ + + # TODO fix hero version + version_context_by_id = {} + repre_context_by_id = {} + if not project_name and not version_ids: + return version_context_by_id, repre_context_by_id + + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = {} + version_docs_by_product_id = collections.defaultdict(list) + for version_doc in version_docs: + version_id = version_doc["_id"] + product_id = version_doc["parent"] + version_docs_by_id[version_id] = version_doc + version_docs_by_product_id[product_id].append(version_doc) + + _product_ids = set(version_docs_by_product_id.keys()) + _product_docs = get_subsets(project_name, subset_ids=_product_ids) + product_docs_by_id = {p["_id"]: p for p in _product_docs} + + _folder_ids = {p["parent"] for p in product_docs_by_id.values()} + _folder_docs = get_assets(project_name, asset_ids=_folder_ids) + folder_docs_by_id = {f["_id"]: f for f in _folder_docs} + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + for version_doc in version_docs: + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + version_context_by_id[product_id] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + } + + repre_docs = get_representations( + project_name, version_ids=version_ids) + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + + repre_context_by_id[repre_doc["_id"]] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + } + + return version_context_by_id, repre_context_by_id + + def _contexts_for_representations(self, project_name, repre_ids): + """Get contexts for given representation ids. + + Prepare version contexts for 'SubsetLoaderPlugin' and representation + contexts for 'LoaderPlugin' for all children representations of + given versions. + + This method is very similar to '_contexts_for_versions' but the + queries of documents are called in a different order. + + Args: + project_name (str): Project name. + repre_ids (Iterable[str]): Representation ids. + + Returns: + tuple[list[dict[str, Any]], list[dict[str, Any]]]: Version and + representation contexts. 
+ """ + + product_context_by_id = {} + repre_context_by_id = {} + if not project_name and not repre_ids: + return product_context_by_id, repre_context_by_id + + repre_docs = list(get_representations( + project_name, representation_ids=repre_ids + )) + version_ids = {r["parent"] for r in repre_docs} + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = { + v["_id"]: v for v in version_docs + } + + product_ids = {v["parent"] for v in version_docs_by_id.values()} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = { + p["_id"]: p for p in product_docs + } + + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = { + f["_id"]: f for f in folder_docs + } + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + for product_id, product_doc in product_docs_by_id.items(): + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + product_context_by_id[product_id] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + } + + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + + repre_context_by_id[repre_doc["_id"]] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + } + return product_context_by_id, repre_context_by_id + + def _get_action_items_for_contexts( + self, + project_name, + version_context_by_id, + repre_context_by_id + ): + """Prepare action items based on contexts. + + Actions are prepared based on discovered loader plugins and contexts. + The context must be valid for the loader plugin. + + Args: + project_name (str): Project name. + version_context_by_id (dict[str, dict[str, Any]]): Version + contexts by version id. 
+            repre_context_by_id (dict[str, dict[str, Any]]): Representation
+                contexts by representation id.
+
+        Returns:
+            list[ActionItem]: List of action items.
+        """
+
+        action_items = []
+        if not version_context_by_id and not repre_context_by_id:
+            return action_items
+
+        product_loaders, repre_loaders = self._get_loaders(project_name)
+
+        repre_contexts_by_name = collections.defaultdict(list)
+        for repre_context in repre_context_by_id.values():
+            repre_name = repre_context["representation"]["name"]
+            repre_contexts_by_name[repre_name].append(repre_context)
+
+        for loader in repre_loaders:
+            # # do not allow download whole repre, select specific repre
+            # if tools_lib.is_sync_loader(loader):
+            #     continue
+
+            for repre_name, repre_contexts in repre_contexts_by_name.items():
+                filtered_repre_contexts = filter_repre_contexts_by_loader(
+                    repre_contexts, loader)
+                if not filtered_repre_contexts:
+                    continue
+
+                repre_ids = set()
+                repre_version_ids = set()
+                repre_product_ids = set()
+                repre_folder_ids = set()
+                for repre_context in filtered_repre_contexts:
+                    repre_ids.add(repre_context["representation"]["_id"])
+                    repre_product_ids.add(repre_context["subset"]["_id"])
+                    repre_version_ids.add(repre_context["version"]["_id"])
+                    repre_folder_ids.add(repre_context["asset"]["_id"])
+
+                item = self._create_loader_action_item(
+                    loader,
+                    repre_contexts,
+                    project_name=project_name,
+                    folder_ids=repre_folder_ids,
+                    product_ids=repre_product_ids,
+                    version_ids=repre_version_ids,
+                    representation_ids=repre_ids,
+                    repre_name=repre_name,
+                )
+                action_items.append(item)
+
+        # Subset Loaders.
+        version_ids = set(version_context_by_id.keys())
+        product_folder_ids = set()
+        product_ids = set()
+        for product_context in version_context_by_id.values():
+            product_ids.add(product_context["subset"]["_id"])
+            product_folder_ids.add(product_context["asset"]["_id"])
+
+        version_contexts = list(version_context_by_id.values())
+        for loader in product_loaders:
+            item = self._create_loader_action_item(
+                loader,
+                version_contexts,
+                project_name=project_name,
+                folder_ids=product_folder_ids,
+                product_ids=product_ids,
+                version_ids=version_ids,
+            )
+            action_items.append(item)
+
+        action_items.sort(key=self._actions_sorter)
+        return action_items
+
+    def _trigger_version_loader(
+        self,
+        loader,
+        options,
+        project_name,
+        version_ids,
+    ):
+        """Trigger version loader.
+
+        This triggers the 'load' method of 'SubsetLoaderPlugin' for the
+        given version ids.
+
+        Note:
+            Even though the plugin is named 'SubsetLoaderPlugin' it actually
+                expects versions and should be named 'VersionLoaderPlugin'.
+                Because a refactor of the load system introducing
+                'LoaderAction' plugins is planned, renaming it is not worth
+                the effort anymore.
+
+        Args:
+            loader (SubsetLoaderPlugin): Loader plugin to use.
+            options (dict): Option values for loader.
+            project_name (str): Project name.
+            version_ids (Iterable[str]): Version ids.
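+
+        Returns:
+            list[tuple]: Error information for loads that failed, as
+                returned by '_load_products_by_loader'.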
+ """ + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + version_docs = self._get_version_docs(project_name, version_ids) + product_ids = {v["parent"] for v in version_docs} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = {f["_id"]: f for f in product_docs} + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = {f["_id"]: f for f in folder_docs} + product_contexts = [] + for version_doc in version_docs: + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + product_contexts.append({ + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + }) + + return self._load_products_by_loader( + loader, product_contexts, options + ) + + def _trigger_representation_loader( + self, + loader, + options, + project_name, + representation_ids, + ): + """Trigger representation loader. + + This triggers 'load' method of 'LoaderPlugin' for given representation + ids. For that are prepared contexts for each representation, with + all parent documents. + + Args: + loader (LoaderPlugin): Loader plugin to use. + options (dict): Option values for loader. + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + """ + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + repre_docs = list(get_representations( + project_name, representation_ids=representation_ids + )) + version_ids = {r["parent"] for r in repre_docs} + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = {v["_id"]: v for v in version_docs} + product_ids = {v["parent"] for v in version_docs_by_id.values()} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = {p["_id"]: p for p in product_docs} + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = {f["_id"]: f for f in folder_docs} + repre_contexts = [] + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + repre_contexts.append({ + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + }) + + return self._load_representations_by_loader( + loader, repre_contexts, options + ) + + def _load_representations_by_loader(self, loader, repre_contexts, options): + """Loops through list of repre_contexts and loads them with one loader + + Args: + loader (LoaderPlugin): Loader plugin to use. + repre_contexts (list[dict]): Full info about selected + representations, containing repre, version, subset, asset and + project documents. + options (dict): Data from options. 
+ """ + + error_info = [] + for repre_context in repre_contexts: + version_doc = repre_context["version"] + if version_doc["type"] == "hero_version": + version_name = "Hero" + else: + version_name = version_doc.get("name") + try: + load_with_repre_context( + loader, + repre_context, + options=options + ) + + except IncompatibleLoaderError as exc: + print(exc) + error_info.append(( + "Incompatible Loader", + None, + repre_context["representation"]["name"], + repre_context["subset"]["name"], + version_name + )) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join(traceback.format_exception( + exc_type, exc_value, exc_traceback + )) + + error_info.append(( + str(exc), + formatted_traceback, + repre_context["representation"]["name"], + repre_context["subset"]["name"], + version_name + )) + return error_info + + def _load_products_by_loader(self, loader, version_contexts, options): + """Triggers load with SubsetLoader type of loaders. + + Warning: + Plugin is named 'SubsetLoader' but version is passed to context + too. + + Args: + loader (SubsetLoder): Loader used to load. + version_contexts (list[dict[str, Any]]): For context for each + version. + options (dict[str, Any]): Options for loader that user could fill. + """ + + error_info = [] + if loader.is_multiple_contexts_compatible: + subset_names = [] + for context in version_contexts: + subset_name = context.get("subset", {}).get("name") or "N/A" + subset_names.append(subset_name) + try: + load_with_subset_contexts( + loader, + version_contexts, + options=options + ) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join(traceback.format_exception( + exc_type, exc_value, exc_traceback + )) + error_info.append(( + str(exc), + formatted_traceback, + None, + ", ".join(subset_names), + None + )) + else: + for version_context in version_contexts: + subset_name = ( + version_context.get("subset", {}).get("name") or "N/A" + ) + try: + load_with_subset_context( + loader, + version_context, + options=options + ) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join( + traceback.format_exception( + exc_type, exc_value, exc_traceback + ) + ) + + error_info.append(( + str(exc), + formatted_traceback, + None, + subset_name, + None + )) + + return error_info diff --git a/openpype/tools/ayon_loader/models/products.py b/openpype/tools/ayon_loader/models/products.py new file mode 100644 index 0000000000..33023cc164 --- /dev/null +++ b/openpype/tools/ayon_loader/models/products.py @@ -0,0 +1,682 @@ +import collections +import contextlib + +import arrow +import ayon_api +from ayon_api.operations import OperationsSession + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.models import NestedCacheItem +from openpype.tools.ayon_loader.abstract import ( + ProductTypeItem, + ProductItem, + VersionItem, + RepreItem, +) + +PRODUCTS_MODEL_SENDER = "products.model" + + +def version_item_from_entity(version): + version_attribs = version["attrib"] + frame_start = version_attribs.get("frameStart") + frame_end = version_attribs.get("frameEnd") + handle_start = version_attribs.get("handleStart") + handle_end = version_attribs.get("handleEnd") + step = 
version_attribs.get("step") + comment = version_attribs.get("comment") + source = version_attribs.get("source") + + frame_range = None + duration = None + handles = None + if frame_start is not None and frame_end is not None: + # Remove superfluous zeros from numbers (3.0 -> 3) to improve + # readability for most frame ranges + frame_start = int(frame_start) + frame_end = int(frame_end) + frame_range = "{}-{}".format(frame_start, frame_end) + duration = frame_end - frame_start + 1 + + if handle_start is not None and handle_end is not None: + handles = "{}-{}".format(int(handle_start), int(handle_end)) + + # NOTE There is also 'updatedAt', should be used that instead? + # TODO skip conversion - converting to '%Y%m%dT%H%M%SZ' is because + # 'PrettyTimeDelegate' expects it + created_at = arrow.get(version["createdAt"]) + published_time = created_at.strftime("%Y%m%dT%H%M%SZ") + author = version["author"] + version_num = version["version"] + is_hero = version_num < 0 + + return VersionItem( + version_id=version["id"], + version=version_num, + is_hero=is_hero, + product_id=version["productId"], + thumbnail_id=version["thumbnailId"], + published_time=published_time, + author=author, + frame_range=frame_range, + duration=duration, + handles=handles, + step=step, + comment=comment, + source=source, + ) + + +def product_item_from_entity( + product_entity, + version_entities, + product_type_items_by_name, + folder_label, + product_in_scene, +): + product_attribs = product_entity["attrib"] + group = product_attribs.get("productGroup") + product_type = product_entity["productType"] + product_type_item = product_type_items_by_name[product_type] + product_type_icon = product_type_item.icon + + product_icon = { + "type": "awesome-font", + "name": "fa.file-o", + "color": get_default_entity_icon_color(), + } + version_items = { + version_entity["id"]: version_item_from_entity(version_entity) + for version_entity in version_entities + } + + return ProductItem( + product_id=product_entity["id"], + product_type=product_type, + product_name=product_entity["name"], + product_icon=product_icon, + product_type_icon=product_type_icon, + product_in_scene=product_in_scene, + group_name=group, + folder_id=product_entity["folderId"], + folder_label=folder_label, + version_items=version_items, + ) + + +def product_type_item_from_data(product_type_data): + # TODO implement icon implementation + # icon = product_type_data["icon"] + # color = product_type_data["color"] + icon = { + "type": "awesome-font", + "name": "fa.folder", + "color": "#0091B2", + } + # TODO implement checked logic + return ProductTypeItem(product_type_data["name"], icon, True) + + +class ProductsModel: + """Model for products, version and representation. + + All of the entities are product based. This model prepares data for UI + and caches it for faster access. + + Note: + Data are not used for actions model because that would require to + break OpenPype compatibility of 'LoaderPlugin's. 
+ """ + + lifetime = 60 # In seconds (minute by default) + + def __init__(self, controller): + self._controller = controller + + # Mapping helpers + # NOTE - mapping must be cleaned up with cache cleanup + self._product_item_by_id = collections.defaultdict(dict) + self._version_item_by_id = collections.defaultdict(dict) + self._product_folder_ids_mapping = collections.defaultdict(dict) + + # Cache helpers + self._product_type_items_cache = NestedCacheItem( + levels=1, default_factory=list, lifetime=self.lifetime) + self._product_items_cache = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + self._repre_items_cache = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + + def reset(self): + """Reset model with all cached data.""" + + self._product_item_by_id.clear() + self._version_item_by_id.clear() + self._product_folder_ids_mapping.clear() + + self._product_type_items_cache.reset() + self._product_items_cache.reset() + self._repre_items_cache.reset() + + def get_product_type_items(self, project_name): + """Product type items for project. + + Args: + project_name (str): Project name. + + Returns: + list[ProductTypeItem]: Product type items. + """ + + cache = self._product_type_items_cache[project_name] + if not cache.is_valid: + product_types = ayon_api.get_project_product_types(project_name) + cache.update_data([ + product_type_item_from_data(product_type) + for product_type in product_types + ]) + return cache.get_data() + + def get_product_items(self, project_name, folder_ids, sender): + """Product items with versions for project and folder ids. + + Product items also contain version items. They're directly connected + to product items in the UI and the separation is not needed. + + Args: + project_name (Union[str, None]): Project name. + folder_ids (Iterable[str]): Folder ids. + sender (Union[str, None]): Who triggered the method. + + Returns: + list[ProductItem]: Product items. + """ + + if not project_name or not folder_ids: + return [] + + project_cache = self._product_items_cache[project_name] + output = [] + folder_ids_to_update = set() + for folder_id in folder_ids: + cache = project_cache[folder_id] + if cache.is_valid: + output.extend(cache.get_data().values()) + else: + folder_ids_to_update.add(folder_id) + + self._refresh_product_items( + project_name, folder_ids_to_update, sender) + + for folder_id in folder_ids_to_update: + cache = project_cache[folder_id] + output.extend(cache.get_data().values()) + return output + + def get_product_item(self, project_name, product_id): + """Get product item based on passed product id. + + This method is using cached items, but if cache is not valid it also + can query the item. + + Args: + project_name (Union[str, None]): Where to look for product. + product_id (Union[str, None]): Product id to receive. + + Returns: + Union[ProductItem, None]: Product item or 'None' if not found. + """ + + if not any((project_name, product_id)): + return None + + product_items_by_id = self._product_item_by_id[project_name] + product_item = product_items_by_id.get(product_id) + if product_item is not None: + return product_item + for product_item in self._query_product_items_by_ids( + project_name, product_ids=[product_id] + ).values(): + return product_item + + def get_product_ids_by_repre_ids(self, project_name, repre_ids): + """Get product ids based on passed representation ids. + + Args: + project_name (str): Where to look for representations. + repre_ids (Iterable[str]): Representation ids. 
+ + Returns: + set[str]: Product ids for passed representation ids. + """ + + # TODO look out how to use single server call + if not repre_ids: + return set() + repres = ayon_api.get_representations( + project_name, repre_ids, fields=["versionId"] + ) + version_ids = {repre["versionId"] for repre in repres} + if not version_ids: + return set() + versions = ayon_api.get_versions( + project_name, version_ids=version_ids, fields=["productId"] + ) + return {v["productId"] for v in versions} + + def get_repre_items(self, project_name, version_ids, sender): + """Get representation items for passed version ids. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + sender (Union[str, None]): Who triggered the method. + + Returns: + list[RepreItem]: Representation items. + """ + + output = [] + if not any((project_name, version_ids)): + return output + + invalid_version_ids = set() + project_cache = self._repre_items_cache[project_name] + for version_id in version_ids: + version_cache = project_cache[version_id] + if version_cache.is_valid: + output.extend(version_cache.get_data().values()) + else: + invalid_version_ids.add(version_id) + + if invalid_version_ids: + self.refresh_representation_items( + project_name, invalid_version_ids, sender + ) + + for version_id in invalid_version_ids: + version_cache = project_cache[version_id] + output.extend(version_cache.get_data().values()) + + return output + + def change_products_group(self, project_name, product_ids, group_name): + """Change group name for passed product ids. + + Group name is stored in 'attrib' of product entity and is used in UI + to group items. + + Method triggers "products.group.changed" event with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name + } + + Args: + project_name (str): Project name. + product_ids (Iterable[str]): Product ids to change group name for. + group_name (str): Group name to set. 
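+ + Note: + An empty or 'None' group name ungroups the products.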
+ """ + + if not product_ids: + return + + product_items = self._get_product_items_by_id( + project_name, product_ids + ) + if not product_items: + return + + session = OperationsSession() + folder_ids = set() + for product_item in product_items.values(): + session.update_entity( + project_name, + "product", + product_item.product_id, + {"attrib": {"productGroup": group_name}} + ) + folder_ids.add(product_item.folder_id) + product_item.group_name = group_name + + session.commit() + self._controller.emit_event( + "products.group.changed", + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name, + }, + PRODUCTS_MODEL_SENDER + ) + + def _get_product_items_by_id(self, project_name, product_ids): + product_item_by_id = self._product_item_by_id[project_name] + missing_product_ids = set() + output = {} + for product_id in product_ids: + product_item = product_item_by_id.get(product_id) + if product_item is not None: + output[product_id] = product_item + else: + missing_product_ids.add(product_id) + + output.update( + self._query_product_items_by_ids( + project_name, missing_product_ids + ) + ) + return output + + def _get_version_items_by_id(self, project_name, version_ids): + version_item_by_id = self._version_item_by_id[project_name] + missing_version_ids = set() + output = {} + for version_id in version_ids: + version_item = version_item_by_id.get(version_id) + if version_item is not None: + output[version_id] = version_item + else: + missing_version_ids.add(version_id) + + output.update( + self._query_version_items_by_ids( + project_name, missing_version_ids + ) + ) + return output + + def _create_product_items( + self, + project_name, + products, + versions, + folder_items=None, + product_type_items=None, + ): + if folder_items is None: + folder_items = self._controller.get_folder_items(project_name) + + if product_type_items is None: + product_type_items = self.get_product_type_items(project_name) + + loaded_product_ids = self._controller.get_loaded_product_ids() + + versions_by_product_id = collections.defaultdict(list) + for version in versions: + versions_by_product_id[version["productId"]].append(version) + product_type_items_by_name = { + product_type_item.name: product_type_item + for product_type_item in product_type_items + } + output = {} + for product in products: + product_id = product["id"] + folder_id = product["folderId"] + folder_item = folder_items.get(folder_id) + if not folder_item: + continue + versions = versions_by_product_id[product_id] + if not versions: + continue + product_item = product_item_from_entity( + product, + versions, + product_type_items_by_name, + folder_item.label, + product_id in loaded_product_ids, + ) + output[product_id] = product_item + return output + + def _query_product_items_by_ids( + self, + project_name, + folder_ids=None, + product_ids=None, + folder_items=None + ): + """Query product items. + + This method does get from, or store to, cache attributes. + + One of 'product_ids' or 'folder_ids' must be passed to the method. + + Args: + project_name (str): Project name. + folder_ids (Optional[Iterable[str]]): Folder ids under which are + products. + product_ids (Optional[Iterable[str]]): Product ids to use. + folder_items (Optional[Dict[str, FolderItem]]): Prepared folder + items from controller. + + Returns: + dict[str, ProductItem]: Product items by product id. 
+ """ + + if not folder_ids and not product_ids: + return {} + + kwargs = {} + if folder_ids is not None: + kwargs["folder_ids"] = folder_ids + + if product_ids is not None: + kwargs["product_ids"] = product_ids + + products = list(ayon_api.get_products(project_name, **kwargs)) + product_ids = {product["id"] for product in products} + + versions = ayon_api.get_versions( + project_name, product_ids=product_ids + ) + + return self._create_product_items( + project_name, products, versions, folder_items=folder_items + ) + + def _query_version_items_by_ids(self, project_name, version_ids): + versions = list(ayon_api.get_versions( + project_name, version_ids=version_ids + )) + product_ids = {version["productId"] for version in versions} + products = list(ayon_api.get_products( + project_name, product_ids=product_ids + )) + product_items = self._create_product_items( + project_name, products, versions + ) + version_items = {} + for product_item in product_items.values(): + version_items.update(product_item.version_items) + return version_items + + def _clear_product_version_items(self, project_name, folder_ids): + """Clear product and version items from memory. + + When products are re-queried for a folders, the old product and version + items in '_product_item_by_id' and '_version_item_by_id' should + be cleaned up from memory. And mapping in stored in + '_product_folder_ids_mapping' is not relevant either. + + Args: + project_name (str): Name of project. + folder_ids (Iterable[str]): Folder ids which are being refreshed. + """ + + project_mapping = self._product_folder_ids_mapping[project_name] + if not project_mapping: + return + + product_item_by_id = self._product_item_by_id[project_name] + version_item_by_id = self._version_item_by_id[project_name] + for folder_id in folder_ids: + product_ids = project_mapping.pop(folder_id, None) + if not product_ids: + continue + + for product_id in product_ids: + product_item = product_item_by_id.pop(product_id, None) + if product_item is None: + continue + for version_item in product_item.version_items.values(): + version_item_by_id.pop(version_item.version_id, None) + + def _refresh_product_items(self, project_name, folder_ids, sender): + """Refresh product items and store them in cache. + + Args: + project_name (str): Name of project. + folder_ids (Iterable[str]): Folder ids which are being refreshed. + sender (Union[str, None]): Who triggered the refresh. 
+ """ + + if not project_name or not folder_ids: + return + + self._clear_product_version_items(project_name, folder_ids) + + project_mapping = self._product_folder_ids_mapping[project_name] + product_item_by_id = self._product_item_by_id[project_name] + version_item_by_id = self._version_item_by_id[project_name] + + for folder_id in folder_ids: + project_mapping[folder_id] = set() + + with self._product_refresh_event_manager( + project_name, folder_ids, sender + ): + folder_items = self._controller.get_folder_items(project_name) + items_by_folder_id = { + folder_id: {} + for folder_id in folder_ids + } + product_items_by_id = self._query_product_items_by_ids( + project_name, + folder_ids=folder_ids, + folder_items=folder_items + ) + for product_id, product_item in product_items_by_id.items(): + folder_id = product_item.folder_id + items_by_folder_id[product_item.folder_id][product_id] = ( + product_item + ) + + project_mapping[folder_id].add(product_id) + product_item_by_id[product_id] = product_item + for version_id, version_item in ( + product_item.version_items.items() + ): + version_item_by_id[version_id] = version_item + + project_cache = self._product_items_cache[project_name] + for folder_id, product_items in items_by_folder_id.items(): + project_cache[folder_id].update_data(product_items) + + @contextlib.contextmanager + def _product_refresh_event_manager( + self, project_name, folder_ids, sender + ): + self._controller.emit_event( + "products.refresh.started", + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "products.refresh.finished", + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + + def refresh_representation_items( + self, project_name, version_ids, sender + ): + if not any((project_name, version_ids)): + return + self._controller.emit_event( + "model.representations.refresh.started", + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + failed = False + try: + self._refresh_representation_items(project_name, version_ids) + except Exception: + # TODO add more information about failed refresh + failed = True + + self._controller.emit_event( + "model.representations.refresh.finished", + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender, + "failed": failed, + }, + PRODUCTS_MODEL_SENDER + ) + + def _refresh_representation_items(self, project_name, version_ids): + representations = list(ayon_api.get_representations( + project_name, + version_ids=version_ids, + fields=["id", "name", "versionId"] + )) + + version_items_by_id = self._get_version_items_by_id( + project_name, version_ids + ) + product_ids = { + version_item.product_id + for version_item in version_items_by_id.values() + } + product_items_by_id = self._get_product_items_by_id( + project_name, product_ids + ) + repre_icon = { + "type": "awesome-font", + "name": "fa.file-o", + "color": get_default_entity_icon_color(), + } + repre_items_by_version_id = collections.defaultdict(dict) + for representation in representations: + version_id = representation["versionId"] + version_item = version_items_by_id.get(version_id) + if version_item is None: + continue + product_item = product_items_by_id.get(version_item.product_id) + if product_item is None: + continue + repre_id = representation["id"] + repre_item = RepreItem( + repre_id, + 
representation["name"], + repre_icon, + product_item.product_name, + product_item.folder_label, + ) + repre_items_by_version_id[version_id][repre_id] = repre_item + + project_cache = self._repre_items_cache[project_name] + for version_id, repre_items in repre_items_by_version_id.items(): + version_cache = project_cache[version_id] + version_cache.update_data(repre_items) diff --git a/openpype/tools/ayon_loader/models/selection.py b/openpype/tools/ayon_loader/models/selection.py new file mode 100644 index 0000000000..326ff835f6 --- /dev/null +++ b/openpype/tools/ayon_loader/models/selection.py @@ -0,0 +1,85 @@ +class SelectionModel(object): + """Model handling selection changes. + + Triggering events: + - "selection.project.changed" + - "selection.folders.changed" + - "selection.versions.changed" + """ + + event_source = "selection.model" + + def __init__(self, controller): + self._controller = controller + + self._project_name = None + self._folder_ids = set() + self._version_ids = set() + self._representation_ids = set() + + def get_selected_project_name(self): + return self._project_name + + def set_selected_project(self, project_name): + if self._project_name == project_name: + return + + self._project_name = project_name + self._controller.emit_event( + "selection.project.changed", + {"project_name": self._project_name}, + self.event_source + ) + + def get_selected_folder_ids(self): + return self._folder_ids + + def set_selected_folders(self, folder_ids): + if folder_ids == self._folder_ids: + return + + self._folder_ids = folder_ids + self._controller.emit_event( + "selection.folders.changed", + { + "project_name": self._project_name, + "folder_ids": folder_ids, + }, + self.event_source + ) + + def get_selected_version_ids(self): + return self._version_ids + + def set_selected_versions(self, version_ids): + if version_ids == self._version_ids: + return + + self._version_ids = version_ids + self._controller.emit_event( + "selection.versions.changed", + { + "project_name": self._project_name, + "folder_ids": self._folder_ids, + "version_ids": self._version_ids, + }, + self.event_source + ) + + def get_selected_representation_ids(self): + return self._representation_ids + + def set_selected_representations(self, repre_ids): + if repre_ids == self._representation_ids: + return + + self._representation_ids = repre_ids + self._controller.emit_event( + "selection.representations.changed", + { + "project_name": self._project_name, + "folder_ids": self._folder_ids, + "version_ids": self._version_ids, + "representation_ids": self._representation_ids, + } + ) diff --git a/openpype/tools/ayon_loader/ui/__init__.py b/openpype/tools/ayon_loader/ui/__init__.py new file mode 100644 index 0000000000..41e4418641 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/__init__.py @@ -0,0 +1,6 @@ +from .window import LoaderWindow + + +__all__ = ( + "LoaderWindow", +) diff --git a/openpype/tools/ayon_loader/ui/actions_utils.py b/openpype/tools/ayon_loader/ui/actions_utils.py new file mode 100644 index 0000000000..a269b643dc --- /dev/null +++ b/openpype/tools/ayon_loader/ui/actions_utils.py @@ -0,0 +1,118 @@ +import uuid + +from qtpy import QtWidgets, QtGui +import qtawesome + +from openpype.lib.attribute_definitions import AbstractAttrDef +from openpype.tools.attribute_defs import AttributeDefinitionsDialog +from openpype.tools.utils.widgets import ( + OptionalMenu, + OptionalAction, + OptionDialog, +) +from openpype.tools.ayon_utils.widgets import get_qt_icon + + +def show_actions_menu(action_items, 
global_point, one_item_selected, parent): + selected_action_item = None + selected_options = None + + if not action_items: + menu = QtWidgets.QMenu(parent) + action = _get_no_loader_action(menu, one_item_selected) + menu.addAction(action) + menu.exec_(global_point) + return selected_action_item, selected_options + + menu = OptionalMenu(parent) + + action_items_by_id = {} + for action_item in action_items: + item_id = uuid.uuid4().hex + action_items_by_id[item_id] = action_item + item_options = action_item.options + icon = get_qt_icon(action_item.icon) + use_option = bool(item_options) + action = OptionalAction( + action_item.label, + icon, + use_option, + menu + ) + if use_option: + # Add option box tip + action.set_option_tip(item_options) + + tip = action_item.tooltip + if tip: + action.setToolTip(tip) + action.setStatusTip(tip) + + action.setData(item_id) + + menu.addAction(action) + + action = menu.exec_(global_point) + if action is not None: + item_id = action.data() + selected_action_item = action_items_by_id.get(item_id) + + if selected_action_item is not None: + selected_options = _get_options(action, selected_action_item, parent) + + return selected_action_item, selected_options + + +def _get_options(action, action_item, parent): + """Provides dialog to select value from loader provided options. + + Loader can provide static or dynamically created options based on + AttributeDefinitions, and for backwards compatibility qargparse. + + Args: + action (OptionalAction) - Action object in menu. + action_item (ActionItem) - Action item with context information. + parent (QtCore.QObject) - Parent object for dialog. + + Returns: + Union[dict[str, Any], None]: Selected value from attributes or + 'None' if dialog was cancelled. + """ + + # Pop option dialog + options = action_item.options + if not getattr(action, "optioned", False) or not options: + return {} + + if isinstance(options[0], AbstractAttrDef): + qargparse_options = False + dialog = AttributeDefinitionsDialog(options, parent) + else: + qargparse_options = True + dialog = OptionDialog(parent) + dialog.create(options) + + dialog.setWindowTitle(action.label + " Options") + + if not dialog.exec_(): + return None + + # Get option + if qargparse_options: + return dialog.parse() + return dialog.get_values() + + +def _get_no_loader_action(menu, one_item_selected): + """Creates dummy no loader option in 'menu'""" + + if one_item_selected: + submsg = "this version." + else: + submsg = "your selection." 
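+ # Compose the label for the dummy "no compatible loaders" menu entry.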
+ msg = "No compatible loaders for {}".format(submsg) + icon = qtawesome.icon( + "fa.exclamation", + color=QtGui.QColor(255, 51, 0) + ) + return QtWidgets.QAction(icon, ("*" + msg), menu) diff --git a/openpype/tools/ayon_loader/ui/folders_widget.py b/openpype/tools/ayon_loader/ui/folders_widget.py new file mode 100644 index 0000000000..53351f76d9 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/folders_widget.py @@ -0,0 +1,407 @@ +import qtpy +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + DeselectableTreeView, +) +from openpype.style import get_objected_colors + +from openpype.tools.ayon_utils.widgets import ( + FoldersModel, + FOLDERS_MODEL_SENDER_NAME, +) +from openpype.tools.ayon_utils.widgets.folders_widget import FOLDER_ID_ROLE + +if qtpy.API == "pyside": + from PySide.QtGui import QStyleOptionViewItemV4 +elif qtpy.API == "pyqt4": + from PyQt4.QtGui import QStyleOptionViewItemV4 + +UNDERLINE_COLORS_ROLE = QtCore.Qt.UserRole + 50 + + +class UnderlinesFolderDelegate(QtWidgets.QItemDelegate): + """Item delegate drawing bars under folder label. + + This is used in loader tool. Multiselection of folders + may group products by name under colored groups. Selected color groups are + then propagated back to selected folders as underlines. + """ + bar_height = 3 + + def __init__(self, *args, **kwargs): + super(UnderlinesFolderDelegate, self).__init__(*args, **kwargs) + colors = get_objected_colors("loader", "asset-view") + self._selected_color = colors["selected"].get_qcolor() + self._hover_color = colors["hover"].get_qcolor() + self._selected_hover_color = colors["selected-hover"].get_qcolor() + + def sizeHint(self, option, index): + """Add bar height to size hint.""" + result = super(UnderlinesFolderDelegate, self).sizeHint(option, index) + height = result.height() + result.setHeight(height + self.bar_height) + + return result + + def paint(self, painter, option, index): + """Replicate painting of an item and draw color bars if needed.""" + # Qt4 compat + if qtpy.API in ("pyside", "pyqt4"): + option = QStyleOptionViewItemV4(option) + + painter.save() + + item_rect = QtCore.QRect(option.rect) + item_rect.setHeight(option.rect.height() - self.bar_height) + + subset_colors = index.data(UNDERLINE_COLORS_ROLE) or [] + + subset_colors_width = 0 + if subset_colors: + subset_colors_width = option.rect.width() / len(subset_colors) + + subset_rects = [] + counter = 0 + for subset_c in subset_colors: + new_color = None + new_rect = None + if subset_c: + new_color = QtGui.QColor(subset_c) + + new_rect = QtCore.QRect( + option.rect.left() + (counter * subset_colors_width), + option.rect.top() + ( + option.rect.height() - self.bar_height + ), + subset_colors_width, + self.bar_height + ) + subset_rects.append((new_color, new_rect)) + counter += 1 + + # Background + if option.state & QtWidgets.QStyle.State_Selected: + if len(subset_colors) == 0: + item_rect.setTop(item_rect.top() + (self.bar_height / 2)) + + if option.state & QtWidgets.QStyle.State_MouseOver: + bg_color = self._selected_hover_color + else: + bg_color = self._selected_color + else: + item_rect.setTop(item_rect.top() + (self.bar_height / 2)) + if option.state & QtWidgets.QStyle.State_MouseOver: + bg_color = self._hover_color + else: + bg_color = QtGui.QColor() + bg_color.setAlpha(0) + + # When not needed to do a rounded corners (easier and without + # painter restore): + painter.fillRect( + option.rect, + QtGui.QBrush(bg_color) + ) + + if option.state & 
QtWidgets.QStyle.State_Selected: + for color, subset_rect in subset_rects: + if not color or not subset_rect: + continue + painter.fillRect(subset_rect, QtGui.QBrush(color)) + + # Icon + icon_index = index.model().index( + index.row(), index.column(), index.parent() + ) + # - Default icon_rect if not icon + icon_rect = QtCore.QRect( + item_rect.left(), + item_rect.top(), + # To make sure it's same size all the time + option.rect.height() - self.bar_height, + option.rect.height() - self.bar_height + ) + icon = index.model().data(icon_index, QtCore.Qt.DecorationRole) + + if icon: + mode = QtGui.QIcon.Normal + if not (option.state & QtWidgets.QStyle.State_Enabled): + mode = QtGui.QIcon.Disabled + elif option.state & QtWidgets.QStyle.State_Selected: + mode = QtGui.QIcon.Selected + + if isinstance(icon, QtGui.QPixmap): + icon = QtGui.QIcon(icon) + option.decorationSize = icon.size() / icon.devicePixelRatio() + + elif isinstance(icon, QtGui.QColor): + pixmap = QtGui.QPixmap(option.decorationSize) + pixmap.fill(icon) + icon = QtGui.QIcon(pixmap) + + elif isinstance(icon, QtGui.QImage): + icon = QtGui.QIcon(QtGui.QPixmap.fromImage(icon)) + option.decorationSize = icon.size() / icon.devicePixelRatio() + + elif isinstance(icon, QtGui.QIcon): + state = QtGui.QIcon.Off + if option.state & QtWidgets.QStyle.State_Open: + state = QtGui.QIcon.On + actual_size = option.icon.actualSize( + option.decorationSize, mode, state + ) + option.decorationSize = QtCore.QSize( + min(option.decorationSize.width(), actual_size.width()), + min(option.decorationSize.height(), actual_size.height()) + ) + + state = QtGui.QIcon.Off + if option.state & QtWidgets.QStyle.State_Open: + state = QtGui.QIcon.On + + icon.paint( + painter, icon_rect, + QtCore.Qt.AlignLeft, mode, state + ) + + # Text + text_rect = QtCore.QRect( + icon_rect.left() + icon_rect.width() + 2, + item_rect.top(), + item_rect.width(), + item_rect.height() + ) + + painter.drawText( + text_rect, QtCore.Qt.AlignVCenter, + index.data(QtCore.Qt.DisplayRole) + ) + + painter.restore() + + +class LoaderFoldersModel(FoldersModel): + def __init__(self, *args, **kwargs): + super(LoaderFoldersModel, self).__init__(*args, **kwargs) + + self._colored_items = set() + + def _fill_item_data(self, item, folder_item): + """ + + Args: + item (QtGui.QStandardItem): Item to fill data. + folder_item (FolderItem): Folder item. + """ + + super(LoaderFoldersModel, self)._fill_item_data(item, folder_item) + + def set_merged_products_selection(self, items): + changes = { + folder_id: None + for folder_id in self._colored_items + } + + all_folder_ids = set() + for item in items: + folder_ids = item["folder_ids"] + all_folder_ids.update(folder_ids) + + for folder_id in all_folder_ids: + changes[folder_id] = [] + + for item in items: + item_color = item["color"] + item_folder_ids = item["folder_ids"] + for folder_id in all_folder_ids: + folder_color = ( + item_color + if folder_id in item_folder_ids + else None + ) + changes[folder_id].append(folder_color) + + for folder_id, color_value in changes.items(): + item = self._items_by_id.get(folder_id) + if item is not None: + item.setData(color_value, UNDERLINE_COLORS_ROLE) + + self._colored_items = all_folder_ids + + +class LoaderFoldersWidget(QtWidgets.QWidget): + """Folders widget. + + Widget that handles folders view, model and selection. + + Expected selection handling is disabled by default. If enabled, the + widget will handle the expected in predefined way. 
Widget is listening + to event 'expected_selection_changed' with expected event data below, + the same data must be available when called method + 'get_expected_selection_data' on controller. + + { + "folder": { + "current": bool, # Folder is what should be set now + "folder_id": Union[str, None], # Folder id that should be selected + }, + ... + } + + Selection is confirmed by calling method 'expected_folder_selected' on + controller. + + + Args: + controller (AbstractWorkfilesFrontend): The control object. + parent (QtWidgets.QWidget): The parent widget. + """ + + refreshed = QtCore.Signal() + + def __init__(self, controller, parent): + super(LoaderFoldersWidget, self).__init__(parent) + + folders_view = DeselectableTreeView(self) + folders_view.setHeaderHidden(True) + folders_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection) + + folders_model = LoaderFoldersModel(controller) + folders_proxy_model = RecursiveSortFilterProxyModel() + folders_proxy_model.setSourceModel(folders_model) + folders_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + folders_label_delegate = UnderlinesFolderDelegate(folders_view) + + folders_view.setModel(folders_proxy_model) + folders_view.setItemDelegate(folders_label_delegate) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(folders_view, 1) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_change, + ) + controller.register_event_callback( + "folders.refresh.finished", + self._on_folders_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = folders_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + folders_model.refreshed.connect(self._on_model_refresh) + + self._controller = controller + self._folders_view = folders_view + self._folders_model = folders_model + self._folders_proxy_model = folders_proxy_model + self._folders_label_delegate = folders_label_delegate + + self._expected_selection = None + + def set_name_filter(self, name): + """Set filter of folder name. + + Args: + name (str): The string filter. + """ + + self._folders_proxy_model.setFilterFixedString(name) + + def set_merged_products_selection(self, items): + """ + + Args: + items (list[dict[str, Any]]): List of merged items with folder + ids. 
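+ Each item is expected to contain "color" and "folder_ids" keys.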
+ """ + + self._folders_model.set_merged_products_selection(items) + + def refresh(self): + self._folders_model.refresh() + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self._set_project_name(project_name) + + def _set_project_name(self, project_name): + self._folders_model.set_project_name(project_name) + + def _clear(self): + self._folders_model.clear() + + def _on_folders_refresh_finished(self, event): + if event["sender"] != FOLDERS_MODEL_SENDER_NAME: + self._set_project_name(event["project_name"]) + + def _on_controller_refresh(self): + self._update_expected_selection() + + def _on_model_refresh(self): + if self._expected_selection: + self._set_expected_selection() + self._folders_proxy_model.sort(0) + self.refreshed.emit() + + def _get_selected_item_ids(self): + selection_model = self._folders_view.selectionModel() + item_ids = [] + for index in selection_model.selectedIndexes(): + item_id = index.data(FOLDER_ID_ROLE) + if item_id is not None: + item_ids.append(item_id) + return item_ids + + def _on_selection_change(self): + item_ids = self._get_selected_item_ids() + self._controller.set_selected_folders(item_ids) + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _update_expected_selection(self, expected_data=None): + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + folder_data = expected_data.get("folder") + if not folder_data or not folder_data["current"]: + return + + folder_id = folder_data["id"] + self._expected_selection = folder_id + if not self._folders_model.is_refreshing: + self._set_expected_selection() + + def _set_expected_selection(self): + folder_id = self._expected_selection + selected_ids = self._get_selected_item_ids() + self._expected_selection = None + skip_selection = ( + folder_id is None + or ( + folder_id in selected_ids + and len(selected_ids) == 1 + ) + ) + if not skip_selection: + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + proxy_index = self._folders_proxy_model.mapFromSource(index) + self._folders_view.setCurrentIndex(proxy_index) + self._controller.expected_folder_selected(folder_id) diff --git a/openpype/tools/ayon_loader/ui/info_widget.py b/openpype/tools/ayon_loader/ui/info_widget.py new file mode 100644 index 0000000000..b7d1b0811f --- /dev/null +++ b/openpype/tools/ayon_loader/ui/info_widget.py @@ -0,0 +1,141 @@ +import datetime + +from qtpy import QtWidgets + +from openpype.tools.utils.lib import format_version + + +class VersionTextEdit(QtWidgets.QTextEdit): + """QTextEdit that displays version specific information. + + This also overrides the context menu to add actions like copying + source path to clipboard or copying the raw data of the version + to clipboard. + + """ + def __init__(self, controller, parent): + super(VersionTextEdit, self).__init__(parent=parent) + + self._version_item = None + self._product_item = None + + self._controller = controller + + # Reset + self.set_current_item() + + def set_current_item(self, product_item=None, version_item=None): + """ + + Args: + product_item (Union[ProductItem, None]): Product item. + version_item (Union[VersionItem, None]): Version item to display. 
+ """ + + self._product_item = product_item + self._version_item = version_item + + if version_item is None: + # Reset state to empty + self.setText("") + return + + version_label = format_version(abs(version_item.version)) + if version_item.version < 0: + version_label = "Hero version {}".format(version_label) + + # Define readable creation timestamp + created = version_item.published_time + created = datetime.datetime.strptime(created, "%Y%m%dT%H%M%SZ") + created = datetime.datetime.strftime(created, "%b %d %Y %H:%M") + + comment = version_item.comment or "No comment" + source = version_item.source or "No source" + + self.setHtml( + ( + "

<h2>{product_name}</h2>" + "<h3>{version_label}</h3>" + "<b>Comment</b><br>" + "{comment}<br><br>" + + "<b>Created</b><br>" + "{created}<br><br>" + + "<b>Source</b><br>
" + "{source}" + ).format( + product_name=product_item.product_name, + version_label=version_label, + comment=comment, + created=created, + source=source, + ) + ) + + def contextMenuEvent(self, event): + """Context menu with additional actions""" + menu = self.createStandardContextMenu() + + # Add additional actions when any text, so we can assume + # the version is set. + source = None + if self._version_item is not None: + source = self._version_item.source + + if source: + menu.addSeparator() + action = QtWidgets.QAction( + "Copy source path to clipboard", menu + ) + action.triggered.connect(self._on_copy_source) + menu.addAction(action) + + menu.exec_(event.globalPos()) + + def _on_copy_source(self): + """Copy formatted source path to clipboard.""" + + source = self._version_item.source + if not source: + return + + filled_source = self._controller.fill_root_in_source(source) + clipboard = QtWidgets.QApplication.clipboard() + clipboard.setText(filled_source) + + +class InfoWidget(QtWidgets.QWidget): + """A Widget that display information about a specific version""" + def __init__(self, controller, parent): + super(InfoWidget, self).__init__(parent=parent) + + label_widget = QtWidgets.QLabel("Version Info", self) + info_text_widget = VersionTextEdit(controller, self) + info_text_widget.setReadOnly(True) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(label_widget, 0) + layout.addWidget(info_text_widget, 1) + + self._controller = controller + + self._info_text_widget = info_text_widget + self._label_widget = label_widget + + def set_selected_version_info(self, project_name, items): + if not items or not project_name: + self._info_text_widget.set_current_item() + return + first_item = next(iter(items)) + product_item = self._controller.get_product_item( + project_name, + first_item["product_id"], + ) + version_id = first_item["version_id"] + version_item = None + if product_item is not None: + version_item = product_item.version_items.get(version_id) + + self._info_text_widget.set_current_item(product_item, version_item) diff --git a/openpype/tools/ayon_loader/ui/product_group_dialog.py b/openpype/tools/ayon_loader/ui/product_group_dialog.py new file mode 100644 index 0000000000..5737ce58a4 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/product_group_dialog.py @@ -0,0 +1,45 @@ +from qtpy import QtWidgets + +from openpype.tools.utils import PlaceholderLineEdit + + +class ProductGroupDialog(QtWidgets.QDialog): + def __init__(self, controller, parent): + super(ProductGroupDialog, self).__init__(parent) + self.setWindowTitle("Grouping products") + self.setMinimumWidth(250) + self.setModal(True) + + main_label = QtWidgets.QLabel("Group Name", self) + + group_name_input = PlaceholderLineEdit(self) + group_name_input.setPlaceholderText("Remain blank to ungroup..") + + group_btn = QtWidgets.QPushButton("Apply", self) + group_btn.setAutoDefault(True) + group_btn.setDefault(True) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(main_label, 0) + layout.addWidget(group_name_input, 0) + layout.addWidget(group_btn, 0) + + group_btn.clicked.connect(self._on_apply_click) + + self._project_name = None + self._product_ids = set() + + self._controller = controller + self._group_btn = group_btn + self._group_name_input = group_name_input + + def set_product_ids(self, project_name, product_ids): + self._project_name = project_name + self._product_ids = product_ids + + def _on_apply_click(self): + group_name = 
self._group_name_input.text().strip() or None + self._controller.change_products_group( + self._project_name, self._product_ids, group_name + ) + self.close() diff --git a/openpype/tools/ayon_loader/ui/product_types_widget.py b/openpype/tools/ayon_loader/ui/product_types_widget.py new file mode 100644 index 0000000000..a84a7ff846 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/product_types_widget.py @@ -0,0 +1,220 @@ +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.tools.ayon_utils.widgets import get_qt_icon + +PRODUCT_TYPE_ROLE = QtCore.Qt.UserRole + 1 + + +class ProductTypesQtModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + filter_changed = QtCore.Signal() + + def __init__(self, controller): + super(ProductTypesQtModel, self).__init__() + self._controller = controller + + self._refreshing = False + self._bulk_change = False + self._items_by_name = {} + + def is_refreshing(self): + return self._refreshing + + def get_filter_info(self): + """Product types filtering info. + + Returns: + dict[str, bool]: Filtering value by product type name. False value + means to hide product type. + """ + + return { + name: item.checkState() == QtCore.Qt.Checked + for name, item in self._items_by_name.items() + } + + def refresh(self, project_name): + self._refreshing = True + product_type_items = self._controller.get_product_type_items( + project_name) + + items_to_remove = set(self._items_by_name.keys()) + new_items = [] + for product_type_item in product_type_items: + name = product_type_item.name + items_to_remove.discard(name) + item = self._items_by_name.get(product_type_item.name) + if item is None: + item = QtGui.QStandardItem(name) + item.setData(name, PRODUCT_TYPE_ROLE) + item.setEditable(False) + item.setCheckable(True) + new_items.append(item) + self._items_by_name[name] = item + + item.setCheckState( + QtCore.Qt.Checked + if product_type_item.checked + else QtCore.Qt.Unchecked + ) + icon = get_qt_icon(product_type_item.icon) + item.setData(icon, QtCore.Qt.DecorationRole) + + root_item = self.invisibleRootItem() + if new_items: + root_item.appendRows(new_items) + + for name in items_to_remove: + item = self._items_by_name.pop(name) + root_item.removeRow(item.row()) + + self._refreshing = False + self.refreshed.emit() + + def setData(self, index, value, role=None): + checkstate_changed = False + if role is None: + role = QtCore.Qt.EditRole + elif role == QtCore.Qt.CheckStateRole: + checkstate_changed = True + output = super(ProductTypesQtModel, self).setData(index, value, role) + if checkstate_changed and not self._bulk_change: + self.filter_changed.emit() + return output + + def change_state_for_all(self, checked): + if self._items_by_name: + self.change_states(checked, self._items_by_name.keys()) + + def change_states(self, checked, product_types): + product_types = set(product_types) + if not product_types: + return + + if checked is None: + state = None + elif checked: + state = QtCore.Qt.Checked + else: + state = QtCore.Qt.Unchecked + + self._bulk_change = True + + changed = False + for product_type in product_types: + item = self._items_by_name.get(product_type) + if item is None: + continue + new_state = state + item_checkstate = item.checkState() + if new_state is None: + if item_checkstate == QtCore.Qt.Checked: + new_state = QtCore.Qt.Unchecked + else: + new_state = QtCore.Qt.Checked + elif item_checkstate == new_state: + continue + changed = True + item.setCheckState(new_state) + + self._bulk_change = False + + if changed: + self.filter_changed.emit() + + 
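The tri-state logic in 'ProductTypesQtModel.change_states' above resolves the 'checked' argument per item: 'True'/'False' force a check state, while 'None' toggles whatever state the item currently has (which is what the Space key handling in 'ProductTypesView' below relies on). A minimal standalone sketch of that resolution, assuming qtpy is available; the helper name 'next_check_state' is illustrative and not part of this diff:

```python
from qtpy import QtCore


def next_check_state(checked, current_state):
    """Resolve the new check state the way 'change_states' does."""
    if checked is None:
        # 'None' means toggle: checked items flip to unchecked
        # and vice versa.
        if current_state == QtCore.Qt.Checked:
            return QtCore.Qt.Unchecked
        return QtCore.Qt.Checked
    # Explicit True/False force the corresponding state.
    return QtCore.Qt.Checked if checked else QtCore.Qt.Unchecked
```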
+class ProductTypesView(QtWidgets.QListView): + filter_changed = QtCore.Signal() + + def __init__(self, controller, parent): + super(ProductTypesView, self).__init__(parent) + + self.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + self.setAlternatingRowColors(True) + self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + product_types_model = ProductTypesQtModel(controller) + product_types_proxy_model = QtCore.QSortFilterProxyModel() + product_types_proxy_model.setSourceModel(product_types_model) + + self.setModel(product_types_proxy_model) + + product_types_model.refreshed.connect(self._on_refresh_finished) + product_types_model.filter_changed.connect(self._on_filter_change) + self.customContextMenuRequested.connect(self._on_context_menu) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + + self._controller = controller + + self._product_types_model = product_types_model + self._product_types_proxy_model = product_types_proxy_model + + def get_filter_info(self): + return self._product_types_model.get_filter_info() + + def _on_project_change(self, event): + project_name = event["project_name"] + self._product_types_model.refresh(project_name) + + def _on_refresh_finished(self): + self.filter_changed.emit() + + def _on_filter_change(self): + if not self._product_types_model.is_refreshing(): + self.filter_changed.emit() + + def _change_selection_state(self, checkstate): + selection_model = self.selectionModel() + product_types = { + index.data(PRODUCT_TYPE_ROLE) + for index in selection_model.selectedIndexes() + } + product_types.discard(None) + self._product_types_model.change_states(checkstate, product_types) + + def _on_enable_all(self): + self._product_types_model.change_state_for_all(True) + + def _on_disable_all(self): + self._product_types_model.change_state_for_all(False) + + def _on_context_menu(self, pos): + menu = QtWidgets.QMenu(self) + + # Add enable all action + action_check_all = QtWidgets.QAction(menu) + action_check_all.setText("Enable All") + action_check_all.triggered.connect(self._on_enable_all) + # Add disable all action + action_uncheck_all = QtWidgets.QAction(menu) + action_uncheck_all.setText("Disable All") + action_uncheck_all.triggered.connect(self._on_disable_all) + + menu.addAction(action_check_all) + menu.addAction(action_uncheck_all) + + # Get mouse position + global_pos = self.viewport().mapToGlobal(pos) + menu.exec_(global_pos) + + def event(self, event): + if event.type() == QtCore.QEvent.KeyPress: + if event.key() == QtCore.Qt.Key_Space: + self._change_selection_state(None) + return True + + if event.key() == QtCore.Qt.Key_Backspace: + self._change_selection_state(False) + return True + + if event.key() == QtCore.Qt.Key_Return: + self._change_selection_state(True) + return True + + return super(ProductTypesView, self).event(event) diff --git a/openpype/tools/ayon_loader/ui/products_delegates.py b/openpype/tools/ayon_loader/ui/products_delegates.py new file mode 100644 index 0000000000..6729468bfa --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_delegates.py @@ -0,0 +1,191 @@ +import numbers +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.utils.lib import format_version + +from .products_model import ( + PRODUCT_ID_ROLE, + VERSION_NAME_EDIT_ROLE, + VERSION_ID_ROLE, + PRODUCT_IN_SCENE_ROLE, +) + + +class VersionComboBox(QtWidgets.QComboBox): + value_changed = QtCore.Signal(str) + + def __init__(self, product_id, parent): + super(VersionComboBox, 
self).__init__(parent) + self._product_id = product_id + self._items_by_id = {} + + self._current_id = None + + self.currentIndexChanged.connect(self._on_index_change) + + def update_versions(self, version_items, current_version_id): + model = self.model() + root_item = model.invisibleRootItem() + version_items = list(reversed(version_items)) + version_ids = [ + version_item.version_id + for version_item in version_items + ] + if current_version_id not in version_ids and version_ids: + current_version_id = version_ids[0] + self._current_id = current_version_id + + to_remove = set(self._items_by_id.keys()) - set(version_ids) + for item_id in to_remove: + item = self._items_by_id.pop(item_id) + root_item.removeRow(item.row()) + + for idx, version_item in enumerate(version_items): + version_id = version_item.version_id + + item = self._items_by_id.get(version_id) + if item is None: + label = format_version( + abs(version_item.version), version_item.is_hero + ) + item = QtGui.QStandardItem(label) + item.setData(version_id, QtCore.Qt.UserRole) + self._items_by_id[version_id] = item + + if item.row() != idx: + root_item.insertRow(idx, item) + + index = version_ids.index(current_version_id) + if self.currentIndex() != index: + self.setCurrentIndex(index) + + def _on_index_change(self): + idx = self.currentIndex() + value = self.itemData(idx) + if value == self._current_id: + return + self._current_id = value + self.value_changed.emit(self._product_id) + + +class VersionDelegate(QtWidgets.QStyledItemDelegate): + """A delegate that display version integer formatted as version string.""" + + version_changed = QtCore.Signal() + + def __init__(self, *args, **kwargs): + super(VersionDelegate, self).__init__(*args, **kwargs) + self._editor_by_product_id = {} + + def displayText(self, value, locale): + if not isinstance(value, numbers.Integral): + return "N/A" + return format_version(abs(value), value < 0) + + def paint(self, painter, option, index): + fg_color = index.data(QtCore.Qt.ForegroundRole) + if fg_color: + if isinstance(fg_color, QtGui.QBrush): + fg_color = fg_color.color() + elif isinstance(fg_color, QtGui.QColor): + pass + else: + fg_color = None + + if not fg_color: + return super(VersionDelegate, self).paint(painter, option, index) + + if option.widget: + style = option.widget.style() + else: + style = QtWidgets.QApplication.style() + + style.drawControl( + style.CE_ItemViewItem, option, painter, option.widget + ) + + painter.save() + + text = self.displayText( + index.data(QtCore.Qt.DisplayRole), option.locale + ) + pen = painter.pen() + pen.setColor(fg_color) + painter.setPen(pen) + + text_rect = style.subElementRect(style.SE_ItemViewItemText, option) + text_margin = style.proxy().pixelMetric( + style.PM_FocusFrameHMargin, option, option.widget + ) + 1 + + painter.drawText( + text_rect.adjusted(text_margin, 0, - text_margin, 0), + option.displayAlignment, + text + ) + + painter.restore() + + def createEditor(self, parent, option, index): + product_id = index.data(PRODUCT_ID_ROLE) + if not product_id: + return + + editor = VersionComboBox(product_id, parent) + self._editor_by_product_id[product_id] = editor + editor.value_changed.connect(self._on_editor_change) + + return editor + + def _on_editor_change(self, product_id): + editor = self._editor_by_product_id[product_id] + + # Update model data + self.commitData.emit(editor) + # Display model data + self.version_changed.emit() + + def setEditorData(self, editor, index): + editor.clear() + + # Current value of the index + versions = 
index.data(VERSION_NAME_EDIT_ROLE) or [] + version_id = index.data(VERSION_ID_ROLE) + editor.update_versions(versions, version_id) + + def setModelData(self, editor, model, index): + """Apply the integer version back in the model""" + + version_id = editor.itemData(editor.currentIndex()) + model.setData(index, version_id, VERSION_NAME_EDIT_ROLE) + + +class LoadedInSceneDelegate(QtWidgets.QStyledItemDelegate): + """Delegate for Loaded in Scene state columns. + + Shows "Yes" or "No" for 1 or 0 values, or "N/A" for other values. + Colorizes green or dark grey based on values. + """ + + def __init__(self, *args, **kwargs): + super(LoadedInSceneDelegate, self).__init__(*args, **kwargs) + self._colors = { + 1: QtGui.QColor(80, 170, 80), + 0: QtGui.QColor(90, 90, 90), + } + self._default_color = QtGui.QColor(90, 90, 90) + + def displayText(self, value, locale): + if value == 0: + return "No" + elif value == 1: + return "Yes" + return "N/A" + + def initStyleOption(self, option, index): + super(LoadedInSceneDelegate, self).initStyleOption(option, index) + + # Colorize based on value + value = index.data(PRODUCT_IN_SCENE_ROLE) + color = self._colors.get(value, self._default_color) + option.palette.setBrush(QtGui.QPalette.Text, color) diff --git a/openpype/tools/ayon_loader/ui/products_model.py b/openpype/tools/ayon_loader/ui/products_model.py new file mode 100644 index 0000000000..741f15766b --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_model.py @@ -0,0 +1,590 @@ +import collections + +import qtawesome +from qtpy import QtGui, QtCore + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.widgets import get_qt_icon + +PRODUCTS_MODEL_SENDER_NAME = "qt_products_model" + +GROUP_TYPE_ROLE = QtCore.Qt.UserRole + 1 +MERGED_COLOR_ROLE = QtCore.Qt.UserRole + 2 +FOLDER_LABEL_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_ID_ROLE = QtCore.Qt.UserRole + 4 +PRODUCT_ID_ROLE = QtCore.Qt.UserRole + 5 +PRODUCT_NAME_ROLE = QtCore.Qt.UserRole + 6 +PRODUCT_TYPE_ROLE = QtCore.Qt.UserRole + 7 +PRODUCT_TYPE_ICON_ROLE = QtCore.Qt.UserRole + 8 +PRODUCT_IN_SCENE_ROLE = QtCore.Qt.UserRole + 9 +VERSION_ID_ROLE = QtCore.Qt.UserRole + 10 +VERSION_HERO_ROLE = QtCore.Qt.UserRole + 11 +VERSION_NAME_ROLE = QtCore.Qt.UserRole + 12 +VERSION_NAME_EDIT_ROLE = QtCore.Qt.UserRole + 13 +VERSION_PUBLISH_TIME_ROLE = QtCore.Qt.UserRole + 14 +VERSION_AUTHOR_ROLE = QtCore.Qt.UserRole + 15 +VERSION_FRAME_RANGE_ROLE = QtCore.Qt.UserRole + 16 +VERSION_DURATION_ROLE = QtCore.Qt.UserRole + 17 +VERSION_HANDLES_ROLE = QtCore.Qt.UserRole + 18 +VERSION_STEP_ROLE = QtCore.Qt.UserRole + 19 +VERSION_AVAILABLE_ROLE = QtCore.Qt.UserRole + 20 +VERSION_THUMBNAIL_ID_ROLE = QtCore.Qt.UserRole + 21 + + +class ProductsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + version_changed = QtCore.Signal() + column_labels = [ + "Product name", + "Product type", + "Folder", + "Version", + "Time", + "Author", + "Frames", + "Duration", + "Handles", + "Step", + "In scene", + "Availability", + ] + merged_items_colors = [ + ("#{0:02x}{1:02x}{2:02x}".format(*c), QtGui.QColor(*c)) + for c in [ + (55, 161, 222), # Light Blue + (231, 176, 0), # Yellow + (154, 13, 255), # Purple + (130, 184, 30), # Light Green + (211, 79, 63), # Light Red + (179, 181, 182), # Grey + (194, 57, 179), # Pink + (0, 120, 215), # Dark Blue + (0, 204, 106), # Dark Green + (247, 99, 12), # Orange + ] + ] + + version_col = column_labels.index("Version") + published_time_col = column_labels.index("Time") + folders_label_col = 
column_labels.index("Folder") + in_scene_col = column_labels.index("In scene") + + def __init__(self, controller): + super(ProductsModel, self).__init__() + self.setColumnCount(len(self.column_labels)) + for idx, label in enumerate(self.column_labels): + self.setHeaderData(idx, QtCore.Qt.Horizontal, label) + self._controller = controller + + # Variables to store 'QStandardItem' + self._items_by_id = {} + self._group_items_by_name = {} + self._merged_items_by_id = {} + + # product item objects (they have version information) + self._product_items_by_id = {} + self._grouping_enabled = True + self._reset_merge_color = False + self._color_iterator = self._color_iter() + self._group_icon = None + + self._last_project_name = None + self._last_folder_ids = [] + + def get_product_item_indexes(self): + return [ + item.index() + for item in self._items_by_id.values() + ] + + def get_product_item_by_id(self, product_id): + """ + + Args: + product_id (str): Product id. + + Returns: + Union[ProductItem, None]: Product item with version information. + """ + + return self._product_items_by_id.get(product_id) + + def set_enable_grouping(self, enable_grouping): + if enable_grouping is self._grouping_enabled: + return + self._grouping_enabled = enable_grouping + # Ignore change if groups are not available + self.refresh(self._last_project_name, self._last_folder_ids) + + def flags(self, index): + # Make the version column editable + if index.column() == self.version_col and index.data(PRODUCT_ID_ROLE): + return ( + QtCore.Qt.ItemIsEnabled + | QtCore.Qt.ItemIsSelectable + | QtCore.Qt.ItemIsEditable + ) + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) + return super(ProductsModel, self).flags(index) + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + if not index.isValid(): + return None + + col = index.column() + if col == 0: + return super(ProductsModel, self).data(index, role) + + if role == QtCore.Qt.DecorationRole: + if col == 1: + role = PRODUCT_TYPE_ICON_ROLE + else: + return None + + if ( + role == VERSION_NAME_EDIT_ROLE + or (role == QtCore.Qt.EditRole and col == self.version_col) + ): + index = self.index(index.row(), 0, index.parent()) + product_id = index.data(PRODUCT_ID_ROLE) + product_item = self._product_items_by_id.get(product_id) + if product_item is None: + return None + return list(product_item.version_items.values()) + + if role == QtCore.Qt.EditRole: + return None + + if role == QtCore.Qt.DisplayRole: + if not index.data(PRODUCT_ID_ROLE): + return None + if col == self.version_col: + role = VERSION_NAME_ROLE + elif col == 1: + role = PRODUCT_TYPE_ROLE + elif col == 2: + role = FOLDER_LABEL_ROLE + elif col == 4: + role = VERSION_PUBLISH_TIME_ROLE + elif col == 5: + role = VERSION_AUTHOR_ROLE + elif col == 6: + role = VERSION_FRAME_RANGE_ROLE + elif col == 7: + role = VERSION_DURATION_ROLE + elif col == 8: + role = VERSION_HANDLES_ROLE + elif col == 9: + role = VERSION_STEP_ROLE + elif col == 10: + role = PRODUCT_IN_SCENE_ROLE + elif col == 11: + role = VERSION_AVAILABLE_ROLE + else: + return None + + index = self.index(index.row(), 0, index.parent()) + + return super(ProductsModel, self).data(index, role) + + def setData(self, index, value, role=None): + if not index.isValid(): + return False + + if role is None: + role = QtCore.Qt.EditRole + + col = index.column() + if col == self.version_col and role == QtCore.Qt.EditRole: + role = VERSION_NAME_EDIT_ROLE + + if role == VERSION_NAME_EDIT_ROLE: + if col != 0: + index = 
self.index(index.row(), 0, index.parent()) + product_id = index.data(PRODUCT_ID_ROLE) + product_item = self._product_items_by_id[product_id] + final_version_item = None + for v_id, version_item in product_item.version_items.items(): + if v_id == value: + final_version_item = version_item + break + + if final_version_item is None: + return False + if index.data(VERSION_ID_ROLE) == final_version_item.version_id: + return True + item = self.itemFromIndex(index) + self._set_version_data_to_product_item(item, final_version_item) + self.version_changed.emit() + return True + return super(ProductsModel, self).setData(index, value, role) + + def _get_next_color(self): + return next(self._color_iterator) + + def _color_iter(self): + while True: + for color in self.merged_items_colors: + if self._reset_merge_color: + self._reset_merge_color = False + break + yield color + + def _clear(self): + root_item = self.invisibleRootItem() + root_item.removeRows(0, root_item.rowCount()) + + self._items_by_id = {} + self._group_items_by_name = {} + self._merged_items_by_id = {} + self._product_items_by_id = {} + self._reset_merge_color = True + + def _get_group_icon(self): + if self._group_icon is None: + self._group_icon = qtawesome.icon( + "fa.object-group", + color=get_default_entity_icon_color() + ) + return self._group_icon + + def _get_group_model_item(self, group_name): + model_item = self._group_items_by_name.get(group_name) + if model_item is None: + model_item = QtGui.QStandardItem(group_name) + model_item.setData( + self._get_group_icon(), QtCore.Qt.DecorationRole + ) + model_item.setData(0, GROUP_TYPE_ROLE) + model_item.setEditable(False) + model_item.setColumnCount(self.columnCount()) + self._group_items_by_name[group_name] = model_item + return model_item + + def _get_merged_model_item(self, path, count, hex_color): + model_item = self._merged_items_by_id.get(path) + if model_item is None: + model_item = QtGui.QStandardItem() + model_item.setData(1, GROUP_TYPE_ROLE) + model_item.setData(hex_color, MERGED_COLOR_ROLE) + model_item.setEditable(False) + model_item.setColumnCount(self.columnCount()) + self._merged_items_by_id[path] = model_item + label = "{} ({})".format(path, count) + model_item.setData(label, QtCore.Qt.DisplayRole) + return model_item + + def _set_version_data_to_product_item(self, model_item, version_item): + """ + + Args: + model_item (QtGui.QStandardItem): Item which should have values + from version item. + version_item (VersionItem): Item from entities model with + information about version. 
+ """ + + model_item.setData(version_item.version_id, VERSION_ID_ROLE) + model_item.setData(version_item.version, VERSION_NAME_ROLE) + model_item.setData(version_item.version_id, VERSION_ID_ROLE) + model_item.setData(version_item.is_hero, VERSION_HERO_ROLE) + model_item.setData( + version_item.published_time, VERSION_PUBLISH_TIME_ROLE + ) + model_item.setData(version_item.author, VERSION_AUTHOR_ROLE) + model_item.setData(version_item.frame_range, VERSION_FRAME_RANGE_ROLE) + model_item.setData(version_item.duration, VERSION_DURATION_ROLE) + model_item.setData(version_item.handles, VERSION_HANDLES_ROLE) + model_item.setData(version_item.step, VERSION_STEP_ROLE) + model_item.setData( + version_item.thumbnail_id, VERSION_THUMBNAIL_ID_ROLE) + + def _get_product_model_item(self, product_item): + model_item = self._items_by_id.get(product_item.product_id) + versions = list(product_item.version_items.values()) + versions.sort() + last_version = versions[-1] + if model_item is None: + product_id = product_item.product_id + model_item = QtGui.QStandardItem(product_item.product_name) + model_item.setEditable(False) + icon = get_qt_icon(product_item.product_icon) + product_type_icon = get_qt_icon(product_item.product_type_icon) + model_item.setColumnCount(self.columnCount()) + model_item.setData(icon, QtCore.Qt.DecorationRole) + model_item.setData(product_id, PRODUCT_ID_ROLE) + model_item.setData(product_item.product_name, PRODUCT_NAME_ROLE) + model_item.setData(product_item.product_type, PRODUCT_TYPE_ROLE) + model_item.setData(product_type_icon, PRODUCT_TYPE_ICON_ROLE) + model_item.setData(product_item.folder_id, FOLDER_ID_ROLE) + + self._product_items_by_id[product_id] = product_item + self._items_by_id[product_id] = model_item + + model_item.setData(product_item.folder_label, FOLDER_LABEL_ROLE) + in_scene = 1 if product_item.product_in_scene else 0 + model_item.setData(in_scene, PRODUCT_IN_SCENE_ROLE) + + self._set_version_data_to_product_item(model_item, last_version) + return model_item + + def get_last_project_name(self): + return self._last_project_name + + def refresh(self, project_name, folder_ids): + self._clear() + + self._last_project_name = project_name + self._last_folder_ids = folder_ids + + product_items = self._controller.get_product_items( + project_name, + folder_ids, + sender=PRODUCTS_MODEL_SENDER_NAME + ) + product_items_by_id = { + product_item.product_id: product_item + for product_item in product_items + } + + # Prepare product groups + product_name_matches_by_group = collections.defaultdict(dict) + for product_item in product_items_by_id.values(): + group_name = None + if self._grouping_enabled: + group_name = product_item.group_name + + product_name = product_item.product_name + group = product_name_matches_by_group[group_name] + if product_name not in group: + group[product_name] = [product_item] + continue + group[product_name].append(product_item) + + group_names = set(product_name_matches_by_group.keys()) + + root_item = self.invisibleRootItem() + new_root_items = [] + merged_paths = set() + for group_name in group_names: + key_parts = [] + if group_name: + key_parts.append(group_name) + + groups = product_name_matches_by_group[group_name] + merged_product_items = {} + top_items = [] + group_product_types = set() + for product_name, product_items in groups.items(): + group_product_types |= {p.product_type for p in product_items} + if len(product_items) == 1: + top_items.append(product_items[0]) + else: + path = "/".join(key_parts + [product_name]) + merged_paths.add(path) 
+ merged_product_items[path] = ( + product_name, + product_items, + ) + + parent_item = None + if group_name: + parent_item = self._get_group_model_item(group_name) + parent_item.setData( + "|".join(group_product_types), PRODUCT_TYPE_ROLE) + + new_items = [] + if parent_item is not None and parent_item.row() < 0: + new_root_items.append(parent_item) + + for product_item in top_items: + item = self._get_product_model_item(product_item) + new_items.append(item) + + for path_info in merged_product_items.values(): + product_name, product_items = path_info + (merged_color_hex, merged_color_qt) = self._get_next_color() + merged_color = qtawesome.icon( + "fa.circle", color=merged_color_qt) + merged_item = self._get_merged_model_item( + product_name, len(product_items), merged_color_hex) + merged_item.setData(merged_color, QtCore.Qt.DecorationRole) + new_items.append(merged_item) + + merged_product_types = set() + new_merged_items = [] + for product_item in product_items: + item = self._get_product_model_item(product_item) + new_merged_items.append(item) + merged_product_types.add(product_item.product_type) + + merged_item.setData( + "|".join(merged_product_types), PRODUCT_TYPE_ROLE) + if new_merged_items: + merged_item.appendRows(new_merged_items) + + if not new_items: + continue + + if parent_item is None: + new_root_items.extend(new_items) + else: + parent_item.appendRows(new_items) + + if new_root_items: + root_item.appendRows(new_root_items) + + self.refreshed.emit() + # --------------------------------- + # This implementation does not call '_clear' at the start + # but is more complex and probably slower + # --------------------------------- + # def _remove_items(self, items): + # if not items: + # return + # root_item = self.invisibleRootItem() + # for item in items: + # row = item.row() + # if row < 0: + # continue + # parent = item.parent() + # if parent is None: + # parent = root_item + # parent.removeRow(row) + # + # def _remove_group_items(self, group_names): + # group_items = [ + # self._group_items_by_name.pop(group_name) + # for group_name in group_names + # ] + # self._remove_items(group_items) + # + # def _remove_merged_items(self, paths): + # merged_items = [ + # self._merged_items_by_id.pop(path) + # for path in paths + # ] + # self._remove_items(merged_items) + # + # def _remove_product_items(self, product_ids): + # product_items = [] + # for product_id in product_ids: + # self._product_items_by_id.pop(product_id) + # product_items.append(self._items_by_id.pop(product_id)) + # self._remove_items(product_items) + # + # def _add_to_new_items(self, item, parent_item, new_items, root_item): + # if item.row() < 0: + # new_items.append(item) + # else: + # item_parent = item.parent() + # if item_parent is not parent_item: + # if item_parent is None: + # item_parent = root_item + # item_parent.takeRow(item.row()) + # new_items.append(item) + + # def refresh(self, project_name, folder_ids): + # product_items = self._controller.get_product_items( + # project_name, + # folder_ids, + # sender=PRODUCTS_MODEL_SENDER_NAME + # ) + # product_items_by_id = { + # product_item.product_id: product_item + # for product_item in product_items + # } + # # Remove product items that are not available + # product_ids_to_remove = ( + # set(self._items_by_id.keys()) - set(product_items_by_id.keys()) + # ) + # self._remove_product_items(product_ids_to_remove) + # + # # Prepare product groups + # product_name_matches_by_group = collections.defaultdict(dict) + # for product_item in 
product_items_by_id.values(): + # group_name = None + # if self._grouping_enabled: + # group_name = product_item.group_name + # + # product_name = product_item.product_name + # group = product_name_matches_by_group[group_name] + # if product_name not in group: + # group[product_name] = [product_item] + # continue + # group[product_name].append(product_item) + # + # group_names = set(product_name_matches_by_group.keys()) + # + # root_item = self.invisibleRootItem() + # new_root_items = [] + # merged_paths = set() + # for group_name in group_names: + # key_parts = [] + # if group_name: + # key_parts.append(group_name) + # + # groups = product_name_matches_by_group[group_name] + # merged_product_items = {} + # top_items = [] + # for product_name, product_items in groups.items(): + # if len(product_items) == 1: + # top_items.append(product_items[0]) + # else: + # path = "/".join(key_parts + [product_name]) + # merged_paths.add(path) + # merged_product_items[path] = product_items + # + # parent_item = None + # if group_name: + # parent_item = self._get_group_model_item(group_name) + # + # new_items = [] + # if parent_item is not None and parent_item.row() < 0: + # new_root_items.append(parent_item) + # + # for product_item in top_items: + # item = self._get_product_model_item(product_item) + # self._add_to_new_items( + # item, parent_item, new_items, root_item + # ) + # + # for path, product_items in merged_product_items.items(): + # merged_item = self._get_merged_model_item(path) + # self._add_to_new_items( + # merged_item, parent_item, new_items, root_item + # ) + # + # new_merged_items = [] + # for product_item in product_items: + # item = self._get_product_model_item(product_item) + # self._add_to_new_items( + # item, merged_item, new_merged_items, root_item + # ) + # + # if new_merged_items: + # merged_item.appendRows(new_merged_items) + # + # if not new_items: + # continue + # + # if parent_item is not None: + # parent_item.appendRows(new_items) + # continue + # + # new_root_items.extend(new_items) + # + # root_item.appendRows(new_root_items) + # + # merged_item_ids_to_remove = ( + # set(self._merged_items_by_id.keys()) - merged_paths + # ) + # group_names_to_remove = ( + # set(self._group_items_by_name.keys()) - set(group_names) + # ) + # self._remove_merged_items(merged_item_ids_to_remove) + # self._remove_group_items(group_names_to_remove) diff --git a/openpype/tools/ayon_loader/ui/products_widget.py b/openpype/tools/ayon_loader/ui/products_widget.py new file mode 100644 index 0000000000..2d4959dc19 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_widget.py @@ -0,0 +1,400 @@ +import collections + +from qtpy import QtWidgets, QtCore + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + DeselectableTreeView, +) +from openpype.tools.utils.delegates import PrettyTimeDelegate + +from .products_model import ( + ProductsModel, + PRODUCTS_MODEL_SENDER_NAME, + PRODUCT_TYPE_ROLE, + GROUP_TYPE_ROLE, + MERGED_COLOR_ROLE, + FOLDER_ID_ROLE, + PRODUCT_ID_ROLE, + VERSION_ID_ROLE, + VERSION_THUMBNAIL_ID_ROLE, +) +from .products_delegates import VersionDelegate, LoadedInSceneDelegate +from .actions_utils import show_actions_menu + + +class ProductsProxyModel(RecursiveSortFilterProxyModel): + def __init__(self, parent=None): + super(ProductsProxyModel, self).__init__(parent) + + self._product_type_filters = {} + self._ascending_sort = True + + def set_product_type_filters(self, product_type_filters): + self._product_type_filters = product_type_filters + 
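# Re-evaluate 'filterAcceptsRow' so the new filters apply right away;
+ # e.g. {"render": True, "model": False} hides "model" rows, while
+ # product types missing from the dict stay visible. +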
self.invalidateFilter() + + def filterAcceptsRow(self, source_row, source_parent): + source_model = self.sourceModel() + index = source_model.index(source_row, 0, source_parent) + product_types_s = source_model.data(index, PRODUCT_TYPE_ROLE) + product_types = [] + if product_types_s: + product_types = product_types_s.split("|") + + for product_type in product_types: + if not self._product_type_filters.get(product_type, True): + return False + return super(ProductsProxyModel, self).filterAcceptsRow( + source_row, source_parent) + + def lessThan(self, left, right): + l_model = left.model() + r_model = right.model() + left_group_type = l_model.data(left, GROUP_TYPE_ROLE) + right_group_type = r_model.data(right, GROUP_TYPE_ROLE) + # Groups are always on top, merged product types are below + # and items without group at the bottom + # QUESTION Do we need to do it this way? + if left_group_type != right_group_type: + if left_group_type is None: + output = False + elif right_group_type is None: + output = True + else: + output = left_group_type < right_group_type + if not self._ascending_sort: + output = not output + return output + return super(ProductsProxyModel, self).lessThan(left, right) + + def sort(self, column, order=None): + if order is None: + order = QtCore.Qt.AscendingOrder + self._ascending_sort = order == QtCore.Qt.AscendingOrder + super(ProductsProxyModel, self).sort(column, order) + + +class ProductsWidget(QtWidgets.QWidget): + refreshed = QtCore.Signal() + merged_products_selection_changed = QtCore.Signal() + selection_changed = QtCore.Signal() + version_changed = QtCore.Signal() + default_widths = ( + 200, # Product name + 90, # Product type + 130, # Folder label + 60, # Version + 125, # Time + 75, # Author + 75, # Frames + 60, # Duration + 55, # Handles + 10, # Step + 25, # Loaded in scene + 65, # Site info (maybe?) 
+ ) + + def __init__(self, controller, parent): + super(ProductsWidget, self).__init__(parent) + + self._controller = controller + + products_view = DeselectableTreeView(self) + # TODO - define custom object name in style + products_view.setObjectName("SubsetView") + products_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + products_view.setAllColumnsShowFocus(True) + # TODO - add context menu + products_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + products_view.setSortingEnabled(True) + # Sort by product type + products_view.sortByColumn(1, QtCore.Qt.AscendingOrder) + products_view.setAlternatingRowColors(True) + + products_model = ProductsModel(controller) + products_proxy_model = ProductsProxyModel() + products_proxy_model.setSourceModel(products_model) + + products_view.setModel(products_proxy_model) + + for idx, width in enumerate(self.default_widths): + products_view.setColumnWidth(idx, width) + + version_delegate = VersionDelegate() + products_view.setItemDelegateForColumn( + products_model.version_col, version_delegate) + + time_delegate = PrettyTimeDelegate() + products_view.setItemDelegateForColumn( + products_model.published_time_col, time_delegate) + + in_scene_delegate = LoadedInSceneDelegate() + products_view.setItemDelegateForColumn( + products_model.in_scene_col, in_scene_delegate) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(products_view, 1) + + products_proxy_model.rowsInserted.connect(self._on_rows_inserted) + products_proxy_model.rowsMoved.connect(self._on_rows_moved) + products_model.refreshed.connect(self._on_refresh) + products_view.customContextMenuRequested.connect( + self._on_context_menu) + products_view.selectionModel().selectionChanged.connect( + self._on_selection_change) + products_model.version_changed.connect(self._on_version_change) + + controller.register_event_callback( + "selection.folders.changed", + self._on_folders_selection_change, + ) + controller.register_event_callback( + "products.refresh.finished", + self._on_products_refresh_finished + ) + controller.register_event_callback( + "products.group.changed", + self._on_group_changed + ) + + self._products_view = products_view + self._products_model = products_model + self._products_proxy_model = products_proxy_model + + self._version_delegate = version_delegate + self._time_delegate = time_delegate + + self._selected_project_name = None + self._selected_folder_ids = set() + + self._selected_merged_products = [] + self._selected_versions_info = [] + + # Set initial state of widget + # - Hide folders column + self._update_folders_label_visible() + # - Hide in scene column if is not supported (this won't change) + products_view.setColumnHidden( + products_model.in_scene_col, + not controller.is_loaded_products_supported() + ) + + def set_name_filter(self, name): + """Set filter of product name. + + Args: + name (str): The string filter. + """ + + self._products_proxy_model.setFilterFixedString(name) + + def set_product_type_filter(self, product_type_filters): + """ + + Args: + product_type_filters (dict[str, bool]): The filter of product + types. 
+ """ + + self._products_proxy_model.set_product_type_filters( + product_type_filters + ) + + def set_enable_grouping(self, enable_grouping): + self._products_model.set_enable_grouping(enable_grouping) + + def get_selected_merged_products(self): + return self._selected_merged_products + + def get_selected_version_info(self): + return self._selected_versions_info + + def refresh(self): + self._refresh_model() + + def _fill_version_editor(self): + model = self._products_proxy_model + index_queue = collections.deque() + for row in range(model.rowCount()): + index_queue.append((row, None)) + + version_col = self._products_model.version_col + while index_queue: + (row, parent_index) = index_queue.popleft() + args = [row, 0] + if parent_index is not None: + args.append(parent_index) + index = model.index(*args) + rows = model.rowCount(index) + for row in range(rows): + index_queue.append((row, index)) + + product_id = model.data(index, PRODUCT_ID_ROLE) + if product_id is not None: + args[1] = version_col + v_index = model.index(*args) + self._products_view.openPersistentEditor(v_index) + + def _on_refresh(self): + self._fill_version_editor() + self.refreshed.emit() + + def _on_rows_inserted(self): + self._fill_version_editor() + + def _on_rows_moved(self): + self._fill_version_editor() + + def _refresh_model(self): + self._products_model.refresh( + self._selected_project_name, + self._selected_folder_ids + ) + + def _on_context_menu(self, point): + selection_model = self._products_view.selectionModel() + model = self._products_view.model() + project_name = self._products_model.get_last_project_name() + + version_ids = set() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + while indexes_queue: + index = indexes_queue.popleft() + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + version_id = model.data(index, VERSION_ID_ROLE) + if version_id is not None: + version_ids.add(version_id) + + action_items = self._controller.get_versions_action_items( + project_name, version_ids) + + # Prepare global point where to show the menu + global_point = self._products_view.mapToGlobal(point) + + result = show_actions_menu( + action_items, + global_point, + len(version_ids) == 1, + self + ) + action_item, options = result + if action_item is None or options is None: + return + + self._controller.trigger_action_item( + action_item.identifier, + options, + action_item.project_name, + version_ids=action_item.version_ids, + representation_ids=action_item.representation_ids, + ) + + def _on_selection_change(self): + selected_merged_products = [] + selection_model = self._products_view.selectionModel() + model = self._products_view.model() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + + # Helper for 'version_items' to avoid duplicated items + all_product_ids = set() + selected_version_ids = set() + # Version items contains information about selected version items + selected_versions_info = [] + while indexes_queue: + index = indexes_queue.popleft() + if index.column() != 0: + continue + + group_type = model.data(index, GROUP_TYPE_ROLE) + if group_type is None: + product_id = model.data(index, PRODUCT_ID_ROLE) + # Skip duplicates - when group and item are selected the item + # would be in the loop multiple times + if product_id in all_product_ids: + continue + + all_product_ids.add(product_id) + + version_id = model.data(index, VERSION_ID_ROLE) + 
selected_version_ids.add(version_id) + + thumbnail_id = model.data(index, VERSION_THUMBNAIL_ID_ROLE) + selected_versions_info.append({ + "folder_id": model.data(index, FOLDER_ID_ROLE), + "product_id": product_id, + "version_id": version_id, + "thumbnail_id": thumbnail_id, + }) + continue + + if group_type == 0: + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + continue + + if group_type != 1: + continue + + item_folder_ids = set() + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + + folder_id = model.data(child_index, FOLDER_ID_ROLE) + item_folder_ids.add(folder_id) + + if not item_folder_ids: + continue + + hex_color = model.data(index, MERGED_COLOR_ROLE) + item_data = { + "color": hex_color, + "folder_ids": item_folder_ids + } + selected_merged_products.append(item_data) + + prev_selected_merged_products = self._selected_merged_products + self._selected_merged_products = selected_merged_products + self._selected_versions_info = selected_versions_info + + if selected_merged_products != prev_selected_merged_products: + self.merged_products_selection_changed.emit() + self.selection_changed.emit() + self._controller.set_selected_versions(selected_version_ids) + + def _on_version_change(self): + self._on_selection_change() + + def _on_folders_selection_change(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_ids = event["folder_ids"] + self._refresh_model() + self._update_folders_label_visible() + + def _update_folders_label_visible(self): + folders_label_hidden = len(self._selected_folder_ids) <= 1 + self._products_view.setColumnHidden( + self._products_model.folders_label_col, + folders_label_hidden + ) + + def _on_products_refresh_finished(self, event): + if event["sender"] != PRODUCTS_MODEL_SENDER_NAME: + self._refresh_model() + + def _on_group_changed(self, event): + if event["project_name"] != self._selected_project_name: + return + folder_ids = event["folder_ids"] + if not set(folder_ids).intersection(set(self._selected_folder_ids)): + return + self.refresh() diff --git a/openpype/tools/ayon_loader/ui/repres_widget.py b/openpype/tools/ayon_loader/ui/repres_widget.py new file mode 100644 index 0000000000..7de582e629 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/repres_widget.py @@ -0,0 +1,338 @@ +import collections + +from qtpy import QtWidgets, QtGui, QtCore +import qtawesome + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.widgets import get_qt_icon +from openpype.tools.utils import DeselectableTreeView + +from .actions_utils import show_actions_menu + +REPRESENTAION_NAME_ROLE = QtCore.Qt.UserRole + 1 +REPRESENTATION_ID_ROLE = QtCore.Qt.UserRole + 2 +PRODUCT_NAME_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_LABEL_ROLE = QtCore.Qt.UserRole + 4 +GROUP_TYPE_ROLE = QtCore.Qt.UserRole + 5 + + +class RepresentationsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + colums_info = [ + ("Name", 120), + ("Product name", 125), + ("Folder", 125), + # ("Active site", 85), + # ("Remote site", 85) + ] + column_labels = [label for label, _ in colums_info] + column_widths = [width for _, width in colums_info] + folder_column = column_labels.index("Product name") + + def __init__(self, controller): + super(RepresentationsModel, self).__init__() + + self.setColumnCount(len(self.column_labels)) + + for idx, label in enumerate(self.column_labels): + 
self.setHeaderData(idx, QtCore.Qt.Horizontal, label) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + controller.register_event_callback( + "selection.versions.changed", + self._on_version_change + ) + self._selected_project_name = None + self._selected_version_ids = None + + self._group_icon = None + + self._items_by_id = {} + self._groups_items_by_name = {} + + self._controller = controller + + def refresh(self): + repre_items = self._controller.get_representation_items( + self._selected_project_name, self._selected_version_ids + ) + self._fill_items(repre_items) + self.refreshed.emit() + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + col = index.column() + if col != 0: + if role == QtCore.Qt.DecorationRole: + return None + + if role == QtCore.Qt.DisplayRole: + if col == 1: + role = PRODUCT_NAME_ROLE + elif col == 2: + role = FOLDER_LABEL_ROLE + index = self.index(index.row(), 0, index.parent()) + return super(RepresentationsModel, self).data(index, role) + + def setData(self, index, value, role=None): + if role is None: + role = QtCore.Qt.EditRole + return super(RepresentationsModel, self).setData(index, value, role) + + def _clear_items(self): + self._items_by_id = {} + root_item = self.invisibleRootItem() + root_item.removeRows(0, root_item.rowCount()) + + def _get_repre_item(self, repre_item): + repre_id = repre_item.representation_id + repre_name = repre_item.representation_name + repre_icon = repre_item.representation_icon + item = self._items_by_id.get(repre_id) + is_new_item = False + if item is None: + is_new_item = True + item = QtGui.QStandardItem() + self._items_by_id[repre_id] = item + item.setColumnCount(self.columnCount()) + item.setEditable(False) + + icon = get_qt_icon(repre_icon) + item.setData(repre_name, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(repre_name, REPRESENTAION_NAME_ROLE) + item.setData(repre_id, REPRESENTATION_ID_ROLE) + item.setData(repre_item.product_name, PRODUCT_NAME_ROLE) + item.setData(repre_item.folder_label, FOLDER_LABEL_ROLE) + return is_new_item, item + + def _get_group_icon(self): + if self._group_icon is None: + self._group_icon = qtawesome.icon( + "fa.folder", + color=get_default_entity_icon_color() + ) + return self._group_icon + + def _get_group_item(self, repre_name): + item = self._groups_items_by_name.get(repre_name) + if item is not None: + return False, item + + # TODO add color + item = QtGui.QStandardItem() + item.setColumnCount(self.columnCount()) + item.setData(repre_name, QtCore.Qt.DisplayRole) + item.setData(self._get_group_icon(), QtCore.Qt.DecorationRole) + item.setData(0, GROUP_TYPE_ROLE) + item.setEditable(False) + self._groups_items_by_name[repre_name] = item + return True, item + + def _fill_items(self, repre_items): + items_to_remove = set(self._items_by_id.keys()) + repre_items_by_name = collections.defaultdict(list) + for repre_item in repre_items: + items_to_remove.discard(repre_item.representation_id) + repre_name = repre_item.representation_name + repre_items_by_name[repre_name].append(repre_item) + + root_item = self.invisibleRootItem() + for repre_id in items_to_remove: + item = self._items_by_id.pop(repre_id) + parent_item = item.parent() + if parent_item is None: + parent_item = root_item + parent_item.removeRow(item.row()) + + group_names = set() + new_root_items = [] + for repre_name, repre_name_items in repre_items_by_name.items(): + group_item = None + parent_is_group = 
False + if len(repre_name_items) > 1: + group_names.add(repre_name) + is_new_group, group_item = self._get_group_item(repre_name) + if is_new_group: + new_root_items.append(group_item) + parent_is_group = True + + new_group_items = [] + for repre_item in repre_name_items: + is_new_item, item = self._get_repre_item(repre_item) + item_parent = item.parent() + if item_parent is None: + item_parent = root_item + + if not is_new_item: + if parent_is_group: + if item_parent is group_item: + continue + elif item_parent is root_item: + continue + item_parent.takeRow(item.row()) + is_new_item = True + + if is_new_item: + new_group_items.append(item) + + if not new_group_items: + continue + + if group_item is not None: + group_item.appendRows(new_group_items) + else: + new_root_items.extend(new_group_items) + + if new_root_items: + root_item.appendRows(new_root_items) + + for group_name in set(self._groups_items_by_name) - group_names: + item = self._groups_items_by_name.pop(group_name) + parent_item = item.parent() + if parent_item is None: + parent_item = root_item + parent_item.removeRow(item.row()) + + def _on_project_change(self, event): + self._selected_project_name = event["project_name"] + + def _on_version_change(self, event): + self._selected_version_ids = event["version_ids"] + self.refresh() + + +class RepresentationsWidget(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(RepresentationsWidget, self).__init__(parent) + + repre_view = DeselectableTreeView(self) + repre_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + repre_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + repre_view.setSortingEnabled(True) + repre_view.setAlternatingRowColors(True) + + repre_model = RepresentationsModel(controller) + repre_proxy_model = QtCore.QSortFilterProxyModel() + repre_proxy_model.setSourceModel(repre_model) + repre_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + repre_view.setModel(repre_proxy_model) + + for idx, width in enumerate(repre_model.column_widths): + repre_view.setColumnWidth(idx, width) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(repre_view, 1) + + repre_view.customContextMenuRequested.connect( + self._on_context_menu) + repre_view.selectionModel().selectionChanged.connect( + self._on_selection_change) + repre_model.refreshed.connect(self._on_model_refresh) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + controller.register_event_callback( + "selection.folders.changed", + self._on_folder_change + ) + + self._controller = controller + self._selected_project_name = None + self._selected_multiple_folders = None + + self._repre_view = repre_view + self._repre_model = repre_model + self._repre_proxy_model = repre_proxy_model + + self._set_multiple_folders_selected(False) + + def refresh(self): + self._repre_model.refresh() + + def _on_folder_change(self, event): + self._set_multiple_folders_selected(len(event["folder_ids"]) > 1) + + def _on_project_change(self, event): + self._selected_project_name = event["project_name"] + + def _set_multiple_folders_selected(self, selected_multiple_folders): + if selected_multiple_folders == self._selected_multiple_folders: + return + self._selected_multiple_folders = selected_multiple_folders + self._repre_view.setColumnHidden( + self._repre_model.folder_column, + not self._selected_multiple_folders + ) + + def _on_model_refresh(self): + 
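# Re-apply sorting by the first column (representation name), since a
+ # refresh of the source model can reset the row order. +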
self._repre_proxy_model.sort(0) + + def _get_selected_repre_indexes(self): + selection_model = self._repre_view.selectionModel() + model = self._repre_view.model() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + + selected_indexes = [] + while indexes_queue: + index = indexes_queue.popleft() + if index.column() != 0: + continue + + group_type = model.data(index, GROUP_TYPE_ROLE) + if group_type is None: + selected_indexes.append(index) + + elif group_type == 0: + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + + return selected_indexes + + def _get_selected_repre_ids(self): + repre_ids = { + index.data(REPRESENTATION_ID_ROLE) + for index in self._get_selected_repre_indexes() + } + repre_ids.discard(None) + return repre_ids + + def _on_selection_change(self): + selected_repre_ids = self._get_selected_repre_ids() + self._controller.set_selected_representations(selected_repre_ids) + + def _on_context_menu(self, point): + repre_ids = self._get_selected_repre_ids() + action_items = self._controller.get_representations_action_items( + self._selected_project_name, repre_ids + ) + global_point = self._repre_view.mapToGlobal(point) + result = show_actions_menu( + action_items, + global_point, + len(repre_ids) == 1, + self + ) + action_item, options = result + if action_item is None or options is None: + return + + self._controller.trigger_action_item( + action_item.identifier, + options, + action_item.project_name, + version_ids=action_item.version_ids, + representation_ids=action_item.representation_ids, + ) diff --git a/openpype/tools/ayon_loader/ui/window.py b/openpype/tools/ayon_loader/ui/window.py new file mode 100644 index 0000000000..a6d40d52e7 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/window.py @@ -0,0 +1,511 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.resources import get_openpype_icon_filepath +from openpype.style import load_stylesheet +from openpype.tools.utils import ( + PlaceholderLineEdit, + ErrorMessageBox, + ThumbnailPainterWidget, + RefreshButton, + GoToCurrentButton, +) +from openpype.tools.utils.lib import center_window +from openpype.tools.ayon_utils.widgets import ProjectsCombobox +from openpype.tools.ayon_loader.control import LoaderController + +from .folders_widget import LoaderFoldersWidget +from .products_widget import ProductsWidget +from .product_types_widget import ProductTypesView +from .product_group_dialog import ProductGroupDialog +from .info_widget import InfoWidget +from .repres_widget import RepresentationsWidget + + +class LoadErrorMessageBox(ErrorMessageBox): + def __init__(self, messages, parent=None): + self._messages = messages + super(LoadErrorMessageBox, self).__init__("Loading failed", parent) + + def _create_top_widget(self, parent_widget): + label_widget = QtWidgets.QLabel(parent_widget) + label_widget.setText( + "Failed to load items" + ) + return label_widget + + def _get_report_data(self): + report_data = [] + for exc_msg, tb_text, repre, product, version in self._messages: + report_message = ( + "During load error happened on Product: \"{product}\"" + " Representation: \"{repre}\" Version: {version}" + "\n\nError message: {message}" + ).format( + product=product, + repre=repre, + version=version, + message=exc_msg + ) + if tb_text: + report_message += "\n\n{}".format(tb_text) + report_data.append(report_message) + return report_data + + def _create_content(self, content_layout): + item_name_template 
= ( + "Product: {}
" + "Version: {}
" + "Representation: {}
" + ) + exc_msg_template = "{}" + + for exc_msg, tb_text, repre, product, version in self._messages: + line = self._create_line() + content_layout.addWidget(line) + + item_name = item_name_template.format(product, version, repre) + item_name_widget = QtWidgets.QLabel( + item_name.replace("\n", "
"), self + ) + item_name_widget.setWordWrap(True) + content_layout.addWidget(item_name_widget) + + exc_msg = exc_msg_template.format(exc_msg.replace("\n", "
")) + message_label_widget = QtWidgets.QLabel(exc_msg, self) + message_label_widget.setWordWrap(True) + content_layout.addWidget(message_label_widget) + + if tb_text: + line = self._create_line() + tb_widget = self._create_traceback_widget(tb_text, self) + content_layout.addWidget(line) + content_layout.addWidget(tb_widget) + + +class RefreshHandler: + def __init__(self): + self._project_refreshed = False + self._folders_refreshed = False + self._products_refreshed = False + + @property + def project_refreshed(self): + return self._products_refreshed + + @property + def folders_refreshed(self): + return self._folders_refreshed + + @property + def products_refreshed(self): + return self._products_refreshed + + def reset(self): + self._project_refreshed = False + self._folders_refreshed = False + self._products_refreshed = False + + def set_project_refreshed(self): + self._project_refreshed = True + + def set_folders_refreshed(self): + self._folders_refreshed = True + + def set_products_refreshed(self): + self._products_refreshed = True + + +class LoaderWindow(QtWidgets.QWidget): + def __init__(self, controller=None, parent=None): + super(LoaderWindow, self).__init__(parent) + + icon = QtGui.QIcon(get_openpype_icon_filepath()) + self.setWindowIcon(icon) + self.setWindowTitle("AYON Loader") + self.setFocusPolicy(QtCore.Qt.StrongFocus) + self.setAttribute(QtCore.Qt.WA_DeleteOnClose, False) + self.setWindowFlags(self.windowFlags() | QtCore.Qt.Window) + + if controller is None: + controller = LoaderController() + + main_splitter = QtWidgets.QSplitter(self) + + context_splitter = QtWidgets.QSplitter(main_splitter) + context_splitter.setOrientation(QtCore.Qt.Vertical) + + # Context selection widget + context_widget = QtWidgets.QWidget(context_splitter) + + context_top_widget = QtWidgets.QWidget(context_widget) + projects_combobox = ProjectsCombobox( + controller, + context_top_widget, + handle_expected_selection=True + ) + projects_combobox.set_select_item_visible(True) + projects_combobox.set_libraries_separator_visible(True) + projects_combobox.set_standard_filter_enabled( + controller.is_standard_projects_filter_enabled() + ) + + go_to_current_btn = GoToCurrentButton(context_top_widget) + refresh_btn = RefreshButton(context_top_widget) + + context_top_layout = QtWidgets.QHBoxLayout(context_top_widget) + context_top_layout.setContentsMargins(0, 0, 0, 0,) + context_top_layout.addWidget(projects_combobox, 1) + context_top_layout.addWidget(go_to_current_btn, 0) + context_top_layout.addWidget(refresh_btn, 0) + + folders_filter_input = PlaceholderLineEdit(context_widget) + folders_filter_input.setPlaceholderText("Folder name filter...") + + folders_widget = LoaderFoldersWidget(controller, context_widget) + + product_types_widget = ProductTypesView(controller, context_splitter) + + context_layout = QtWidgets.QVBoxLayout(context_widget) + context_layout.setContentsMargins(0, 0, 0, 0) + context_layout.addWidget(context_top_widget, 0) + context_layout.addWidget(folders_filter_input, 0) + context_layout.addWidget(folders_widget, 1) + + context_splitter.addWidget(context_widget) + context_splitter.addWidget(product_types_widget) + context_splitter.setStretchFactor(0, 65) + context_splitter.setStretchFactor(1, 35) + + # Product + version selection item + products_wrap_widget = QtWidgets.QWidget(main_splitter) + + products_inputs_widget = QtWidgets.QWidget(products_wrap_widget) + + products_filter_input = PlaceholderLineEdit(products_inputs_widget) + products_filter_input.setPlaceholderText("Product name 
filter...") + product_group_checkbox = QtWidgets.QCheckBox( + "Enable grouping", products_inputs_widget) + product_group_checkbox.setChecked(True) + + products_widget = ProductsWidget(controller, products_wrap_widget) + + products_inputs_layout = QtWidgets.QHBoxLayout(products_inputs_widget) + products_inputs_layout.setContentsMargins(0, 0, 0, 0) + products_inputs_layout.addWidget(products_filter_input, 1) + products_inputs_layout.addWidget(product_group_checkbox, 0) + + products_wrap_layout = QtWidgets.QVBoxLayout(products_wrap_widget) + products_wrap_layout.setContentsMargins(0, 0, 0, 0) + products_wrap_layout.addWidget(products_inputs_widget, 0) + products_wrap_layout.addWidget(products_widget, 1) + + right_panel_splitter = QtWidgets.QSplitter(main_splitter) + right_panel_splitter.setOrientation(QtCore.Qt.Vertical) + + thumbnails_widget = ThumbnailPainterWidget(right_panel_splitter) + thumbnails_widget.set_use_checkboard(False) + + info_widget = InfoWidget(controller, right_panel_splitter) + + repre_widget = RepresentationsWidget(controller, right_panel_splitter) + + right_panel_splitter.addWidget(thumbnails_widget) + right_panel_splitter.addWidget(info_widget) + right_panel_splitter.addWidget(repre_widget) + + right_panel_splitter.setStretchFactor(0, 1) + right_panel_splitter.setStretchFactor(1, 1) + right_panel_splitter.setStretchFactor(2, 2) + + main_splitter.addWidget(context_splitter) + main_splitter.addWidget(products_wrap_widget) + main_splitter.addWidget(right_panel_splitter) + + main_splitter.setStretchFactor(0, 4) + main_splitter.setStretchFactor(1, 6) + main_splitter.setStretchFactor(2, 1) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.addWidget(main_splitter) + + show_timer = QtCore.QTimer() + show_timer.setInterval(1) + + show_timer.timeout.connect(self._on_show_timer) + + projects_combobox.refreshed.connect(self._on_projects_refresh) + folders_widget.refreshed.connect(self._on_folders_refresh) + products_widget.refreshed.connect(self._on_products_refresh) + folders_filter_input.textChanged.connect( + self._on_folder_filter_change + ) + product_types_widget.filter_changed.connect( + self._on_product_type_filter_change + ) + products_filter_input.textChanged.connect( + self._on_product_filter_change + ) + product_group_checkbox.stateChanged.connect( + self._on_product_group_change + ) + products_widget.merged_products_selection_changed.connect( + self._on_merged_products_selection_change + ) + products_widget.selection_changed.connect( + self._on_products_selection_change + ) + go_to_current_btn.clicked.connect( + self._on_go_to_current_context_click + ) + refresh_btn.clicked.connect( + self._on_refresh_click + ) + controller.register_event_callback( + "load.finished", + self._on_load_finished, + ) + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_changed, + ) + controller.register_event_callback( + "selection.folders.changed", + self._on_folders_selection_changed, + ) + controller.register_event_callback( + "selection.versions.changed", + self._on_versions_selection_changed, + ) + controller.register_event_callback( + "controller.reset.started", + self._on_controller_reset_start, + ) + controller.register_event_callback( + "controller.reset.finished", + self._on_controller_reset_finish, + ) + + self._group_dialog = ProductGroupDialog(controller, self) + + self._main_splitter = main_splitter + + self._go_to_current_btn = go_to_current_btn + self._refresh_btn = refresh_btn + self._projects_combobox = 
projects_combobox + + self._folders_filter_input = folders_filter_input + self._folders_widget = folders_widget + + self._product_types_widget = product_types_widget + + self._products_filter_input = products_filter_input + self._product_group_checkbox = product_group_checkbox + self._products_widget = products_widget + + self._right_panel_splitter = right_panel_splitter + self._thumbnails_widget = thumbnails_widget + self._info_widget = info_widget + self._repre_widget = repre_widget + + self._controller = controller + self._refresh_handler = RefreshHandler() + self._first_show = True + self._reset_on_show = True + self._show_counter = 0 + self._show_timer = show_timer + self._selected_project_name = None + self._selected_folder_ids = set() + self._selected_version_ids = set() + + self._products_widget.set_enable_grouping( + self._product_group_checkbox.isChecked() + ) + + def refresh(self): + self._controller.reset() + + def showEvent(self, event): + super(LoaderWindow, self).showEvent(event) + + if self._first_show: + self._on_first_show() + + self._show_timer.start() + + def keyPressEvent(self, event): + modifiers = event.modifiers() + ctrl_pressed = QtCore.Qt.ControlModifier & modifiers + + # Grouping products on pressing Ctrl + G + if ( + ctrl_pressed + and event.key() == QtCore.Qt.Key_G + and not event.isAutoRepeat() + ): + self._show_group_dialog() + event.setAccepted(True) + return + + super(LoaderWindow, self).keyPressEvent(event) + + def _on_first_show(self): + self._first_show = False + # width, height = 1800, 900 + width, height = 1500, 750 + + self.resize(width, height) + + mid_width = int(width / 1.8) + sides_width = int((width - mid_width) * 0.5) + self._main_splitter.setSizes( + [sides_width, mid_width, sides_width] + ) + + thumbnail_height = int(height / 3.6) + info_height = int((height - thumbnail_height) * 0.5) + self._right_panel_splitter.setSizes( + [thumbnail_height, info_height, info_height] + ) + self.setStyleSheet(load_stylesheet()) + center_window(self) + + def _on_show_timer(self): + if self._show_counter < 2: + self._show_counter += 1 + return + + self._show_counter = 0 + self._show_timer.stop() + + if self._reset_on_show: + self._reset_on_show = False + self._controller.reset() + + def _show_group_dialog(self): + project_name = self._projects_combobox.get_selected_project_name() + if not project_name: + return + + product_ids = { + i["product_id"] + for i in self._products_widget.get_selected_version_info() + } + if not product_ids: + return + + self._group_dialog.set_product_ids(project_name, product_ids) + self._group_dialog.show() + + def _on_folder_filter_change(self, text): + self._folders_widget.set_name_filter(text) + + def _on_product_group_change(self): + self._products_widget.set_enable_grouping( + self._product_group_checkbox.isChecked() + ) + + def _on_product_filter_change(self, text): + self._products_widget.set_name_filter(text) + + def _on_product_type_filter_change(self): + self._products_widget.set_product_type_filter( + self._product_types_widget.get_filter_info() + ) + + def _on_merged_products_selection_change(self): + items = self._products_widget.get_selected_merged_products() + self._folders_widget.set_merged_products_selection(items) + + def _on_products_selection_change(self): + items = self._products_widget.get_selected_version_info() + self._info_widget.set_selected_version_info( + self._projects_combobox.get_selected_project_name(), + items + ) + + def _on_go_to_current_context_click(self): + context = 
self._controller.get_current_context() + self._controller.set_expected_selection( + context["project_name"], + context["folder_id"], + ) + + def _on_refresh_click(self): + self._controller.reset() + + def _on_controller_reset_start(self): + self._refresh_handler.reset() + + def _on_controller_reset_finish(self): + context = self._controller.get_current_context() + project_name = context["project_name"] + self._go_to_current_btn.setVisible(bool(project_name)) + self._projects_combobox.set_current_context_project(project_name) + if not self._refresh_handler.project_refreshed: + self._projects_combobox.refresh() + + def _on_load_finished(self, event): + error_info = event["error_info"] + if not error_info: + return + + box = LoadErrorMessageBox(error_info, self) + box.show() + + def _on_project_selection_changed(self, event): + self._selected_project_name = event["project_name"] + + def _on_folders_selection_changed(self, event): + self._selected_folder_ids = set(event["folder_ids"]) + self._update_thumbnails() + + def _on_versions_selection_changed(self, event): + self._selected_version_ids = set(event["version_ids"]) + self._update_thumbnails() + + def _update_thumbnails(self): + project_name = self._selected_project_name + thumbnail_ids = set() + if self._selected_version_ids: + thumbnail_id_by_entity_id = ( + self._controller.get_version_thumbnail_ids( + project_name, + self._selected_version_ids + ) + ) + thumbnail_ids = set(thumbnail_id_by_entity_id.values()) + elif self._selected_folder_ids: + thumbnail_id_by_entity_id = ( + self._controller.get_folder_thumbnail_ids( + project_name, + self._selected_folder_ids + ) + ) + thumbnail_ids = set(thumbnail_id_by_entity_id.values()) + + thumbnail_ids.discard(None) + + if not thumbnail_ids: + self._thumbnails_widget.set_current_thumbnails(None) + return + + thumbnail_paths = set() + for thumbnail_id in thumbnail_ids: + thumbnail_path = self._controller.get_thumbnail_path( + project_name, thumbnail_id) + thumbnail_paths.add(thumbnail_path) + thumbnail_paths.discard(None) + self._thumbnails_widget.set_current_thumbnail_paths(thumbnail_paths) + + def _on_projects_refresh(self): + self._refresh_handler.set_project_refreshed() + if not self._refresh_handler.folders_refreshed: + self._folders_widget.refresh() + + def _on_folders_refresh(self): + self._refresh_handler.set_folders_refreshed() + if not self._refresh_handler.products_refreshed: + self._products_widget.refresh() + + def _on_products_refresh(self): + self._refresh_handler.set_products_refreshed() diff --git a/openpype/tools/ayon_sceneinventory/__init__.py b/openpype/tools/ayon_sceneinventory/__init__.py new file mode 100644 index 0000000000..5412e2fea2 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/__init__.py @@ -0,0 +1,6 @@ +from .control import SceneInventoryController + + +__all__ = ( + "SceneInventoryController", +) diff --git a/openpype/tools/ayon_sceneinventory/control.py b/openpype/tools/ayon_sceneinventory/control.py new file mode 100644 index 0000000000..e98b0e307b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/control.py @@ -0,0 +1,134 @@ +import ayon_api + +from openpype.lib.events import QueuedEventSystem +from openpype.host import ILoadHost +from openpype.pipeline import ( + registered_host, + get_current_context, +) +from openpype.tools.ayon_utils.models import HierarchyModel + +from .models import SiteSyncModel + + +class SceneInventoryController: + """This is a temporary controller for AYON. 
+ + Goal of this temporary controller is to provide a way to get current + context instead of using 'AvalonMongoDB' object (or 'legacy_io'). + + Also provides (hopefully) cleaner api for site sync. + """ + + def __init__(self, host=None): + if host is None: + host = registered_host() + self._host = host + self._current_context = None + self._current_project = None + self._current_folder_id = None + self._current_folder_set = False + + self._site_sync_model = SiteSyncModel(self) + # Switch dialog requirements + self._hierarchy_model = HierarchyModel(self) + self._event_system = self._create_event_system() + + def emit_event(self, topic, data=None, source=None): + if data is None: + data = {} + self._event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self._event_system.add_callback(topic, callback) + + def reset(self): + self._current_context = None + self._current_project = None + self._current_folder_id = None + self._current_folder_set = False + + self._site_sync_model.reset() + self._hierarchy_model.reset() + + def get_current_context(self): + if self._current_context is None: + if hasattr(self._host, "get_current_context"): + self._current_context = self._host.get_current_context() + else: + self._current_context = get_current_context() + return self._current_context + + def get_current_project_name(self): + if self._current_project is None: + self._current_project = self.get_current_context()["project_name"] + return self._current_project + + def get_current_folder_id(self): + if self._current_folder_set: + return self._current_folder_id + + context = self.get_current_context() + project_name = context["project_name"] + folder_path = context.get("folder_path") + folder_name = context.get("asset_name") + folder_id = None + if folder_path: + folder = ayon_api.get_folder_by_path(project_name, folder_path) + if folder: + folder_id = folder["id"] + elif folder_name: + for folder in ayon_api.get_folders( + project_name, folder_names=[folder_name] + ): + folder_id = folder["id"] + break + + self._current_folder_id = folder_id + self._current_folder_set = True + return self._current_folder_id + + def get_containers(self): + host = self._host + if isinstance(host, ILoadHost): + return host.get_containers() + elif hasattr(host, "ls"): + return host.ls() + return [] + + # Site Sync methods + def is_sync_server_enabled(self): + return self._site_sync_model.is_sync_server_enabled() + + def get_sites_information(self): + return self._site_sync_model.get_sites_information() + + def get_site_provider_icons(self): + return self._site_sync_model.get_site_provider_icons() + + def get_representations_site_progress(self, representation_ids): + return self._site_sync_model.get_representations_site_progress( + representation_ids + ) + + def resync_representations(self, representation_ids, site_type): + return self._site_sync_model.resync_representations( + representation_ids, site_type + ) + + # Switch dialog methods + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_folder_label(self, folder_id): + if not folder_id: + return None + project_name = self.get_current_project_name() + folder_item = self._hierarchy_model.get_folder_item( + project_name, folder_id) + if folder_item is None: + return None + return folder_item.label + + def _create_event_system(self): + return QueuedEventSystem() diff --git a/openpype/tools/ayon_sceneinventory/model.py 
b/openpype/tools/ayon_sceneinventory/model.py new file mode 100644 index 0000000000..16924b0a7e --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/model.py @@ -0,0 +1,622 @@ +import collections +import re +import logging +import uuid +import copy + +from collections import defaultdict + +from qtpy import QtCore, QtGui +import qtawesome + +from openpype.client import ( + get_assets, + get_subsets, + get_versions, + get_last_version_by_subset_id, + get_representations, +) +from openpype.pipeline import ( + get_current_project_name, + schema, + HeroVersionType, +) +from openpype.style import get_default_entity_icon_color +from openpype.tools.utils.models import TreeModel, Item + + +def walk_hierarchy(node): + """Recursively yield group node.""" + for child in node.children(): + if child.get("isGroupNode"): + yield child + + for _child in walk_hierarchy(child): + yield _child + + +class InventoryModel(TreeModel): + """The model for the inventory""" + + Columns = [ + "Name", + "version", + "count", + "family", + "group", + "loader", + "objectName", + "active_site", + "remote_site", + ] + active_site_col = Columns.index("active_site") + remote_site_col = Columns.index("remote_site") + + OUTDATED_COLOR = QtGui.QColor(235, 30, 30) + CHILD_OUTDATED_COLOR = QtGui.QColor(200, 160, 30) + GRAYOUT_COLOR = QtGui.QColor(160, 160, 160) + + UniqueRole = QtCore.Qt.UserRole + 2 # unique label role + + def __init__(self, controller, parent=None): + super(InventoryModel, self).__init__(parent) + self.log = logging.getLogger(self.__class__.__name__) + + self._controller = controller + + self._hierarchy_view = False + + self._default_icon_color = get_default_entity_icon_color() + + site_icons = self._controller.get_site_provider_icons() + + self._site_icons = { + provider: QtGui.QIcon(icon_path) + for provider, icon_path in site_icons.items() + } + + def outdated(self, item): + value = item.get("version") + if isinstance(value, HeroVersionType): + return False + + if item.get("version") == item.get("highest_version"): + return False + return True + + def data(self, index, role): + if not index.isValid(): + return + + item = index.internalPointer() + + if role == QtCore.Qt.FontRole: + # Make top-level entries bold + if item.get("isGroupNode") or item.get("isNotSet"): # group-item + font = QtGui.QFont() + font.setBold(True) + return font + + if role == QtCore.Qt.ForegroundRole: + # Set the text color to the OUTDATED_COLOR when the + # collected version is not the same as the highest version + key = self.Columns[index.column()] + if key == "version": # version + if item.get("isGroupNode"): # group-item + if self.outdated(item): + return self.OUTDATED_COLOR + + if self._hierarchy_view: + # If current group is not outdated, check if any + # outdated children. + for _node in walk_hierarchy(item): + if self.outdated(_node): + return self.CHILD_OUTDATED_COLOR + else: + + if self._hierarchy_view: + # Although this is not a group item, we still need + # to distinguish which one contain outdated child. 
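+ # (such rows are marked with a darker shade of the warning color)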
+                        for _node in walk_hierarchy(item):
+                            if self.outdated(_node):
+                                return self.CHILD_OUTDATED_COLOR.darker(150)
+
+                    return self.GRAYOUT_COLOR
+
+            if key == "Name" and not item.get("isGroupNode"):
+                return self.GRAYOUT_COLOR
+
+        # Add icons
+        if role == QtCore.Qt.DecorationRole:
+            if index.column() == 0:
+                # Override color
+                color = item.get("color", self._default_icon_color)
+                if item.get("isGroupNode"):  # group-item
+                    return qtawesome.icon("fa.folder", color=color)
+                if item.get("isNotSet"):
+                    return qtawesome.icon("fa.exclamation-circle", color=color)
+
+                return qtawesome.icon("fa.file-o", color=color)
+
+            if index.column() == 3:
+                # Family icon
+                return item.get("familyIcon", None)
+
+            column_name = self.Columns[index.column()]
+
+            if column_name == "group" and item.get("group"):
+                return qtawesome.icon("fa.object-group",
+                                      color=get_default_entity_icon_color())
+
+            if item.get("isGroupNode"):
+                if column_name == "active_site":
+                    provider = item.get("active_site_provider")
+                    return self._site_icons.get(provider)
+
+                if column_name == "remote_site":
+                    provider = item.get("remote_site_provider")
+                    return self._site_icons.get(provider)
+
+        if role == QtCore.Qt.DisplayRole and item.get("isGroupNode"):
+            column_name = self.Columns[index.column()]
+            progress = None
+            if column_name == "active_site":
+                progress = item.get("active_site_progress", 0)
+            elif column_name == "remote_site":
+                progress = item.get("remote_site_progress", 0)
+            if progress is not None:
+                return "{}%".format(max(progress, 0) * 100)
+
+        if role == self.UniqueRole:
+            return item["representation"] + item.get("objectName", "")
+
+        return super(InventoryModel, self).data(index, role)
+
+    def set_hierarchy_view(self, state):
+        """Set whether to display subsets in hierarchy view."""
+        state = bool(state)
+
+        if state != self._hierarchy_view:
+            self._hierarchy_view = state
+
+    def refresh(self, selected=None, containers=None):
+        """Refresh the model"""
+
+        # for debugging or testing, injecting items from outside
+        if containers is None:
+            containers = self._controller.get_containers()
+
+        self.clear()
+        if not selected or not self._hierarchy_view:
+            self._add_containers(containers)
+            return
+
+        # Filter by cherry-picked items
+        self._add_containers((
+            container
+            for container in containers
+            if container["objectName"] in selected
+        ))
+
+    def _add_containers(self, containers, parent=None):
+        """Add the items to the model.
+
+        The items should be formatted similarly to what `api.ls()` returns;
+        an item is then represented as:
+            {"filename_v001.ma": [full/filename/of/loaded/filename_v001.ma,
+                                  full/filename/of/loaded/filename_v001.ma],
+             "nodetype" : "reference",
+             "node": "referenceNode1"}
+
+        Note: When performing an additional call to `_add_containers` it
+            will *not* group the new items with previously existing item
+            groups of the same type.
+
+        Args:
+            containers (generator): Container items.
+            parent (Item, optional): Set this item as parent for the added
+                items when provided. Defaults to the root of the model.
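+
+        Example:
+            >>> # e.g. feed in containers collected from the host; the
+            >>> # 'model' and 'controller' names here are illustrative
+            >>> model._add_containers(controller.get_containers())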
+ + Returns: + node.Item: root node which has children added based on the data + """ + + project_name = get_current_project_name() + + self.beginResetModel() + + # Group by representation + grouped = defaultdict(lambda: {"containers": list()}) + for container in containers: + repre_id = container["representation"] + grouped[repre_id]["containers"].append(container) + + ( + repres_by_id, + versions_by_id, + products_by_id, + folders_by_id, + ) = self._query_entities(project_name, set(grouped.keys())) + # Add to model + not_found = defaultdict(list) + not_found_ids = [] + for repre_id, group_dict in sorted(grouped.items()): + group_containers = group_dict["containers"] + representation = repres_by_id.get(repre_id) + if not representation: + not_found["representation"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + version = versions_by_id.get(representation["parent"]) + if not version: + not_found["version"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + product = products_by_id.get(version["parent"]) + if not product: + not_found["product"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + folder = folders_by_id.get(product["parent"]) + if not folder: + not_found["folder"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + group_dict.update({ + "representation": representation, + "version": version, + "subset": product, + "asset": folder + }) + + for _repre_id in not_found_ids: + grouped.pop(_repre_id) + + for where, group_containers in not_found.items(): + # create the group header + group_node = Item() + name = "< NOT FOUND - {} >".format(where) + group_node["Name"] = name + group_node["representation"] = name + group_node["count"] = len(group_containers) + group_node["isGroupNode"] = False + group_node["isNotSet"] = True + + self.add_child(group_node, parent=parent) + + for container in group_containers: + item_node = Item() + item_node.update(container) + item_node["Name"] = container.get("objectName", "NO NAME") + item_node["isNotFound"] = True + self.add_child(item_node, parent=group_node) + + # TODO Use product icons + family_icon = qtawesome.icon( + "fa.folder", color="#0091B2" + ) + # Prepare site sync specific data + progress_by_id = self._controller.get_representations_site_progress( + set(grouped.keys()) + ) + sites_info = self._controller.get_sites_information() + + for repre_id, group_dict in sorted(grouped.items()): + group_containers = group_dict["containers"] + representation = group_dict["representation"] + version = group_dict["version"] + subset = group_dict["subset"] + asset = group_dict["asset"] + + # Get the primary family + maj_version, _ = schema.get_schema_version(subset["schema"]) + if maj_version < 3: + src_doc = version + else: + src_doc = subset + + prim_family = src_doc["data"].get("family") + if not prim_family: + families = src_doc["data"].get("families") + if families: + prim_family = families[0] + + # Store the highest available version so the model can know + # whether current version is currently up-to-date. 
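+            # NOTE: This is one query per representation group; if it ever
+            # becomes a bottleneck it could likely be batched with a single
+            # 'get_last_versions' call for all product ids.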
+ highest_version = get_last_version_by_subset_id( + project_name, version["parent"] + ) + + # create the group header + group_node = Item() + group_node["Name"] = "{}_{}: ({})".format( + asset["name"], subset["name"], representation["name"] + ) + group_node["representation"] = repre_id + group_node["version"] = version["name"] + group_node["highest_version"] = highest_version["name"] + group_node["family"] = prim_family or "" + group_node["familyIcon"] = family_icon + group_node["count"] = len(group_containers) + group_node["isGroupNode"] = True + group_node["group"] = subset["data"].get("subsetGroup") + + # Site sync specific data + progress = progress_by_id[repre_id] + group_node.update(sites_info) + group_node["active_site_progress"] = progress["active_site"] + group_node["remote_site_progress"] = progress["remote_site"] + + self.add_child(group_node, parent=parent) + + for container in group_containers: + item_node = Item() + item_node.update(container) + + # store the current version on the item + item_node["version"] = version["name"] + + # Remapping namespace to item name. + # Noted that the name key is capital "N", by doing this, we + # can view namespace in GUI without changing container data. + item_node["Name"] = container["namespace"] + + self.add_child(item_node, parent=group_node) + + self.endResetModel() + + return self._root_item + + def _query_entities(self, project_name, repre_ids): + """Query entities for representations from containers. + + Returns: + tuple[dict, dict, dict, dict]: Representation, version, product + and folder documents by id. + """ + + repres_by_id = {} + versions_by_id = {} + products_by_id = {} + folders_by_id = {} + output = ( + repres_by_id, + versions_by_id, + products_by_id, + folders_by_id, + ) + + filtered_repre_ids = set() + for repre_id in repre_ids: + # Filter out invalid representation ids + # NOTE: This is added because scenes from OpenPype did contain + # ObjectId from mongo. 
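+            # For example a 32 character hex id such as
+            # "d9f0565e62c511eea4b6542e3661c987" parses fine, while a
+            # 24 character Mongo ObjectId raises ValueError and is skipped.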
+            try:
+                uuid.UUID(repre_id)
+                filtered_repre_ids.add(repre_id)
+            except ValueError:
+                continue
+        if not filtered_repre_ids:
+            return output
+
+        repre_docs = get_representations(project_name, filtered_repre_ids)
+        repres_by_id.update({
+            repre_doc["_id"]: repre_doc
+            for repre_doc in repre_docs
+        })
+        version_ids = {
+            repre_doc["parent"] for repre_doc in repres_by_id.values()
+        }
+        if not version_ids:
+            return output
+
+        version_docs = get_versions(project_name, version_ids, hero=True)
+        versions_by_id.update({
+            version_doc["_id"]: version_doc
+            for version_doc in version_docs
+        })
+        hero_versions_by_subversion_id = collections.defaultdict(list)
+        for version_doc in versions_by_id.values():
+            if version_doc["type"] != "hero_version":
+                continue
+            subversion = version_doc["version_id"]
+            hero_versions_by_subversion_id[subversion].append(version_doc)
+
+        if hero_versions_by_subversion_id:
+            subversion_ids = set(
+                hero_versions_by_subversion_id.keys()
+            )
+            subversion_docs = get_versions(project_name, subversion_ids)
+            for subversion_doc in subversion_docs:
+                subversion_id = subversion_doc["_id"]
+                subversion_ids.discard(subversion_id)
+                h_version_docs = hero_versions_by_subversion_id[subversion_id]
+                for version_doc in h_version_docs:
+                    version_doc["name"] = HeroVersionType(
+                        subversion_doc["name"]
+                    )
+                    version_doc["data"] = copy.deepcopy(
+                        subversion_doc["data"]
+                    )
+
+            for subversion_id in subversion_ids:
+                h_version_docs = hero_versions_by_subversion_id[subversion_id]
+                for version_doc in h_version_docs:
+                    versions_by_id.pop(version_doc["_id"])
+
+        product_ids = {
+            version_doc["parent"]
+            for version_doc in versions_by_id.values()
+        }
+        if not product_ids:
+            return output
+        product_docs = get_subsets(project_name, product_ids)
+        products_by_id.update({
+            product_doc["_id"]: product_doc
+            for product_doc in product_docs
+        })
+        folder_ids = {
+            product_doc["parent"]
+            for product_doc in products_by_id.values()
+        }
+        if not folder_ids:
+            return output
+
+        folder_docs = get_assets(project_name, folder_ids)
+        folders_by_id.update({
+            folder_doc["_id"]: folder_doc
+            for folder_doc in folder_docs
+        })
+        return output
+
+
+class FilterProxyModel(QtCore.QSortFilterProxyModel):
+    """Filter model where the key column's value is in the filtered tags"""
+
+    def __init__(self, *args, **kwargs):
+        super(FilterProxyModel, self).__init__(*args, **kwargs)
+        self._filter_outdated = False
+        self._hierarchy_view = False
+
+    def filterAcceptsRow(self, row, parent):
+        model = self.sourceModel()
+        source_index = model.index(row, self.filterKeyColumn(), parent)
+
+        # Always allow bottom entries (individual containers), since their
+        # parent group would already be hidden if it did not pass validation.
+        rows = model.rowCount(source_index)
+        if not rows:
+            return True
+
+        # Filter by regex
+        if hasattr(self, "filterRegExp"):
+            regex = self.filterRegExp()
+        else:
+            regex = self.filterRegularExpression()
+        pattern = regex.pattern()
+        if pattern:
+            pattern = re.escape(pattern)
+
+            if not self._matches(row, parent, pattern):
+                return False
+
+        if self._filter_outdated:
+            # When filtering to outdated, the up-to-date entries are
+            # filtered out, so only outdated entries are allowed.
+            if not self._is_outdated(row, parent):
+                return False
+
+        return True
+
+    def set_filter_outdated(self, state):
+        """Set whether to show the outdated entries only."""
+        state = bool(state)
+
+        if state != self._filter_outdated:
+            self._filter_outdated = bool(state)
+            self.invalidateFilter()
+
+    def set_hierarchy_view(self, state):
+        state = bool(state)
+
+        if state != self._hierarchy_view:
+            self._hierarchy_view = state
+
+    def _is_outdated(self, row, parent):
+        """Return whether row is outdated.
+
+        A row is considered outdated if its internal data contains both
+        "version" and "highest_version" values and the two are not equal.
+
+        """
+        def outdated(node):
+            version = node.get("version", None)
+            highest = node.get("highest_version", None)
+
+            # Always allow indices that have no version data at all
+            if version is None and highest is None:
+                return True
+
+            # If either a version or highest is present but not the other
+            # consider the item invalid.
+            if not self._hierarchy_view:
+                # Skip this check if in hierarchy view, or the child item
+                # node will be hidden even if it's actually outdated.
+                if version is None or highest is None:
+                    return False
+            return version != highest
+
+        index = self.sourceModel().index(row, self.filterKeyColumn(), parent)
+
+        # The scene contents are grouped by "representation", e.g. the same
+        # "representation" loaded twice is grouped under the same header.
+        # Since the version check filters these parent groups we skip that
+        # check for the individual children.
+        has_parent = index.parent().isValid()
+        if has_parent and not self._hierarchy_view:
+            return True
+
+        # Filter to those that have different version numbers
+        node = index.internalPointer()
+        if outdated(node):
+            return True
+
+        if self._hierarchy_view:
+            for _node in walk_hierarchy(node):
+                if outdated(_node):
+                    return True
+
+        return False
+
+    def _matches(self, row, parent, pattern):
+        """Return whether row matches regex pattern.
+
+        Args:
+            row (int): row number in model
+            parent (QtCore.QModelIndex): parent index
+            pattern (regex.pattern): pattern to check for in key
+
+        Returns:
+            bool
+
+        """
+        model = self.sourceModel()
+        column = self.filterKeyColumn()
+        role = self.filterRole()
+
+        def matches(row, parent, pattern):
+            index = model.index(row, column, parent)
+            key = model.data(index, role)
+            if re.search(pattern, key, re.IGNORECASE):
+                return True
+
+        if matches(row, parent, pattern):
+            return True
+
+        # Also allow if any of the children matches
+        source_index = model.index(row, column, parent)
+        rows = model.rowCount(source_index)
+
+        if any(
+            matches(idx, source_index, pattern)
+            for idx in range(rows)
+        ):
+            return True
+
+        if not self._hierarchy_view:
+            return False
+
+        # In hierarchy view also allow if any of the grandchildren matches
+        for idx in range(rows):
+            child_index = model.index(idx, column, source_index)
+            child_rows = model.rowCount(child_index)
+            if any(
+                self._matches(child_idx, child_index, pattern)
+                for child_idx in range(child_rows)
+            ):
+                return True
+
+        return False
diff --git a/openpype/tools/ayon_sceneinventory/models/__init__.py b/openpype/tools/ayon_sceneinventory/models/__init__.py
new file mode 100644
index 0000000000..c861d3c1a0
--- /dev/null
+++ b/openpype/tools/ayon_sceneinventory/models/__init__.py
@@ -0,0 +1,6 @@
+from .site_sync import SiteSyncModel
+
+
+__all__ = (
+    "SiteSyncModel",
+)
diff --git a/openpype/tools/ayon_sceneinventory/models/site_sync.py b/openpype/tools/ayon_sceneinventory/models/site_sync.py
new file mode 100644
index 0000000000..b8c9443230
--- /dev/null
+++ b/openpype/tools/ayon_sceneinventory/models/site_sync.py
@@ -0,0 +1,176 @@
+from openpype.client import get_representations
+from openpype.modules import ModulesManager
+
+NOT_SET = object()
+
+
+class SiteSyncModel:
+    def __init__(self, controller):
+        self._controller = controller
+
+        self._sync_server_module = NOT_SET
+        self._sync_server_enabled = None
+        self._active_site = NOT_SET
+        self._remote_site = NOT_SET
+        self._active_site_provider = NOT_SET
+        self._remote_site_provider = NOT_SET
+
+    def reset(self):
+        self._sync_server_module = NOT_SET
+        self._sync_server_enabled = None
+        self._active_site = NOT_SET
+        self._remote_site = NOT_SET
+        self._active_site_provider = NOT_SET
+        self._remote_site_provider = NOT_SET
+
+    def is_sync_server_enabled(self):
+        """Return whether site sync is enabled.
+
+        Returns:
+            bool: Is enabled or not.
+        """
+
+        self._cache_sync_server_module()
+        return self._sync_server_enabled
+
+    def get_site_provider_icons(self):
+        """Icon paths per provider.
+
+        Returns:
+            dict[str, str]: Path by provider name.
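+                For example: {"gdrive": ".../providers/resources/gdrive.png"}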
+ """ + + site_sync = self._get_sync_server_module() + if site_sync is None: + return {} + return site_sync.get_site_icons() + + def get_sites_information(self): + return { + "active_site": self._get_active_site(), + "active_site_provider": self._get_active_site_provider(), + "remote_site": self._get_remote_site(), + "remote_site_provider": self._get_remote_site_provider() + } + + def get_representations_site_progress(self, representation_ids): + """Get progress of representations sync.""" + + representation_ids = set(representation_ids) + output = { + repre_id: { + "active_site": 0, + "remote_site": 0, + } + for repre_id in representation_ids + } + if not self.is_sync_server_enabled(): + return output + + project_name = self._controller.get_current_project_name() + site_sync = self._get_sync_server_module() + repre_docs = get_representations(project_name, representation_ids) + active_site = self._get_active_site() + remote_site = self._get_remote_site() + + for repre_doc in repre_docs: + repre_output = output[repre_doc["_id"]] + result = site_sync.get_progress_for_repre( + repre_doc, active_site, remote_site + ) + repre_output["active_site"] = result[active_site] + repre_output["remote_site"] = result[remote_site] + + return output + + def resync_representations(self, representation_ids, site_type): + """ + + Args: + representation_ids (Iterable[str]): Representation ids. + site_type (Literal[active_site, remote_site]): Site type. + """ + + project_name = self._controller.get_current_project_name() + site_sync = self._get_sync_server_module() + active_site = self._get_active_site() + remote_site = self._get_remote_site() + progress = self.get_representations_site_progress( + representation_ids + ) + for repre_id in representation_ids: + repre_progress = progress.get(repre_id) + if not repre_progress: + continue + + if site_type == "active_site": + # check opposite from added site, must be 1 or unable to sync + check_progress = repre_progress["remote_site"] + site = active_site + else: + check_progress = repre_progress["active_site"] + site = remote_site + + if check_progress == 1: + site_sync.add_site( + project_name, repre_id, site, force=True + ) + + def _get_sync_server_module(self): + self._cache_sync_server_module() + return self._sync_server_module + + def _cache_sync_server_module(self): + if self._sync_server_module is not NOT_SET: + return self._sync_server_module + manager = ModulesManager() + site_sync = manager.modules_by_name.get("sync_server") + sync_enabled = site_sync is not None and site_sync.enabled + self._sync_server_module = site_sync + self._sync_server_enabled = sync_enabled + + def _get_active_site(self): + if self._active_site is NOT_SET: + self._cache_sites() + return self._active_site + + def _get_remote_site(self): + if self._remote_site is NOT_SET: + self._cache_sites() + return self._remote_site + + def _get_active_site_provider(self): + if self._active_site_provider is NOT_SET: + self._cache_sites() + return self._active_site_provider + + def _get_remote_site_provider(self): + if self._remote_site_provider is NOT_SET: + self._cache_sites() + return self._remote_site_provider + + def _cache_sites(self): + site_sync = self._get_sync_server_module() + active_site = None + remote_site = None + active_site_provider = None + remote_site_provider = None + if site_sync is not None: + project_name = self._controller.get_current_project_name() + active_site = site_sync.get_active_site(project_name) + remote_site = site_sync.get_remote_site(project_name) + 
active_site_provider = "studio" + remote_site_provider = "studio" + if active_site != "studio": + active_site_provider = site_sync.get_active_provider( + project_name, active_site + ) + if remote_site != "studio": + remote_site_provider = site_sync.get_active_provider( + project_name, remote_site + ) + + self._active_site = active_site + self._remote_site = remote_site + self._active_site_provider = active_site_provider + self._remote_site_provider = remote_site_provider diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py b/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py new file mode 100644 index 0000000000..4c07832829 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py @@ -0,0 +1,6 @@ +from .dialog import SwitchAssetDialog + + +__all__ = ( + "SwitchAssetDialog", +) diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py b/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py new file mode 100644 index 0000000000..2ebed7f89b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py @@ -0,0 +1,1333 @@ +import collections +import logging + +from qtpy import QtWidgets, QtCore +import qtawesome + +from openpype.client import ( + get_assets, + get_subset_by_name, + get_subsets, + get_versions, + get_hero_versions, + get_last_versions, + get_representations, +) +from openpype.pipeline.load import ( + discover_loader_plugins, + switch_container, + get_repres_contexts, + loaders_from_repre_context, + LoaderSwitchNotImplementedError, + IncompatibleLoaderError, + LoaderNotFoundError +) + +from .widgets import ( + ButtonWithMenu, + SearchComboBox +) +from .folders_input import FoldersField + +log = logging.getLogger("SwitchAssetDialog") + + +class ValidationState: + def __init__(self): + self.folder_ok = True + self.product_ok = True + self.repre_ok = True + + @property + def all_ok(self): + return ( + self.folder_ok + and self.product_ok + and self.repre_ok + ) + + +class SwitchAssetDialog(QtWidgets.QDialog): + """Widget to support asset switching""" + + MIN_WIDTH = 550 + + switched = QtCore.Signal() + + def __init__(self, controller, parent=None, items=None): + super(SwitchAssetDialog, self).__init__(parent) + + self.setWindowTitle("Switch selected items ...") + + # Force and keep focus dialog + self.setModal(True) + + folders_field = FoldersField(controller, self) + products_combox = SearchComboBox(self) + repres_combobox = SearchComboBox(self) + + products_combox.set_placeholder("") + repres_combobox.set_placeholder("") + + folder_label = QtWidgets.QLabel(self) + product_label = QtWidgets.QLabel(self) + repre_label = QtWidgets.QLabel(self) + + current_folder_btn = QtWidgets.QPushButton("Use current folder", self) + + accept_icon = qtawesome.icon("fa.check", color="white") + accept_btn = ButtonWithMenu(self) + accept_btn.setIcon(accept_icon) + + main_layout = QtWidgets.QGridLayout(self) + # Folder column + main_layout.addWidget(current_folder_btn, 0, 0) + main_layout.addWidget(folders_field, 1, 0) + main_layout.addWidget(folder_label, 2, 0) + # Product column + main_layout.addWidget(products_combox, 1, 1) + main_layout.addWidget(product_label, 2, 1) + # Representation column + main_layout.addWidget(repres_combobox, 1, 2) + main_layout.addWidget(repre_label, 2, 2) + # Btn column + main_layout.addWidget(accept_btn, 1, 3) + main_layout.setColumnStretch(0, 1) + main_layout.setColumnStretch(1, 1) + main_layout.setColumnStretch(2, 1) + main_layout.setColumnStretch(3, 0) + + show_timer = 
QtCore.QTimer()
+        show_timer.setInterval(0)
+        show_timer.setSingleShot(False)
+
+        show_timer.timeout.connect(self._on_show_timer)
+        folders_field.value_changed.connect(
+            self._combobox_value_changed
+        )
+        products_combox.currentIndexChanged.connect(
+            self._combobox_value_changed
+        )
+        repres_combobox.currentIndexChanged.connect(
+            self._combobox_value_changed
+        )
+        accept_btn.clicked.connect(self._on_accept)
+        current_folder_btn.clicked.connect(self._on_current_folder)
+
+        self._show_timer = show_timer
+        self._show_counter = 0
+
+        self._current_folder_btn = current_folder_btn
+
+        self._folders_field = folders_field
+        self._products_combox = products_combox
+        self._representations_box = repres_combobox
+
+        self._folder_label = folder_label
+        self._product_label = product_label
+        self._repre_label = repre_label
+
+        self._accept_btn = accept_btn
+
+        self.setMinimumWidth(self.MIN_WIDTH)
+
+        # Set default focus to the accept button so typing does not go
+        # straight into the first asset field; this also keeps the
+        # placeholder value visible.
+        accept_btn.setFocus()
+
+        self._folder_docs_by_id = {}
+        self._product_docs_by_id = {}
+        self._version_docs_by_id = {}
+        self._repre_docs_by_id = {}
+
+        self._missing_folder_ids = set()
+        self._missing_product_ids = set()
+        self._missing_version_ids = set()
+        self._missing_repre_ids = set()
+        self._missing_docs = False
+
+        self._inactive_folder_ids = set()
+        self._inactive_product_ids = set()
+        self._inactive_repre_ids = set()
+
+        self._init_folder_id = None
+        self._init_product_name = None
+        self._init_repre_name = None
+
+        self._fill_check = False
+
+        self._project_name = controller.get_current_project_name()
+        self._folder_id = controller.get_current_folder_id()
+
+        self._current_folder_btn.setEnabled(self._folder_id is not None)
+
+        self._controller = controller
+
+        self._items = items
+        self._prepare_content_data()
+
+    def showEvent(self, event):
+        super(SwitchAssetDialog, self).showEvent(event)
+        self._show_timer.start()
+
+    def refresh(self, init_refresh=False):
+        """Build the needed comboboxes with content"""
+        if not self._fill_check and not init_refresh:
+            return
+
+        self._fill_check = False
+
+        validation_state = ValidationState()
+        self._folders_field.refresh()
+        # Set other comboboxes to empty if any document is missing or
+        # any folder of loaded representations is archived.
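+        # Validation cascades folder -> product -> representation; each
+        # step below runs only when the previous one passed.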
+ self._is_folder_ok(validation_state) + if validation_state.folder_ok: + product_values = self._get_product_box_values() + self._fill_combobox(product_values, "product") + self._is_product_ok(validation_state) + + if validation_state.folder_ok and validation_state.product_ok: + repre_values = sorted(self._representations_box_values()) + self._fill_combobox(repre_values, "repre") + self._is_repre_ok(validation_state) + + # Fill comboboxes with values + self.set_labels() + + self.apply_validations(validation_state) + + self._build_loaders_menu() + + if init_refresh: + # pre select context if possible + self._folders_field.set_selected_item(self._init_folder_id) + self._products_combox.set_valid_value(self._init_product_name) + self._representations_box.set_valid_value(self._init_repre_name) + + self._fill_check = True + + def set_labels(self): + folder_label = self._folders_field.get_selected_folder_label() + product_label = self._products_combox.get_valid_value() + repre_label = self._representations_box.get_valid_value() + + default = "*No changes" + self._folder_label.setText(folder_label or default) + self._product_label.setText(product_label or default) + self._repre_label.setText(repre_label or default) + + def apply_validations(self, validation_state): + error_msg = "*Please select" + error_sheet = "border: 1px solid red;" + + product_sheet = None + repre_sheet = None + accept_state = "" + if validation_state.folder_ok is False: + self._folder_label.setText(error_msg) + elif validation_state.product_ok is False: + product_sheet = error_sheet + self._product_label.setText(error_msg) + elif validation_state.repre_ok is False: + repre_sheet = error_sheet + self._repre_label.setText(error_msg) + + if validation_state.all_ok: + accept_state = "1" + + self._folders_field.set_valid(validation_state.folder_ok) + self._products_combox.setStyleSheet(product_sheet or "") + self._representations_box.setStyleSheet(repre_sheet or "") + + self._accept_btn.setEnabled(validation_state.all_ok) + self._set_style_property(self._accept_btn, "state", accept_state) + + def find_last_versions(self, product_ids): + project_name = self._project_name + return get_last_versions( + project_name, + subset_ids=product_ids, + fields=["_id", "parent", "type"] + ) + + def _on_show_timer(self): + if self._show_counter == 2: + self._show_timer.stop() + self.refresh(True) + else: + self._show_counter += 1 + + def _prepare_content_data(self): + repre_ids = { + item["representation"] + for item in self._items + } + + project_name = self._project_name + repres = list(get_representations( + project_name, + representation_ids=repre_ids, + archived=True, + )) + repres_by_id = {str(repre["_id"]): repre for repre in repres} + + content_repre_docs_by_id = {} + inactive_repre_ids = set() + missing_repre_ids = set() + version_ids = set() + for repre_id in repre_ids: + repre_doc = repres_by_id.get(repre_id) + if repre_doc is None: + missing_repre_ids.add(repre_id) + elif repres_by_id[repre_id]["type"] == "archived_representation": + inactive_repre_ids.add(repre_id) + version_ids.add(repre_doc["parent"]) + else: + content_repre_docs_by_id[repre_id] = repre_doc + version_ids.add(repre_doc["parent"]) + + version_docs = get_versions( + project_name, + version_ids=version_ids, + hero=True + ) + content_version_docs_by_id = {} + for version_doc in version_docs: + version_id = version_doc["_id"] + content_version_docs_by_id[version_id] = version_doc + + missing_version_ids = set() + product_ids = set() + for version_id in version_ids: + 
version_doc = content_version_docs_by_id.get(version_id) + if version_doc is None: + missing_version_ids.add(version_id) + else: + product_ids.add(version_doc["parent"]) + + product_docs = get_subsets( + project_name, subset_ids=product_ids, archived=True + ) + product_docs_by_id = {sub["_id"]: sub for sub in product_docs} + + folder_ids = set() + inactive_product_ids = set() + missing_product_ids = set() + content_product_docs_by_id = {} + for product_id in product_ids: + product_doc = product_docs_by_id.get(product_id) + if product_doc is None: + missing_product_ids.add(product_id) + elif product_doc["type"] == "archived_subset": + folder_ids.add(product_doc["parent"]) + inactive_product_ids.add(product_id) + else: + folder_ids.add(product_doc["parent"]) + content_product_docs_by_id[product_id] = product_doc + + folder_docs = get_assets( + project_name, asset_ids=folder_ids, archived=True + ) + folder_docs_by_id = { + folder_doc["_id"]: folder_doc + for folder_doc in folder_docs + } + + missing_folder_ids = set() + inactive_folder_ids = set() + content_folder_docs_by_id = {} + for folder_id in folder_ids: + folder_doc = folder_docs_by_id.get(folder_id) + if folder_doc is None: + missing_folder_ids.add(folder_id) + elif folder_doc["type"] == "archived_asset": + inactive_folder_ids.add(folder_id) + else: + content_folder_docs_by_id[folder_id] = folder_doc + + # stash context values, works only for single representation + init_folder_id = None + init_product_name = None + init_repre_name = None + if len(repres) == 1: + init_repre_doc = repres[0] + init_version_doc = content_version_docs_by_id.get( + init_repre_doc["parent"]) + init_product_doc = None + init_folder_doc = None + if init_version_doc: + init_product_doc = content_product_docs_by_id.get( + init_version_doc["parent"] + ) + if init_product_doc: + init_folder_doc = content_folder_docs_by_id.get( + init_product_doc["parent"] + ) + if init_folder_doc: + init_repre_name = init_repre_doc["name"] + init_product_name = init_product_doc["name"] + init_folder_id = init_folder_doc["_id"] + + self._init_folder_id = init_folder_id + self._init_product_name = init_product_name + self._init_repre_name = init_repre_name + + self._folder_docs_by_id = content_folder_docs_by_id + self._product_docs_by_id = content_product_docs_by_id + self._version_docs_by_id = content_version_docs_by_id + self._repre_docs_by_id = content_repre_docs_by_id + + self._missing_folder_ids = missing_folder_ids + self._missing_product_ids = missing_product_ids + self._missing_version_ids = missing_version_ids + self._missing_repre_ids = missing_repre_ids + self._missing_docs = ( + bool(missing_folder_ids) + or bool(missing_version_ids) + or bool(missing_product_ids) + or bool(missing_repre_ids) + ) + + self._inactive_folder_ids = inactive_folder_ids + self._inactive_product_ids = inactive_product_ids + self._inactive_repre_ids = inactive_repre_ids + + def _combobox_value_changed(self, *args, **kwargs): + self.refresh() + + def _build_loaders_menu(self): + repre_ids = self._get_current_output_repre_ids() + loaders = self._get_loaders(repre_ids) + # Get and destroy the action group + self._accept_btn.clear_actions() + + if not loaders: + return + + # Build new action group + group = QtWidgets.QActionGroup(self._accept_btn) + + for loader in loaders: + # Label + label = getattr(loader, "label", None) + if label is None: + label = loader.__name__ + + action = group.addAction(label) + # action = QtWidgets.QAction(label) + action.setData(loader) + + # Support font-awesome icons 
using the `.icon` and `.color`
+            # attributes on plug-ins.
+            icon = getattr(loader, "icon", None)
+            if icon is not None:
+                try:
+                    key = "fa.{0}".format(icon)
+                    color = getattr(loader, "color", "white")
+                    action.setIcon(qtawesome.icon(key, color=color))
+
+                except Exception as exc:
+                    print("Unable to set icon for loader {}: {}".format(
+                        loader, str(exc)
+                    ))
+
+            self._accept_btn.add_action(action)
+
+        group.triggered.connect(self._on_action_clicked)
+
+    def _on_action_clicked(self, action):
+        loader_plugin = action.data()
+        self._trigger_switch(loader_plugin)
+
+    def _get_loaders(self, repre_ids):
+        repre_contexts = None
+        if repre_ids:
+            repre_contexts = get_repres_contexts(repre_ids)
+
+        if not repre_contexts:
+            return list()
+
+        available_loaders = []
+        for loader_plugin in discover_loader_plugins():
+            # Skip loaders without switch method
+            if not hasattr(loader_plugin, "switch"):
+                continue
+
+            # Skip utility loaders
+            if (
+                hasattr(loader_plugin, "is_utility")
+                and loader_plugin.is_utility
+            ):
+                continue
+            available_loaders.append(loader_plugin)
+
+        loaders = None
+        for repre_context in repre_contexts.values():
+            _loaders = set(loaders_from_repre_context(
+                available_loaders, repre_context
+            ))
+            if loaders is None:
+                loaders = _loaders
+            else:
+                loaders = _loaders.intersection(loaders)
+
+            if not loaders:
+                break
+
+        if loaders is None:
+            loaders = []
+        else:
+            loaders = list(loaders)
+
+        return loaders
+
+    def _fill_combobox(self, values, combobox_type):
+        if combobox_type == "product":
+            combobox_widget = self._products_combox
+        elif combobox_type == "repre":
+            combobox_widget = self._representations_box
+        else:
+            return
+        selected_value = combobox_widget.get_valid_value()
+
+        # Fill combobox
+        if values is not None:
+            combobox_widget.populate(list(sorted(values)))
+            if selected_value and selected_value in values:
+                index = None
+                for idx in range(combobox_widget.count()):
+                    if selected_value == str(combobox_widget.itemText(idx)):
+                        index = idx
+                        break
+                if index is not None:
+                    combobox_widget.setCurrentIndex(index)
+
+    def _set_style_property(self, widget, name, value):
+        cur_value = widget.property(name)
+        if cur_value == value:
+            return
+        widget.setProperty(name, value)
+        widget.style().polish(widget)
+
+    def _get_current_output_repre_ids(self):
+        # NOTE hero versions are not used because it is expected that
+        # a hero version has the same representations as the latest version
+        selected_folder_id = self._folders_field.get_selected_folder_id()
+        selected_product_name = self._products_combox.currentText()
+        selected_repre = self._representations_box.currentText()
+
+        # Nothing is selected
+        # [ ] [ ] [ ]
+        if (
+            not selected_folder_id
+            and not selected_product_name
+            and not selected_repre
+        ):
+            return list(self._repre_docs_by_id.keys())
+
+        # Everything is selected
+        # [x] [x] [x]
+        if selected_folder_id and selected_product_name and selected_repre:
+            return self._get_current_output_repre_ids_xxx(
+                selected_folder_id, selected_product_name, selected_repre
+            )
+
+        # [x] [x] [ ]
+        # If folder and product is selected
+        if selected_folder_id and selected_product_name:
+            return self._get_current_output_repre_ids_xxo(
+                selected_folder_id, selected_product_name
+            )
+
+        # [x] [ ] [x]
+        # If folder and repre is selected
+        if selected_folder_id and selected_repre:
+            return self._get_current_output_repre_ids_xox(
+                selected_folder_id, selected_repre
+            )
+
+        # [x] [ ] [ ]
+        # If only folder is selected
+        if selected_folder_id:
+            return
self._get_current_output_repre_ids_xoo(selected_folder_id) + + # [ ] [x] [x] + if selected_product_name and selected_repre: + return self._get_current_output_repre_ids_oxx( + selected_product_name, selected_repre + ) + + # [ ] [x] [ ] + if selected_product_name: + return self._get_current_output_repre_ids_oxo( + selected_product_name + ) + + # [ ] [ ] [x] + return self._get_current_output_repre_ids_oox(selected_repre) + + def _get_current_output_repre_ids_xxx( + self, folder_id, selected_product_name, selected_repre + ): + project_name = self._project_name + product_doc = get_subset_by_name( + project_name, + selected_product_name, + folder_id, + fields=["_id"] + ) + + product_id = product_doc["_id"] + last_versions_by_product_id = self.find_last_versions([product_id]) + version_doc = last_versions_by_product_id.get(product_id) + if not version_doc: + return [] + + repre_docs = get_representations( + project_name, + version_ids=[version_doc["_id"]], + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xxo(self, folder_id, product_name): + project_name = self._project_name + product_doc = get_subset_by_name( + project_name, + product_name, + folder_id, + fields=["_id"] + ) + if not product_doc: + return [] + + repre_names = set() + for repre_doc in self._repre_docs_by_id.values(): + repre_names.add(repre_doc["name"]) + + # TODO where to take version ids? + version_ids = [] + repre_docs = get_representations( + project_name, + representation_names=repre_names, + version_ids=version_ids, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xox(self, folder_id, selected_repre): + product_names = { + product_doc["name"] + for product_doc in self._product_docs_by_id.values() + } + + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=[folder_id], + subset_names=product_names, + fields=["_id", "name"] + ) + product_name_by_id = { + product_doc["_id"]: product_doc["name"] + for product_doc in product_docs + } + product_ids = list(product_name_by_id.keys()) + last_versions_by_product_id = self.find_last_versions(product_ids) + last_version_id_by_product_name = {} + for product_id, last_version in last_versions_by_product_id.items(): + product_name = product_name_by_id[product_id] + last_version_id_by_product_name[product_name] = ( + last_version["_id"] + ) + + repre_docs = get_representations( + project_name, + version_ids=last_version_id_by_product_name.values(), + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xoo(self, folder_id): + project_name = self._project_name + repres_by_product_name = collections.defaultdict(set) + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + product_name = product_doc["name"] + repres_by_product_name[product_name].add(repre_doc["name"]) + + product_docs = list(get_subsets( + project_name, + asset_ids=[folder_id], + subset_names=repres_by_product_name.keys(), + fields=["_id", "name"] + )) + product_name_by_id = { + product_doc["_id"]: product_doc["name"] + for product_doc in product_docs + } + product_ids = list(product_name_by_id.keys()) + last_versions_by_product_id = self.find_last_versions(product_ids) + 
last_version_id_by_product_name = {} + for product_id, last_version in last_versions_by_product_id.items(): + product_name = product_name_by_id[product_id] + last_version_id_by_product_name[product_name] = ( + last_version["_id"] + ) + + repre_names_by_version_id = {} + for product_name, repre_names in repres_by_product_name.items(): + version_id = last_version_id_by_product_name.get(product_name) + # This should not happen but why to crash? + if version_id is not None: + repre_names_by_version_id[version_id] = list(repre_names) + + repre_docs = get_representations( + project_name, + names_by_version_ids=repre_names_by_version_id, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oxx( + self, product_name, selected_repre + ): + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[product_name], + fields=["_id"] + ) + product_ids = [product_doc["_id"] for product_doc in product_docs] + last_versions_by_product_id = self.find_last_versions(product_ids) + last_version_ids = [ + last_version["_id"] + for last_version in last_versions_by_product_id.values() + ] + repre_docs = get_representations( + project_name, + version_ids=last_version_ids, + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oxo(self, product_name): + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[product_name], + fields=["_id", "parent"] + ) + product_docs_by_id = { + product_doc["_id"]: product_doc + for product_doc in product_docs + } + if not product_docs: + return list() + + last_versions_by_product_id = self.find_last_versions( + product_docs_by_id.keys() + ) + + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_names_by_folder_id = collections.defaultdict(set) + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + folder_doc = self._folder_docs_by_id[product_doc["parent"]] + folder_id = folder_doc["_id"] + repre_names_by_folder_id[folder_id].add(repre_doc["name"]) + + repre_names_by_version_id = {} + for last_version_id, product_id in product_id_by_version_id.items(): + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + repre_names = repre_names_by_folder_id.get(folder_id) + if not repre_names: + continue + repre_names_by_version_id[last_version_id] = repre_names + + repre_docs = get_representations( + project_name, + names_by_version_ids=repre_names_by_version_id, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oox(self, selected_repre): + project_name = self._project_name + repre_docs = get_representations( + project_name, + representation_names=[selected_repre], + version_ids=self._version_docs_by_id.keys(), + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_product_box_values(self): + project_name = self._project_name + selected_folder_id = self._folders_field.get_selected_folder_id() + if selected_folder_id: + 
folder_ids = [selected_folder_id]
+        else:
+            folder_ids = list(self._folder_docs_by_id.keys())
+
+        product_docs = get_subsets(
+            project_name,
+            asset_ids=folder_ids,
+            fields=["parent", "name"]
+        )
+
+        product_names_by_parent_id = collections.defaultdict(set)
+        for product_doc in product_docs:
+            product_names_by_parent_id[product_doc["parent"]].add(
+                product_doc["name"]
+            )
+
+        possible_product_names = None
+        for product_names in product_names_by_parent_id.values():
+            if possible_product_names is None:
+                possible_product_names = product_names
+            else:
+                possible_product_names = possible_product_names.intersection(
+                    product_names)
+
+            if not possible_product_names:
+                break
+
+        if not possible_product_names:
+            return []
+        return list(possible_product_names)
+
+    def _representations_box_values(self):
+        # NOTE hero versions are not used because it is expected that
+        # a hero version has the same representations as the latest version
+        project_name = self._project_name
+        selected_folder_id = self._folders_field.get_selected_folder_id()
+        selected_product_name = self._products_combox.currentText()
+
+        # If nothing is selected
+        # [ ] [ ] [?]
+        if not selected_folder_id and not selected_product_name:
+            # Find all representations of selection's products
+            possible_repres = get_representations(
+                project_name,
+                version_ids=self._version_docs_by_id.keys(),
+                fields=["parent", "name"]
+            )
+
+            possible_repres_by_parent = collections.defaultdict(set)
+            for repre in possible_repres:
+                possible_repres_by_parent[repre["parent"]].add(repre["name"])
+
+            output_repres = None
+            for repre_names in possible_repres_by_parent.values():
+                if output_repres is None:
+                    output_repres = repre_names
+                else:
+                    output_repres = (output_repres & repre_names)
+
+                if not output_repres:
+                    break
+
+            return list(output_repres or list())
+
+        # [x] [x] [?]
+        if selected_folder_id and selected_product_name:
+            product_doc = get_subset_by_name(
+                project_name,
+                selected_product_name,
+                selected_folder_id,
+                fields=["_id"]
+            )
+
+            product_id = product_doc["_id"]
+            last_versions_by_product_id = self.find_last_versions([product_id])
+            version_doc = last_versions_by_product_id.get(product_id)
+            repre_docs = get_representations(
+                project_name,
+                version_ids=[version_doc["_id"]],
+                fields=["name"]
+            )
+            return [
+                repre_doc["name"]
+                for repre_doc in repre_docs
+            ]
+
+        # [x] [ ] [?]
+ # If only folder is selected + if selected_folder_id: + # Filter products by names from content + product_names = { + product_doc["name"] + for product_doc in self._product_docs_by_id.values() + } + + product_docs = get_subsets( + project_name, + asset_ids=[selected_folder_id], + subset_names=product_names, + fields=["_id"] + ) + product_ids = { + product_doc["_id"] + for product_doc in product_docs + } + if not product_ids: + return list() + + last_versions_by_product_id = self.find_last_versions(product_ids) + product_id_by_version_id = {} + for product_id, last_version in ( + last_versions_by_product_id.items() + ): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_docs = list(get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + )) + if not repre_docs: + return list() + + repre_names_by_parent = collections.defaultdict(set) + for repre_doc in repre_docs: + repre_names_by_parent[repre_doc["parent"]].add( + repre_doc["name"] + ) + + available_repres = None + for repre_names in repre_names_by_parent.values(): + if available_repres is None: + available_repres = repre_names + continue + + available_repres = available_repres.intersection(repre_names) + + return list(available_repres) + + # [ ] [x] [?] + product_docs = list(get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[selected_product_name], + fields=["_id", "parent"] + )) + if not product_docs: + return list() + + product_docs_by_id = { + product_doc["_id"]: product_doc + for product_doc in product_docs + } + last_versions_by_product_id = self.find_last_versions( + product_docs_by_id.keys() + ) + + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_docs = list( + get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + ) + ) + if not repre_docs: + return list() + + repre_names_by_folder_id = collections.defaultdict(set) + for repre_doc in repre_docs: + product_id = product_id_by_version_id[repre_doc["parent"]] + folder_id = product_docs_by_id[product_id]["parent"] + repre_names_by_folder_id[folder_id].add(repre_doc["name"]) + + available_repres = None + for repre_names in repre_names_by_folder_id.values(): + if available_repres is None: + available_repres = repre_names + continue + + available_repres = available_repres.intersection(repre_names) + + return list(available_repres) + + def _is_folder_ok(self, validation_state): + selected_folder_id = self._folders_field.get_selected_folder_id() + if ( + selected_folder_id is None + and (self._missing_docs or self._inactive_folder_ids) + ): + validation_state.folder_ok = False + + def _is_product_ok(self, validation_state): + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.get_valid_value() + + # [?] [x] [?] + # If product is selected then must be ok + if selected_product_name is not None: + return + + # [ ] [ ] [?] + if selected_folder_id is None: + # If there were archived products and folder is not selected + if self._inactive_product_ids: + validation_state.product_ok = False + return + + # [x] [ ] [?] 
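+        # Folder is selected but product is not: every product name used
+        # by the scene content must exist under the selected folder.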
+        project_name = self._project_name
+        product_docs = get_subsets(
+            project_name, asset_ids=[selected_folder_id], fields=["name"]
+        )
+
+        product_names = set(
+            product_doc["name"]
+            for product_doc in product_docs
+        )
+
+        for product_doc in self._product_docs_by_id.values():
+            if product_doc["name"] not in product_names:
+                validation_state.product_ok = False
+                break
+
+    def _is_repre_ok(self, validation_state):
+        selected_folder_id = self._folders_field.get_selected_folder_id()
+        selected_product_name = self._products_combox.get_valid_value()
+        selected_repre = self._representations_box.get_valid_value()
+
+        # [?] [?] [x]
+        # If repre is selected then everything must be ok
+        if selected_repre is not None:
+            return
+
+        # [ ] [ ] [ ]
+        if selected_folder_id is None and selected_product_name is None:
+            if (
+                self._inactive_repre_ids
+                or self._missing_version_ids
+                or self._missing_repre_ids
+            ):
+                validation_state.repre_ok = False
+            return
+
+        # [x] [x] [ ]
+        project_name = self._project_name
+        if (
+            selected_folder_id is not None
+            and selected_product_name is not None
+        ):
+            product_doc = get_subset_by_name(
+                project_name,
+                selected_product_name,
+                selected_folder_id,
+                fields=["_id"]
+            )
+            product_id = product_doc["_id"]
+            last_versions_by_product_id = self.find_last_versions([product_id])
+            last_version = last_versions_by_product_id.get(product_id)
+            if not last_version:
+                validation_state.repre_ok = False
+                return
+
+            repre_docs = get_representations(
+                project_name,
+                version_ids=[last_version["_id"]],
+                fields=["name"]
+            )
+
+            repre_names = set(
+                repre_doc["name"]
+                for repre_doc in repre_docs
+            )
+            for repre_doc in self._repre_docs_by_id.values():
+                if repre_doc["name"] not in repre_names:
+                    validation_state.repre_ok = False
+                    break
+            return
+
+        # [x] [ ] [ ]
+        if selected_folder_id is not None:
+            product_docs = list(get_subsets(
+                project_name,
+                asset_ids=[selected_folder_id],
+                fields=["_id", "name"]
+            ))
+
+            product_name_by_id = {}
+            product_ids = set()
+            for product_doc in product_docs:
+                product_id = product_doc["_id"]
+                product_ids.add(product_id)
+                product_name_by_id[product_id] = product_doc["name"]
+
+            last_versions_by_product_id = self.find_last_versions(product_ids)
+
+            product_id_by_version_id = {}
+            for product_id, last_version in (
+                last_versions_by_product_id.items()
+            ):
+                version_id = last_version["_id"]
+                product_id_by_version_id[version_id] = product_id
+
+            repre_docs = get_representations(
+                project_name,
+                version_ids=product_id_by_version_id.keys(),
+                fields=["name", "parent"]
+            )
+            repres_by_product_name = collections.defaultdict(set)
+            for repre_doc in repre_docs:
+                product_id = product_id_by_version_id[repre_doc["parent"]]
+                product_name = product_name_by_id[product_id]
+                repres_by_product_name[product_name].add(repre_doc["name"])
+
+            for repre_doc in self._repre_docs_by_id.values():
+                version_doc = self._version_docs_by_id[repre_doc["parent"]]
+                product_doc = self._product_docs_by_id[version_doc["parent"]]
+                repre_names = repres_by_product_name[product_doc["name"]]
+                if repre_doc["name"] not in repre_names:
+                    validation_state.repre_ok = False
+                    break
+            return
+
+        # [ ] [x] [ ]
+        # Product documents
+        product_docs = get_subsets(
+            project_name,
+            asset_ids=self._folder_docs_by_id.keys(),
+            subset_names=[selected_product_name],
+            fields=["_id", "name", "parent"]
+        )
+        product_docs_by_id = {}
+        for product_doc in product_docs:
+            product_docs_by_id[product_doc["_id"]] = product_doc
+
+        last_versions_by_product_id = self.find_last_versions(
product_docs_by_id.keys() + ) + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + repre_docs = get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + ) + repres_by_folder_id = collections.defaultdict(set) + for repre_doc in repre_docs: + product_id = product_id_by_version_id[repre_doc["parent"]] + folder_id = product_docs_by_id[product_id]["parent"] + repres_by_folder_id[folder_id].add(repre_doc["name"]) + + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + folder_id = product_doc["parent"] + repre_names = repres_by_folder_id[folder_id] + if repre_doc["name"] not in repre_names: + validation_state.repre_ok = False + break + + def _on_current_folder(self): + # Set initial folder as current. + folder_id = self._controller.get_current_folder_id() + if not folder_id: + return + + selected_folder_id = self._folders_field.get_selected_folder_id() + if folder_id == selected_folder_id: + return + + self._folders_field.set_selected_item(folder_id) + self._combobox_value_changed() + + def _on_accept(self): + self._trigger_switch() + + def _trigger_switch(self, loader=None): + # Use None when not a valid value or when placeholder value + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.get_valid_value() + selected_representation = self._representations_box.get_valid_value() + + project_name = self._project_name + if selected_folder_id: + folder_ids = {selected_folder_id} + else: + folder_ids = set(self._folder_docs_by_id.keys()) + + product_names = None + if selected_product_name: + product_names = [selected_product_name] + + product_docs = list(get_subsets( + project_name, + subset_names=product_names, + asset_ids=folder_ids + )) + product_ids = set() + product_docs_by_parent_and_name = collections.defaultdict(dict) + for product_doc in product_docs: + product_ids.add(product_doc["_id"]) + folder_id = product_doc["parent"] + name = product_doc["name"] + product_docs_by_parent_and_name[folder_id][name] = product_doc + + # versions + _version_docs = get_versions(project_name, subset_ids=product_ids) + version_docs = list(reversed( + sorted(_version_docs, key=lambda item: item["name"]) + )) + + hero_version_docs = list(get_hero_versions( + project_name, subset_ids=product_ids + )) + + version_ids = set() + version_docs_by_parent_id = {} + for version_doc in version_docs: + parent_id = version_doc["parent"] + if parent_id not in version_docs_by_parent_id: + version_ids.add(version_doc["_id"]) + version_docs_by_parent_id[parent_id] = version_doc + + hero_version_docs_by_parent_id = {} + for hero_version_doc in hero_version_docs: + version_ids.add(hero_version_doc["_id"]) + parent_id = hero_version_doc["parent"] + hero_version_docs_by_parent_id[parent_id] = hero_version_doc + + repre_docs = get_representations( + project_name, version_ids=version_ids + ) + repre_docs_by_parent_id_by_name = collections.defaultdict(dict) + for repre_doc in repre_docs: + parent_id = repre_doc["parent"] + name = repre_doc["name"] + repre_docs_by_parent_id_by_name[parent_id][name] = repre_doc + + for container in self._items: + self._switch_container( + container, + loader, + selected_folder_id, + selected_product_name, + 
selected_representation,
+                product_docs_by_parent_and_name,
+                version_docs_by_parent_id,
+                hero_version_docs_by_parent_id,
+                repre_docs_by_parent_id_by_name,
+            )
+
+        self.switched.emit()
+
+        self.close()
+
+    def _switch_container(
+        self,
+        container,
+        loader,
+        selected_folder_id,
+        product_name,
+        selected_representation,
+        product_docs_by_parent_and_name,
+        version_docs_by_parent_id,
+        hero_version_docs_by_parent_id,
+        repre_docs_by_parent_id_by_name,
+    ):
+        container_repre_id = container["representation"]
+        container_repre = self._repre_docs_by_id[container_repre_id]
+        container_repre_name = container_repre["name"]
+        container_version_id = container_repre["parent"]
+
+        container_version = self._version_docs_by_id[container_version_id]
+
+        container_product_id = container_version["parent"]
+        container_product = self._product_docs_by_id[container_product_id]
+
+        if selected_folder_id:
+            folder_id = selected_folder_id
+        else:
+            folder_id = container_product["parent"]
+
+        products_by_name = product_docs_by_parent_and_name[folder_id]
+        if product_name:
+            product_doc = products_by_name[product_name]
+        else:
+            product_doc = products_by_name[container_product["name"]]
+
+        repre_doc = None
+        product_id = product_doc["_id"]
+        if container_version["type"] == "hero_version":
+            hero_version = hero_version_docs_by_parent_id.get(
+                product_id
+            )
+            if hero_version:
+                _repres = repre_docs_by_parent_id_by_name.get(
+                    hero_version["_id"]
+                )
+                if selected_representation:
+                    repre_doc = _repres.get(selected_representation)
+                else:
+                    repre_doc = _repres.get(container_repre_name)
+
+        if not repre_doc:
+            version_doc = version_docs_by_parent_id[product_id]
+            version_id = version_doc["_id"]
+            repres_by_name = repre_docs_by_parent_id_by_name[version_id]
+            if selected_representation:
+                repre_doc = repres_by_name[selected_representation]
+            else:
+                repre_doc = repres_by_name[container_repre_name]
+
+        error = None
+        try:
+            switch_container(container, repre_doc, loader)
+        except (
+            LoaderSwitchNotImplementedError,
+            IncompatibleLoaderError,
+            LoaderNotFoundError,
+        ) as exc:
+            error = str(exc)
+        except Exception:
+            error = (
+                "Switch asset failed. "
+                "Search console log for more details."
+            )
+        if error is not None:
+            log.warning((
+                "Couldn't switch asset. "
+                "See traceback for more information."
+            ), exc_info=True)
+            dialog = QtWidgets.QMessageBox(self)
+            dialog.setWindowTitle("Switch asset failed")
+            dialog.setText(error)
+            dialog.exec_()
diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py b/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py
new file mode 100644
index 0000000000..699c62371a
--- /dev/null
+++ b/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py
@@ -0,0 +1,307 @@
+from qtpy import QtWidgets, QtCore
+import qtawesome
+
+from openpype.tools.utils import (
+    PlaceholderLineEdit,
+    BaseClickableFrame,
+    set_style_property,
+)
+from openpype.tools.ayon_utils.widgets import FoldersWidget
+
+NOT_SET = object()
+
+
+class ClickableLineEdit(QtWidgets.QLineEdit):
+    """QLineEdit capturing left mouse click.
+
+    Triggers `clicked` signal on mouse click.
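+
+    Mouse move and double-click events are accepted without any effect,
+    so the read-only field only reacts to a plain left click.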
+ """ + clicked = QtCore.Signal() + + def __init__(self, *args, **kwargs): + super(ClickableLineEdit, self).__init__(*args, **kwargs) + self.setReadOnly(True) + self._mouse_pressed = False + + def mousePressEvent(self, event): + if event.button() == QtCore.Qt.LeftButton: + self._mouse_pressed = True + event.accept() + + def mouseMoveEvent(self, event): + event.accept() + + def mouseReleaseEvent(self, event): + if self._mouse_pressed: + self._mouse_pressed = False + if self.rect().contains(event.pos()): + self.clicked.emit() + event.accept() + + def mouseDoubleClickEvent(self, event): + event.accept() + + +class ControllerWrap: + def __init__(self, controller): + self._controller = controller + self._selected_folder_id = None + + def emit_event(self, *args, **kwargs): + self._controller.emit_event(*args, **kwargs) + + def register_event_callback(self, *args, **kwargs): + self._controller.register_event_callback(*args, **kwargs) + + def get_current_project_name(self): + return self._controller.get_current_project_name() + + def get_folder_items(self, *args, **kwargs): + return self._controller.get_folder_items(*args, **kwargs) + + def set_selected_folder(self, folder_id): + self._selected_folder_id = folder_id + + def get_selected_folder_id(self): + return self._selected_folder_id + + +class FoldersDialog(QtWidgets.QDialog): + """Dialog to select asset for a context of instance.""" + + def __init__(self, controller, parent): + super(FoldersDialog, self).__init__(parent) + self.setWindowTitle("Select folder") + + filter_input = PlaceholderLineEdit(self) + filter_input.setPlaceholderText("Filter folders..") + + controller_wrap = ControllerWrap(controller) + folders_widget = FoldersWidget(controller_wrap, self) + folders_widget.set_deselectable(True) + + ok_btn = QtWidgets.QPushButton("OK", self) + cancel_btn = QtWidgets.QPushButton("Cancel", self) + + btns_layout = QtWidgets.QHBoxLayout() + btns_layout.addStretch(1) + btns_layout.addWidget(ok_btn) + btns_layout.addWidget(cancel_btn) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(filter_input, 0) + layout.addWidget(folders_widget, 1) + layout.addLayout(btns_layout, 0) + + folders_widget.double_clicked.connect(self._on_ok_clicked) + folders_widget.refreshed.connect(self._on_folders_refresh) + filter_input.textChanged.connect(self._on_filter_change) + ok_btn.clicked.connect(self._on_ok_clicked) + cancel_btn.clicked.connect(self._on_cancel_clicked) + + self._filter_input = filter_input + self._ok_btn = ok_btn + self._cancel_btn = cancel_btn + + self._folders_widget = folders_widget + self._controller_wrap = controller_wrap + + # Set selected folder only when user confirms the dialog + self._selected_folder_id = None + self._selected_folder_label = None + + self._folder_id_to_select = NOT_SET + + self._first_show = True + self._default_height = 500 + + def showEvent(self, event): + """Refresh asset model on show.""" + super(FoldersDialog, self).showEvent(event) + if self._first_show: + self._first_show = False + self._on_first_show() + + def refresh(self): + project_name = self._controller_wrap.get_current_project_name() + self._folders_widget.set_project_name(project_name) + + def _on_first_show(self): + center = self.rect().center() + size = self.size() + size.setHeight(self._default_height) + + self.resize(size) + new_pos = self.mapToGlobal(center) + new_pos.setX(new_pos.x() - int(self.width() / 2)) + new_pos.setY(new_pos.y() - int(self.height() / 2)) + self.move(new_pos) + + def _on_folders_refresh(self): + if 
self._folder_id_to_select is NOT_SET: + return + self._folders_widget.set_selected_folder(self._folder_id_to_select) + self._folder_id_to_select = NOT_SET + + def _on_filter_change(self, text): + """Trigger change of filter of folders.""" + + self._folders_widget.set_name_filter(text) + + def _on_cancel_clicked(self): + self.done(0) + + def _on_ok_clicked(self): + self._selected_folder_id = ( + self._folders_widget.get_selected_folder_id() + ) + self._selected_folder_label = ( + self._folders_widget.get_selected_folder_label() + ) + self.done(1) + + def set_selected_folder(self, folder_id): + """Change preselected folder before showing the dialog. + + This also resets model and clean filter. + """ + + if ( + self._folders_widget.is_refreshing + or self._folders_widget.get_project_name() is None + ): + self._folder_id_to_select = folder_id + else: + self._folders_widget.set_selected_folder(folder_id) + + def get_selected_folder_id(self): + """Get selected folder id. + + Returns: + Union[str, None]: Selected folder id or None if nothing + is selected. + """ + return self._selected_folder_id + + def get_selected_folder_label(self): + return self._selected_folder_label + + +class FoldersField(BaseClickableFrame): + """Field where asset name of selected instance/s is showed. + + Click on the field will trigger `FoldersDialog`. + """ + value_changed = QtCore.Signal() + + def __init__(self, controller, parent): + super(FoldersField, self).__init__(parent) + self.setObjectName("AssetNameInputWidget") + + # Don't use 'self' for parent! + # - this widget has specific styles + dialog = FoldersDialog(controller, parent) + + name_input = ClickableLineEdit(self) + name_input.setObjectName("AssetNameInput") + + icon = qtawesome.icon("fa.window-maximize", color="white") + icon_btn = QtWidgets.QPushButton(self) + icon_btn.setIcon(icon) + icon_btn.setObjectName("AssetNameInputButton") + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.setSpacing(0) + layout.addWidget(name_input, 1) + layout.addWidget(icon_btn, 0) + + # Make sure all widgets are vertically extended to highest widget + for widget in ( + name_input, + icon_btn + ): + w_size_policy = widget.sizePolicy() + w_size_policy.setVerticalPolicy( + QtWidgets.QSizePolicy.MinimumExpanding) + widget.setSizePolicy(w_size_policy) + + size_policy = self.sizePolicy() + size_policy.setVerticalPolicy(QtWidgets.QSizePolicy.Maximum) + self.setSizePolicy(size_policy) + + name_input.clicked.connect(self._mouse_release_callback) + icon_btn.clicked.connect(self._mouse_release_callback) + dialog.finished.connect(self._on_dialog_finish) + + self._controller = controller + self._dialog = dialog + self._name_input = name_input + self._icon_btn = icon_btn + + self._selected_folder_id = None + self._selected_folder_label = None + self._selected_items = [] + self._is_valid = True + + def refresh(self): + self._dialog.refresh() + + def is_valid(self): + """Is asset valid.""" + return self._is_valid + + def get_selected_folder_id(self): + """Selected asset names.""" + return self._selected_folder_id + + def get_selected_folder_label(self): + return self._selected_folder_label + + def set_text(self, text): + """Set text in text field. + + Does not change selected items (assets). + """ + self._name_input.setText(text) + + def set_valid(self, is_valid): + state = "" + if not is_valid: + state = "invalid" + self._set_state_property(state) + + def set_selected_item(self, folder_id=None, folder_label=None): + """Set folder for selection. 
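+
+        When 'folder_label' is not passed, the label is resolved from
+        the controller with 'get_folder_label'.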
+ + Args: + folder_id (Optional[str]): Folder id to select. + folder_label (Optional[str]): Folder label. + """ + + self._selected_folder_id = folder_id + if not folder_id: + folder_label = None + elif folder_id and not folder_label: + folder_label = self._controller.get_folder_label(folder_id) + self._selected_folder_label = folder_label + self.set_text(folder_label if folder_label else "") + + def _on_dialog_finish(self, result): + if not result: + return + + folder_id = self._dialog.get_selected_folder_id() + folder_label = self._dialog.get_selected_folder_label() + self.set_selected_item(folder_id, folder_label) + + self.value_changed.emit() + + def _mouse_release_callback(self): + self._dialog.set_selected_folder(self._selected_folder_id) + self._dialog.open() + + def _set_state_property(self, state): + set_style_property(self, "state", state) + set_style_property(self._name_input, "state", state) + set_style_property(self._icon_btn, "state", state) diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py b/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py new file mode 100644 index 0000000000..50a49e0ce1 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py @@ -0,0 +1,94 @@ +from qtpy import QtWidgets, QtCore + +from openpype import style + + +class ButtonWithMenu(QtWidgets.QToolButton): + def __init__(self, parent=None): + super(ButtonWithMenu, self).__init__(parent) + + self.setObjectName("ButtonWithMenu") + + self.setPopupMode(QtWidgets.QToolButton.MenuButtonPopup) + menu = QtWidgets.QMenu(self) + + self.setMenu(menu) + + self._menu = menu + self._actions = [] + + def menu(self): + return self._menu + + def clear_actions(self): + if self._menu is not None: + self._menu.clear() + self._actions = [] + + def add_action(self, action): + self._actions.append(action) + self._menu.addAction(action) + + def _on_action_trigger(self): + action = self.sender() + if action not in self._actions: + return + action.trigger() + + +class SearchComboBox(QtWidgets.QComboBox): + """Searchable ComboBox with empty placeholder value as first value""" + + def __init__(self, parent): + super(SearchComboBox, self).__init__(parent) + + self.setEditable(True) + self.setInsertPolicy(QtWidgets.QComboBox.NoInsert) + + combobox_delegate = QtWidgets.QStyledItemDelegate(self) + self.setItemDelegate(combobox_delegate) + + completer = self.completer() + completer.setCompletionMode( + QtWidgets.QCompleter.PopupCompletion + ) + completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive) + + completer_view = completer.popup() + completer_view.setObjectName("CompleterView") + completer_delegate = QtWidgets.QStyledItemDelegate(completer_view) + completer_view.setItemDelegate(completer_delegate) + completer_view.setStyleSheet(style.load_stylesheet()) + + self._combobox_delegate = combobox_delegate + + self._completer_delegate = completer_delegate + self._completer = completer + + def set_placeholder(self, placeholder): + self.lineEdit().setPlaceholderText(placeholder) + + def populate(self, items): + self.clear() + self.addItems([""]) # ensure first item is placeholder + self.addItems(items) + + def get_valid_value(self): + """Return the current text if it's a valid value else None + + Note: The empty placeholder value is valid and returns as "" + + """ + + text = self.currentText() + lookup = set(self.itemText(i) for i in range(self.count())) + if text not in lookup: + return None + + return text or None + + def set_valid_value(self, value): + """Try to locate 'value' and 
pre-select it in dropdown.""" + index = self.findText(value) + if index > -1: + self.setCurrentIndex(index) diff --git a/openpype/tools/ayon_sceneinventory/view.py b/openpype/tools/ayon_sceneinventory/view.py new file mode 100644 index 0000000000..039b498b1b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/view.py @@ -0,0 +1,825 @@ +import uuid +import collections +import logging +import itertools +from functools import partial + +from qtpy import QtWidgets, QtCore +import qtawesome + +from openpype.client import ( + get_version_by_id, + get_versions, + get_hero_versions, + get_representation_by_id, + get_representations, +) +from openpype import style +from openpype.pipeline import ( + HeroVersionType, + update_container, + remove_container, + discover_inventory_actions, +) +from openpype.tools.utils.lib import ( + iter_model_rows, + format_version +) + +from .switch_dialog import SwitchAssetDialog +from .model import InventoryModel + + +DEFAULT_COLOR = "#fb9c15" + +log = logging.getLogger("SceneInventory") + + +class SceneInventoryView(QtWidgets.QTreeView): + data_changed = QtCore.Signal() + hierarchy_view_changed = QtCore.Signal(bool) + + def __init__(self, controller, parent): + super(SceneInventoryView, self).__init__(parent=parent) + + # view settings + self.setIndentation(12) + self.setAlternatingRowColors(True) + self.setSortingEnabled(True) + self.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) + self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + self.customContextMenuRequested.connect(self._show_right_mouse_menu) + + self._hierarchy_view = False + self._selected = None + + self._controller = controller + + def _set_hierarchy_view(self, enabled): + if enabled == self._hierarchy_view: + return + self._hierarchy_view = enabled + self.hierarchy_view_changed.emit(enabled) + + def _enter_hierarchy(self, items): + self._selected = set(i["objectName"] for i in items) + self._set_hierarchy_view(True) + self.data_changed.emit() + self.expandToDepth(1) + self.setStyleSheet(""" + QTreeView { + border-color: #fb9c15; + } + """) + + def _leave_hierarchy(self): + self._set_hierarchy_view(False) + self.data_changed.emit() + self.setStyleSheet("QTreeView {}") + + def _build_item_menu_for_selection(self, items, menu): + # Exclude items that are "NOT FOUND" since setting versions, updating + # and removal won't work for those items. 
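+        # Each 'item' is a loaded container dict. A minimal sketch of the
+        # shape this code relies on (values here are hypothetical):
+        #   {"objectName": "model_GRP", "representation": "<uuid>",
+        #    "isNotFound": False}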
+ items = [item for item in items if not item.get("isNotFound")] + if not items: + return + + # An item might not have a representation, for example when an item + # is listed as "NOT FOUND" + repre_ids = set() + for item in items: + repre_id = item["representation"] + try: + uuid.UUID(repre_id) + repre_ids.add(repre_id) + except ValueError: + pass + + project_name = self._controller.get_current_project_name() + repre_docs = get_representations( + project_name, representation_ids=repre_ids, fields=["parent"] + ) + + version_ids = { + repre_doc["parent"] + for repre_doc in repre_docs + } + + loaded_versions = get_versions( + project_name, version_ids=version_ids, hero=True + ) + + loaded_hero_versions = [] + versions_by_parent_id = collections.defaultdict(list) + subset_ids = set() + for version in loaded_versions: + if version["type"] == "hero_version": + loaded_hero_versions.append(version) + else: + parent_id = version["parent"] + versions_by_parent_id[parent_id].append(version) + subset_ids.add(parent_id) + + all_versions = get_versions( + project_name, subset_ids=subset_ids, hero=True + ) + hero_versions = [] + versions = [] + for version in all_versions: + if version["type"] == "hero_version": + hero_versions.append(version) + else: + versions.append(version) + + has_loaded_hero_versions = len(loaded_hero_versions) > 0 + has_available_hero_version = len(hero_versions) > 0 + has_outdated = False + + for version in versions: + parent_id = version["parent"] + current_versions = versions_by_parent_id[parent_id] + for current_version in current_versions: + if current_version["name"] < version["name"]: + has_outdated = True + break + + if has_outdated: + break + + switch_to_versioned = None + if has_loaded_hero_versions: + def _on_switch_to_versioned(items): + repre_ids = { + item["representation"] + for item in items + } + + repre_docs = get_representations( + project_name, + representation_ids=repre_ids, + fields=["parent"] + ) + + version_ids = set() + version_id_by_repre_id = {} + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + repre_id = str(repre_doc["_id"]) + version_id_by_repre_id[repre_id] = version_id + version_ids.add(version_id) + + hero_versions = get_hero_versions( + project_name, + version_ids=version_ids, + fields=["version_id"] + ) + + hero_src_version_ids = set() + for hero_version in hero_versions: + version_id = hero_version["version_id"] + hero_src_version_ids.add(version_id) + hero_version_id = hero_version["_id"] + for _repre_id, current_version_id in ( + version_id_by_repre_id.items() + ): + if current_version_id == hero_version_id: + version_id_by_repre_id[_repre_id] = version_id + + version_docs = get_versions( + project_name, + version_ids=hero_src_version_ids, + fields=["name"] + ) + version_name_by_id = {} + for version_doc in version_docs: + version_name_by_id[version_doc["_id"]] = \ + version_doc["name"] + + # Specify version per item to update to + update_items = [] + update_versions = [] + for item in items: + repre_id = item["representation"] + version_id = version_id_by_repre_id.get(repre_id) + version_name = version_name_by_id.get(version_id) + if version_name is not None: + update_items.append(item) + update_versions.append(version_name) + self._update_containers(update_items, update_versions) + + update_icon = qtawesome.icon( + "fa.asterisk", + color=DEFAULT_COLOR + ) + switch_to_versioned = QtWidgets.QAction( + update_icon, + "Switch to versioned", + menu + ) + switch_to_versioned.triggered.connect( + lambda: 
_on_switch_to_versioned(items) + ) + + update_to_latest_action = None + if has_outdated or has_loaded_hero_versions: + update_icon = qtawesome.icon( + "fa.angle-double-up", + color=DEFAULT_COLOR + ) + update_to_latest_action = QtWidgets.QAction( + update_icon, + "Update to latest", + menu + ) + update_to_latest_action.triggered.connect( + lambda: self._update_containers(items, version=-1) + ) + + change_to_hero = None + if has_available_hero_version: + # TODO change icon + change_icon = qtawesome.icon( + "fa.asterisk", + color="#00b359" + ) + change_to_hero = QtWidgets.QAction( + change_icon, + "Change to hero", + menu + ) + change_to_hero.triggered.connect( + lambda: self._update_containers(items, + version=HeroVersionType(-1)) + ) + + # set version + set_version_icon = qtawesome.icon("fa.hashtag", color=DEFAULT_COLOR) + set_version_action = QtWidgets.QAction( + set_version_icon, + "Set version", + menu + ) + set_version_action.triggered.connect( + lambda: self._show_version_dialog(items)) + + # switch folder + switch_folder_icon = qtawesome.icon("fa.sitemap", color=DEFAULT_COLOR) + switch_folder_action = QtWidgets.QAction( + switch_folder_icon, + "Switch Folder", + menu + ) + switch_folder_action.triggered.connect( + lambda: self._show_switch_dialog(items)) + + # remove + remove_icon = qtawesome.icon("fa.remove", color=DEFAULT_COLOR) + remove_action = QtWidgets.QAction(remove_icon, "Remove items", menu) + remove_action.triggered.connect( + lambda: self._show_remove_warning_dialog(items)) + + # add the actions + if switch_to_versioned: + menu.addAction(switch_to_versioned) + + if update_to_latest_action: + menu.addAction(update_to_latest_action) + + if change_to_hero: + menu.addAction(change_to_hero) + + menu.addAction(set_version_action) + menu.addAction(switch_folder_action) + + menu.addSeparator() + + menu.addAction(remove_action) + + self._handle_sync_server(menu, repre_ids) + + def _handle_sync_server(self, menu, repre_ids): + """Adds actions for download/upload when SyncServer is enabled + + Args: + menu (OptionMenu) + repre_ids (list) of object_ids + + Returns: + (OptionMenu) + """ + + if not self._controller.is_sync_server_enabled(): + return + + menu.addSeparator() + + download_icon = qtawesome.icon("fa.download", color=DEFAULT_COLOR) + download_active_action = QtWidgets.QAction( + download_icon, + "Download", + menu + ) + download_active_action.triggered.connect( + lambda: self._add_sites(repre_ids, "active_site")) + + upload_icon = qtawesome.icon("fa.upload", color=DEFAULT_COLOR) + upload_remote_action = QtWidgets.QAction( + upload_icon, + "Upload", + menu + ) + upload_remote_action.triggered.connect( + lambda: self._add_sites(repre_ids, "remote_site")) + + menu.addAction(download_active_action) + menu.addAction(upload_remote_action) + + def _add_sites(self, repre_ids, site_type): + """(Re)sync all 'repre_ids' to specific site. + + It checks if opposite site has fully available content to limit + accidents. (ReSync active when no remote >> losing active content) + + Args: + repre_ids (list) + site_type (Literal[active_site, remote_site]): Site type. 
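+
+        Emits 'data_changed' after the resync is triggered so the view
+        can refresh.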
+ """ + + self._controller.resync_representations(repre_ids, site_type) + + self.data_changed.emit() + + def _build_item_menu(self, items=None): + """Create menu for the selected items""" + + if not items: + items = [] + + menu = QtWidgets.QMenu(self) + + # add the actions + self._build_item_menu_for_selection(items, menu) + + # These two actions should be able to work without selection + # expand all items + expandall_action = QtWidgets.QAction(menu, text="Expand all items") + expandall_action.triggered.connect(self.expandAll) + + # collapse all items + collapse_action = QtWidgets.QAction(menu, text="Collapse all items") + collapse_action.triggered.connect(self.collapseAll) + + menu.addAction(expandall_action) + menu.addAction(collapse_action) + + custom_actions = self._get_custom_actions(containers=items) + if custom_actions: + submenu = QtWidgets.QMenu("Actions", self) + for action in custom_actions: + color = action.color or DEFAULT_COLOR + icon = qtawesome.icon("fa.%s" % action.icon, color=color) + action_item = QtWidgets.QAction(icon, action.label, submenu) + action_item.triggered.connect( + partial(self._process_custom_action, action, items)) + + submenu.addAction(action_item) + + menu.addMenu(submenu) + + # go back to flat view + back_to_flat_action = None + if self._hierarchy_view: + back_to_flat_icon = qtawesome.icon("fa.list", color=DEFAULT_COLOR) + back_to_flat_action = QtWidgets.QAction( + back_to_flat_icon, + "Back to Full-View", + menu + ) + back_to_flat_action.triggered.connect(self._leave_hierarchy) + + # send items to hierarchy view + enter_hierarchy_icon = qtawesome.icon("fa.indent", color="#d8d8d8") + enter_hierarchy_action = QtWidgets.QAction( + enter_hierarchy_icon, + "Cherry-Pick (Hierarchy)", + menu + ) + enter_hierarchy_action.triggered.connect( + lambda: self._enter_hierarchy(items)) + + if items: + menu.addAction(enter_hierarchy_action) + + if back_to_flat_action is not None: + menu.addAction(back_to_flat_action) + + return menu + + def _get_custom_actions(self, containers): + """Get the registered Inventory Actions + + Args: + containers(list): collection of containers + + Returns: + list: collection of filter and initialized actions + """ + + def sorter(Plugin): + """Sort based on order attribute of the plugin""" + return Plugin.order + + # Fedd an empty dict if no selection, this will ensure the compat + # lookup always work, so plugin can interact with Scene Inventory + # reversely. + containers = containers or [dict()] + + # Check which action will be available in the menu + Plugins = discover_inventory_actions() + compatible = [p() for p in Plugins if + any(p.is_compatible(c) for c in containers)] + + return sorted(compatible, key=sorter) + + def _process_custom_action(self, action, containers): + """Run action and if results are returned positive update the view + + If the result is list or dict, will select view items by the result. 
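+
+        A dict result is expected to provide 'objectNames' and 'options'
+        keys, for example {"objectNames": ["model_GRP"], "options":
+        {"mode": "toggle"}} (values here are only illustrative).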
+ + Args: + action (InventoryAction): Inventory Action instance + containers (list): Data of currently selected items + + Returns: + None + """ + + result = action.process(containers) + if result: + self.data_changed.emit() + + if isinstance(result, (list, set)): + self._select_items_by_action(result) + + if isinstance(result, dict): + self._select_items_by_action( + result["objectNames"], result["options"] + ) + + def _select_items_by_action(self, object_names, options=None): + """Select view items by the result of action + + Args: + object_names (list or set): A list/set of container object name + options (dict): GUI operation options. + + Returns: + None + + """ + options = options or dict() + + if options.get("clear", True): + self.clearSelection() + + object_names = set(object_names) + if ( + self._hierarchy_view + and not self._selected.issuperset(object_names) + ): + # If any container not in current cherry-picked view, update + # view before selecting them. + self._selected.update(object_names) + self.data_changed.emit() + + model = self.model() + selection_model = self.selectionModel() + + select_mode = { + "select": QtCore.QItemSelectionModel.Select, + "deselect": QtCore.QItemSelectionModel.Deselect, + "toggle": QtCore.QItemSelectionModel.Toggle, + }[options.get("mode", "select")] + + for index in iter_model_rows(model, 0): + item = index.data(InventoryModel.ItemRole) + if item.get("isGroupNode"): + continue + + name = item.get("objectName") + if name in object_names: + self.scrollTo(index) # Ensure item is visible + flags = select_mode | QtCore.QItemSelectionModel.Rows + selection_model.select(index, flags) + + object_names.remove(name) + + if len(object_names) == 0: + break + + def _show_right_mouse_menu(self, pos): + """Display the menu when at the position of the item clicked""" + + globalpos = self.viewport().mapToGlobal(pos) + + if not self.selectionModel().hasSelection(): + print("No selection") + # Build menu without selection, feed an empty list + menu = self._build_item_menu() + menu.exec_(globalpos) + return + + active = self.currentIndex() # index under mouse + active = active.sibling(active.row(), 0) # get first column + + # move index under mouse + indices = self.get_indices() + if active in indices: + indices.remove(active) + + indices.append(active) + + # Extend to the sub-items + all_indices = self._extend_to_children(indices) + items = [dict(i.data(InventoryModel.ItemRole)) for i in all_indices + if i.parent().isValid()] + + if self._hierarchy_view: + # Ensure no group item + items = [n for n in items if not n.get("isGroupNode")] + + menu = self._build_item_menu(items) + menu.exec_(globalpos) + + def get_indices(self): + """Get the selected rows""" + selection_model = self.selectionModel() + return selection_model.selectedRows() + + def _extend_to_children(self, indices): + """Extend the indices to the children indices. + + Top-level indices are extended to its children indices. Sub-items + are kept as is. + + Args: + indices (list): The indices to extend. 
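+
+        Note:
+            In hierarchy view each top-level index is assumed to be a
+            group item, so its children are the actual container rows.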
+ + Returns: + list: The children indices + + """ + def get_children(i): + model = i.model() + rows = model.rowCount(parent=i) + for row in range(rows): + child = model.index(row, 0, parent=i) + yield child + + subitems = set() + for i in indices: + valid_parent = i.parent().isValid() + if valid_parent and i not in subitems: + subitems.add(i) + + if self._hierarchy_view: + # Assume this is a group item + for child in get_children(i): + subitems.add(child) + else: + # is top level item + for child in get_children(i): + subitems.add(child) + + return list(subitems) + + def _show_version_dialog(self, items): + """Create a dialog with the available versions for the selected file + + Args: + items (list): list of items to run the "set_version" for + + Returns: + None + """ + + active = items[-1] + + project_name = self._controller.get_current_project_name() + # Get available versions for active representation + repre_doc = get_representation_by_id( + project_name, + active["representation"], + fields=["parent"] + ) + + repre_version_doc = get_version_by_id( + project_name, + repre_doc["parent"], + fields=["parent"] + ) + + version_docs = list(get_versions( + project_name, + subset_ids=[repre_version_doc["parent"]], + hero=True + )) + hero_version = None + standard_versions = [] + for version_doc in version_docs: + if version_doc["type"] == "hero_version": + hero_version = version_doc + else: + standard_versions.append(version_doc) + versions = list(reversed( + sorted(standard_versions, key=lambda item: item["name"]) + )) + if hero_version: + _version_id = hero_version["version_id"] + for _version in versions: + if _version["_id"] != _version_id: + continue + + hero_version["name"] = HeroVersionType( + _version["name"] + ) + hero_version["data"] = _version["data"] + break + + # Get index among the listed versions + current_item = None + current_version = active["version"] + if isinstance(current_version, HeroVersionType): + current_item = hero_version + else: + for version in versions: + if version["name"] == current_version: + current_item = version + break + + all_versions = [] + if hero_version: + all_versions.append(hero_version) + all_versions.extend(versions) + + if current_item: + index = all_versions.index(current_item) + else: + index = 0 + + versions_by_label = dict() + labels = [] + for version in all_versions: + is_hero = version["type"] == "hero_version" + label = format_version(version["name"], is_hero) + labels.append(label) + versions_by_label[label] = version["name"] + + label, state = QtWidgets.QInputDialog.getItem( + self, + "Set version..", + "Set version number to", + labels, + current=index, + editable=False + ) + if not state: + return + + if label: + version = versions_by_label[label] + self._update_containers(items, version) + + def _show_switch_dialog(self, items): + """Display Switch dialog""" + dialog = SwitchAssetDialog(self._controller, self, items) + dialog.switched.connect(self.data_changed.emit) + dialog.show() + + def _show_remove_warning_dialog(self, items): + """Prompt a dialog to inform the user the action will remove items""" + + accept = QtWidgets.QMessageBox.Ok + buttons = accept | QtWidgets.QMessageBox.Cancel + + state = QtWidgets.QMessageBox.question( + self, + "Are you sure?", + "Are you sure you want to remove {} item(s)".format(len(items)), + buttons=buttons, + defaultButton=accept + ) + + if state != accept: + return + + for item in items: + remove_container(item) + self.data_changed.emit() + + def _show_version_error_dialog(self, version, items): + 
"""Shows QMessageBox when version switch doesn't work + + Args: + version: str or int or None + """ + if version == -1: + version_str = "latest" + elif isinstance(version, HeroVersionType): + version_str = "hero" + elif isinstance(version, int): + version_str = "v{:03d}".format(version) + else: + version_str = version + + dialog = QtWidgets.QMessageBox(self) + dialog.setIcon(QtWidgets.QMessageBox.Warning) + dialog.setStyleSheet(style.load_stylesheet()) + dialog.setWindowTitle("Update failed") + + switch_btn = dialog.addButton( + "Switch Folder", + QtWidgets.QMessageBox.ActionRole + ) + switch_btn.clicked.connect(lambda: self._show_switch_dialog(items)) + + dialog.addButton(QtWidgets.QMessageBox.Cancel) + + msg = ( + "Version update to '{}' failed as representation doesn't exist." + "\n\nPlease update to version with a valid representation" + " OR \n use 'Switch Folder' button to change folder." + ).format(version_str) + dialog.setText(msg) + dialog.exec_() + + def update_all(self): + """Update all items that are currently 'outdated' in the view""" + # Get the source model through the proxy model + model = self.model().sourceModel() + + # Get all items from outdated groups + outdated_items = [] + for index in iter_model_rows(model, + column=0, + include_root=False): + item = index.data(model.ItemRole) + + if not item.get("isGroupNode"): + continue + + # Only the group nodes contain the "highest_version" data and as + # such we find only the groups and take its children. + if not model.outdated(item): + continue + + # Collect all children which we want to update + children = item.children() + outdated_items.extend(children) + + if not outdated_items: + log.info("Nothing to update.") + return + + # Trigger update to latest + self._update_containers(outdated_items, version=-1) + + def _update_containers(self, items, version): + """Helper to update items to given version (or version per item) + + If at least one item is specified this will always try to refresh + the inventory even if errors occurred on any of the items. + + Arguments: + items (list): Items to update + version (int or list): Version to set to. + This can be a list specifying a version for each item. + Like `update_container` version -1 sets the latest version + and HeroTypeVersion instances set the hero version. + + """ + + if isinstance(version, (list, tuple)): + # We allow a unique version to be specified per item. 
In that case
+            # the length must match with the items
+            assert len(items) == len(version), (
+                "Number of items does not match number of versions: "
+                "{} items - {} versions".format(len(items), len(version))
+            )
+            versions = version
+        else:
+            # Repeat the same version infinitely
+            versions = itertools.repeat(version)
+
+        # Update the containers to the resolved versions
+        try:
+            for item, item_version in zip(items, versions):
+                try:
+                    update_container(item, item_version)
+                except AssertionError:
+                    self._show_version_error_dialog(item_version, [item])
+                    log.warning("Update failed", exc_info=True)
+        finally:
+            # Always update the scene inventory view, even if errors occurred
+            self.data_changed.emit()
diff --git a/openpype/tools/ayon_sceneinventory/window.py b/openpype/tools/ayon_sceneinventory/window.py
new file mode 100644
index 0000000000..427bf4c50d
--- /dev/null
+++ b/openpype/tools/ayon_sceneinventory/window.py
@@ -0,0 +1,200 @@
+from qtpy import QtWidgets, QtCore, QtGui
+import qtawesome
+
+from openpype import style, resources
+from openpype.tools.utils.delegates import VersionDelegate
+from openpype.tools.utils.lib import (
+    preserve_expanded_rows,
+    preserve_selection,
+)
+from openpype.tools.ayon_sceneinventory import SceneInventoryController
+
+from .model import (
+    InventoryModel,
+    FilterProxyModel
+)
+from .view import SceneInventoryView
+
+
+class ControllerVersionDelegate(VersionDelegate):
+    """Version delegate that uses controller to get project.
+
+    The original VersionDelegate uses an 'AvalonMongoDB' object instead.
+    Despite the variable name, the controller is stored in the '_dbcon'
+    attribute.
+    """
+
+    def get_project_name(self):
+        return self._dbcon.get_current_project_name()
+
+
+class SceneInventoryWindow(QtWidgets.QDialog):
+    """Scene Inventory window"""
+
+    def __init__(self, controller=None, parent=None):
+        super(SceneInventoryWindow, self).__init__(parent)
+
+        if controller is None:
+            controller = SceneInventoryController()
+
+        project_name = controller.get_current_project_name()
+        icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
+        self.setWindowIcon(icon)
+        self.setWindowTitle("Scene Inventory - {}".format(project_name))
+        self.setObjectName("SceneInventory")
+
+        self.resize(1100, 480)
+
+        # region control
+
+        filter_label = QtWidgets.QLabel("Search", self)
+        text_filter = QtWidgets.QLineEdit(self)
+
+        outdated_only_checkbox = QtWidgets.QCheckBox(
+            "Filter to outdated", self
+        )
+        outdated_only_checkbox.setToolTip("Show outdated files only")
+        outdated_only_checkbox.setChecked(False)
+
+        icon = qtawesome.icon("fa.arrow-up", color="white")
+        update_all_button = QtWidgets.QPushButton(self)
+        update_all_button.setToolTip("Update all outdated to latest version")
+        update_all_button.setIcon(icon)
+
+        icon = qtawesome.icon("fa.refresh", color="white")
+        refresh_button = QtWidgets.QPushButton(self)
+        refresh_button.setToolTip("Refresh")
+        refresh_button.setIcon(icon)
+
+        control_layout = QtWidgets.QHBoxLayout()
+        control_layout.addWidget(filter_label)
+        control_layout.addWidget(text_filter)
+        control_layout.addWidget(outdated_only_checkbox)
+        control_layout.addWidget(update_all_button)
+        control_layout.addWidget(refresh_button)
+
+        model = InventoryModel(controller)
+        proxy = FilterProxyModel()
+        proxy.setSourceModel(model)
+        proxy.setDynamicSortFilter(True)
+        proxy.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive)
+
+        view = SceneInventoryView(controller, self)
+        view.setModel(proxy)
+
+        sync_enabled = controller.is_sync_server_enabled()
+        view.setColumnHidden(model.active_site_col, not 
sync_enabled) + view.setColumnHidden(model.remote_site_col, not sync_enabled) + + # set some nice default widths for the view + view.setColumnWidth(0, 250) # name + view.setColumnWidth(1, 55) # version + view.setColumnWidth(2, 55) # count + view.setColumnWidth(3, 150) # family + view.setColumnWidth(4, 120) # group + view.setColumnWidth(5, 150) # loader + + # apply delegates + version_delegate = ControllerVersionDelegate(controller, self) + column = model.Columns.index("version") + view.setItemDelegateForColumn(column, version_delegate) + + layout = QtWidgets.QVBoxLayout(self) + layout.addLayout(control_layout) + layout.addWidget(view) + + show_timer = QtCore.QTimer() + show_timer.setInterval(0) + show_timer.setSingleShot(False) + + # signals + show_timer.timeout.connect(self._on_show_timer) + text_filter.textChanged.connect(self._on_text_filter_change) + outdated_only_checkbox.stateChanged.connect( + self._on_outdated_state_change + ) + view.hierarchy_view_changed.connect( + self._on_hierarchy_view_change + ) + view.data_changed.connect(self._on_refresh_request) + refresh_button.clicked.connect(self._on_refresh_request) + update_all_button.clicked.connect(self._on_update_all) + + self._show_timer = show_timer + self._show_counter = 0 + self._controller = controller + self._update_all_button = update_all_button + self._outdated_only_checkbox = outdated_only_checkbox + self._view = view + self._model = model + self._proxy = proxy + self._version_delegate = version_delegate + + self._first_show = True + self._first_refresh = True + + def showEvent(self, event): + super(SceneInventoryWindow, self).showEvent(event) + if self._first_show: + self._first_show = False + self.setStyleSheet(style.load_stylesheet()) + + self._show_counter = 0 + self._show_timer.start() + + def keyPressEvent(self, event): + """Custom keyPressEvent. + + Override keyPressEvent to do nothing so that Maya's panels won't + take focus when pressing "SHIFT" whilst mouse is over viewport or + outliner. This way users don't accidentally perform Maya commands + whilst trying to name an instance. 
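+
+        Not calling the base implementation simply swallows the event
+        instead of letting the host application handle it.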
+ + """ + + def _on_refresh_request(self): + """Signal callback to trigger 'refresh' without any arguments.""" + + self.refresh() + + def refresh(self, containers=None): + self._first_refresh = False + self._controller.reset() + with preserve_expanded_rows( + tree_view=self._view, + role=self._model.UniqueRole + ): + with preserve_selection( + tree_view=self._view, + role=self._model.UniqueRole, + current_index=False + ): + kwargs = {"containers": containers} + # TODO do not touch view's inner attribute + if self._view._hierarchy_view: + kwargs["selected"] = self._view._selected + self._model.refresh(**kwargs) + + def _on_show_timer(self): + if self._show_counter < 3: + self._show_counter += 1 + return + self._show_timer.stop() + self.refresh() + + def _on_hierarchy_view_change(self, enabled): + self._proxy.set_hierarchy_view(enabled) + self._model.set_hierarchy_view(enabled) + + def _on_text_filter_change(self, text_filter): + if hasattr(self._proxy, "setFilterRegExp"): + self._proxy.setFilterRegExp(text_filter) + else: + self._proxy.setFilterRegularExpression(text_filter) + + def _on_outdated_state_change(self): + self._proxy.set_filter_outdated( + self._outdated_only_checkbox.isChecked() + ) + + def _on_update_all(self): + self._view.update_all() diff --git a/openpype/tools/ayon_utils/models/__init__.py b/openpype/tools/ayon_utils/models/__init__.py new file mode 100644 index 0000000000..69722b5e21 --- /dev/null +++ b/openpype/tools/ayon_utils/models/__init__.py @@ -0,0 +1,32 @@ +"""Backend models that can be used in controllers.""" + +from .cache import CacheItem, NestedCacheItem +from .projects import ( + ProjectItem, + ProjectsModel, + PROJECTS_MODEL_SENDER, +) +from .hierarchy import ( + FolderItem, + TaskItem, + HierarchyModel, + HIERARCHY_MODEL_SENDER, +) +from .thumbnails import ThumbnailsModel + + +__all__ = ( + "CacheItem", + "NestedCacheItem", + + "ProjectItem", + "ProjectsModel", + "PROJECTS_MODEL_SENDER", + + "FolderItem", + "TaskItem", + "HierarchyModel", + "HIERARCHY_MODEL_SENDER", + + "ThumbnailsModel", +) diff --git a/openpype/tools/ayon_utils/models/cache.py b/openpype/tools/ayon_utils/models/cache.py new file mode 100644 index 0000000000..44b97e930d --- /dev/null +++ b/openpype/tools/ayon_utils/models/cache.py @@ -0,0 +1,196 @@ +import time +import collections + +InitInfo = collections.namedtuple( + "InitInfo", + ["default_factory", "lifetime"] +) + + +def _default_factory_func(): + return None + + +class CacheItem: + """Simple cache item with lifetime and default value. + + Args: + default_factory (Optional[callable]): Function that returns default + value used on init and on reset. + lifetime (Optional[int]): Lifetime of the cache data in seconds. + """ + + def __init__(self, default_factory=None, lifetime=None): + if lifetime is None: + lifetime = 120 + self._lifetime = lifetime + self._last_update = None + if default_factory is None: + default_factory = _default_factory_func + self._default_factory = default_factory + self._data = default_factory() + + @property + def is_valid(self): + """Is cache valid to use. + + Return: + bool: True if cache is valid, False otherwise. + """ + + if self._last_update is None: + return False + + return (time.time() - self._last_update) < self._lifetime + + def set_lifetime(self, lifetime): + """Change lifetime of cache item. + + Args: + lifetime (int): Lifetime of the cache data in seconds. 
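+
+        Example:
+            >>> cache = CacheItem(lifetime=120)
+            >>> cache.set_lifetime(10)  # cached data now expires after 10s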
+ """ + + self._lifetime = lifetime + + def set_invalid(self): + """Set cache as invalid.""" + + self._last_update = None + + def reset(self): + """Set cache as invalid and reset data.""" + + self._last_update = None + self._data = self._default_factory() + + def get_data(self): + """Receive cached data. + + Returns: + Any: Any data that are cached. + """ + + return self._data + + def update_data(self, data): + self._data = data + self._last_update = time.time() + + +class NestedCacheItem: + """Helper for cached items stored in nested structure. + + Example: + >>> cache = NestedCacheItem(levels=2) + >>> cache["a"]["b"].is_valid + False + >>> cache["a"]["b"].get_data() + None + >>> cache["a"]["b"] = 1 + >>> cache["a"]["b"].is_valid + True + >>> cache["a"]["b"].get_data() + 1 + >>> cache.reset() + >>> cache["a"]["b"].is_valid + False + + Args: + levels (int): Number of nested levels where read cache is stored. + default_factory (Optional[callable]): Function that returns default + value used on init and on reset. + lifetime (Optional[int]): Lifetime of the cache data in seconds. + _init_info (Optional[InitInfo]): Private argument. Init info for + nested cache where created from parent item. + """ + + def __init__( + self, levels=1, default_factory=None, lifetime=None, _init_info=None + ): + if levels < 1: + raise ValueError("Nested levels must be greater than 0") + self._data_by_key = {} + if _init_info is None: + _init_info = InitInfo(default_factory, lifetime) + self._init_info = _init_info + self._levels = levels + + def __getitem__(self, key): + """Get cached data. + + Args: + key (str): Key of the cache item. + + Returns: + Union[NestedCacheItem, CacheItem]: Cache item. + """ + + cache = self._data_by_key.get(key) + if cache is None: + if self._levels > 1: + cache = NestedCacheItem( + levels=self._levels - 1, + _init_info=self._init_info + ) + else: + cache = CacheItem( + self._init_info.default_factory, + self._init_info.lifetime + ) + self._data_by_key[key] = cache + return cache + + def __setitem__(self, key, value): + """Update cached data. + + Args: + key (str): Key of the cache item. + value (Any): Any data that are cached. + """ + + if self._levels > 1: + raise AttributeError(( + "{} does not support '__setitem__'. Lower nested level by {}" + ).format(self.__class__.__name__, self._levels - 1)) + cache = self[key] + cache.update_data(value) + + def get(self, key): + """Get cached data. + + Args: + key (str): Key of the cache item. + + Returns: + Union[NestedCacheItem, CacheItem]: Cache item. + """ + + return self[key] + + def reset(self): + """Reset cache.""" + + self._data_by_key = {} + + def set_lifetime(self, lifetime): + """Change lifetime of all children cache items. + + Args: + lifetime (int): Lifetime of the cache data in seconds. + """ + + self._init_info.lifetime = lifetime + for cache in self._data_by_key.values(): + cache.set_lifetime(lifetime) + + @property + def is_valid(self): + """Raise reasonable error when called on wront level. + + Raises: + AttributeError: If called on nested cache item. + """ + + raise AttributeError(( + "{} does not support 'is_valid'. 
Lower nested level by '{}'" + ).format(self.__class__.__name__, self._levels)) diff --git a/openpype/tools/ayon_utils/models/hierarchy.py b/openpype/tools/ayon_utils/models/hierarchy.py new file mode 100644 index 0000000000..fc6b8e1eb7 --- /dev/null +++ b/openpype/tools/ayon_utils/models/hierarchy.py @@ -0,0 +1,500 @@ +import collections +import contextlib +from abc import ABCMeta, abstractmethod + +import ayon_api +import six + +from openpype.style import get_default_entity_icon_color + +from .cache import NestedCacheItem + +HIERARCHY_MODEL_SENDER = "hierarchy.model" + + +@six.add_metaclass(ABCMeta) +class AbstractHierarchyController: + @abstractmethod + def emit_event(self, topic, data, source): + pass + + +class FolderItem: + """Item representing folder entity on a server. + + Folder can be a child of another folder or a project. + + Args: + entity_id (str): Folder id. + parent_id (Union[str, None]): Parent folder id. If 'None' then project + is parent. + name (str): Name of folder. + path (str): Folder path. + folder_type (str): Type of folder. + label (Union[str, None]): Folder label. + icon (Union[dict[str, Any], None]): Icon definition. + """ + + def __init__( + self, entity_id, parent_id, name, path, folder_type, label, icon + ): + self.entity_id = entity_id + self.parent_id = parent_id + self.name = name + self.path = path + self.folder_type = folder_type + self.label = label or name + if not icon: + icon = { + "type": "awesome-font", + "name": "fa.folder", + "color": get_default_entity_icon_color() + } + self.icon = icon + + def to_data(self): + """Converts folder item to data. + + Returns: + dict[str, Any]: Folder item data. + """ + + return { + "entity_id": self.entity_id, + "parent_id": self.parent_id, + "name": self.name, + "path": self.path, + "folder_type": self.folder_type, + "label": self.label, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-creates folder item from data. + + Args: + data (dict[str, Any]): Folder item data. + + Returns: + FolderItem: Folder item. + """ + + return cls(**data) + + +class TaskItem: + """Task item representing task entity on a server. + + Task is child of a folder. + + Task item has label that is used for display in UI. The label is by + default using task name and type. + + Args: + task_id (str): Task id. + name (str): Name of task. + task_type (str): Type of task. + parent_id (str): Parent folder id. + icon (Union[dict[str, Any], None]): Icon definitions. + """ + + def __init__( + self, task_id, name, task_type, parent_id, icon + ): + self.task_id = task_id + self.name = name + self.task_type = task_type + self.parent_id = parent_id + if icon is None: + icon = { + "type": "awesome-font", + "name": "fa.male", + "color": get_default_entity_icon_color() + } + self.icon = icon + + self._label = None + + @property + def id(self): + """Alias for task_id. + + Returns: + str: Task id. + """ + + return self.task_id + + @property + def label(self): + """Label of task item for UI. + + Returns: + str: Label of task item. + """ + + if self._label is None: + self._label = "{} ({})".format(self.name, self.task_type) + return self._label + + def to_data(self): + """Converts task item to data. + + Returns: + dict[str, Any]: Task item data. + """ + + return { + "task_id": self.task_id, + "name": self.name, + "parent_id": self.parent_id, + "task_type": self.task_type, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-create task item from data. + + Args: + data (dict[str, Any]): Task item data. 
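+
+        Example (round-trip of an existing 'task_item'):
+            >>> data = task_item.to_data()
+            >>> same_item = TaskItem.from_data(data)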
+ + Returns: + TaskItem: Task item. + """ + + return cls(**data) + + +def _get_task_items_from_tasks(tasks): + """ + + Returns: + TaskItem: Task item. + """ + + output = [] + for task in tasks: + folder_id = task["folderId"] + output.append(TaskItem( + task["id"], + task["name"], + task["type"], + folder_id, + None + )) + return output + + +def _get_folder_item_from_hierarchy_item(item): + name = item["name"] + path_parts = list(item["parents"]) + path_parts.append(name) + + return FolderItem( + item["id"], + item["parentId"], + name, + "/".join(path_parts), + item["folderType"], + item["label"], + None, + ) + + +def _get_folder_item_from_entity(entity): + name = entity["name"] + return FolderItem( + entity["id"], + entity["parentId"], + name, + entity["path"], + entity["folderType"], + entity["label"] or name, + None, + ) + + +class HierarchyModel(object): + """Model for project hierarchy items. + + Hierarchy items are folders and tasks. Folders can have as parent another + folder or project. Tasks can have as parent only folder. + """ + lifetime = 60 # A minute + + def __init__(self, controller): + self._folders_items = NestedCacheItem( + levels=1, default_factory=dict, lifetime=self.lifetime) + self._folders_by_id = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + + self._task_items = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + self._tasks_by_id = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + + self._folders_refreshing = set() + self._tasks_refreshing = set() + self._controller = controller + + def reset(self): + self._folders_items.reset() + self._folders_by_id.reset() + + self._task_items.reset() + self._tasks_by_id.reset() + + def refresh_project(self, project_name): + """Force to refresh folder items for a project. + + Args: + project_name (str): Name of project to refresh. + """ + + self._refresh_folders_cache(project_name) + + def get_folder_items(self, project_name, sender): + """Get folder items by project name. + + The folders are cached per project name. If the cache is not valid + then the folders are queried from server. + + Args: + project_name (str): Name of project where to look for folders. + sender (Union[str, None]): Who requested the folder ids. + + Returns: + dict[str, FolderItem]: Folder items by id. + """ + + if not self._folders_items[project_name].is_valid: + self._refresh_folders_cache(project_name, sender) + return self._folders_items[project_name].get_data() + + def get_folder_items_by_id(self, project_name, folder_ids): + """Get folder items by ids. + + This function will query folders if they are not in cache. But the + queried items are not added to cache back. + + Args: + project_name (str): Name of project where to look for folders. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Union[FolderItem, None]]: Folder items by id. 
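+
+        Note:
+            Ids that could not be found are kept in the output with
+            'None' value.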
+ """ + + folder_ids = set(folder_ids) + if self._folders_items[project_name].is_valid: + cache_data = self._folders_items[project_name].get_data() + return { + folder_id: cache_data.get(folder_id) + for folder_id in folder_ids + } + folders = ayon_api.get_folders( + project_name, + folder_ids=folder_ids, + fields=["id", "name", "label", "parentId", "path", "folderType"] + ) + # Make sure all folder ids are in output + output = {folder_id: None for folder_id in folder_ids} + output.update({ + folder["id"]: _get_folder_item_from_entity(folder) + for folder in folders + }) + return output + + def get_folder_item(self, project_name, folder_id): + """Get folder items by id. + + This function will query folder if they is not in cache. But the + queried items are not added to cache back. + + Args: + project_name (str): Name of project where to look for folders. + folder_id (str): Folder id. + + Returns: + Union[FolderItem, None]: Folder item. + """ + items = self.get_folder_items_by_id( + project_name, [folder_id] + ) + return items.get(folder_id) + + def get_task_items(self, project_name, folder_id, sender): + if not project_name or not folder_id: + return [] + + task_cache = self._task_items[project_name][folder_id] + if not task_cache.is_valid: + self._refresh_tasks_cache(project_name, folder_id, sender) + return task_cache.get_data() + + def get_folder_entities(self, project_name, folder_ids): + """Get folder entities by ids. + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Any]: Folder entities by id. + """ + + output = {} + folder_ids = set(folder_ids) + if not project_name or not folder_ids: + return output + + folder_ids_to_query = set() + for folder_id in folder_ids: + cache = self._folders_by_id[project_name][folder_id] + if cache.is_valid: + output[folder_id] = cache.get_data() + elif folder_id: + folder_ids_to_query.add(folder_id) + else: + output[folder_id] = None + self._query_folder_entities(project_name, folder_ids_to_query) + for folder_id in folder_ids_to_query: + cache = self._folders_by_id[project_name][folder_id] + output[folder_id] = cache.get_data() + return output + + def get_folder_entity(self, project_name, folder_id): + output = self.get_folder_entities(project_name, {folder_id}) + return output[folder_id] + + def get_task_entities(self, project_name, task_ids): + output = {} + task_ids = set(task_ids) + if not project_name or not task_ids: + return output + + task_ids_to_query = set() + for task_id in task_ids: + cache = self._tasks_by_id[project_name][task_id] + if cache.is_valid: + output[task_id] = cache.get_data() + elif task_id: + task_ids_to_query.add(task_id) + else: + output[task_id] = None + self._query_task_entities(project_name, task_ids_to_query) + for task_id in task_ids_to_query: + cache = self._tasks_by_id[project_name][task_id] + output[task_id] = cache.get_data() + return output + + def get_task_entity(self, project_name, task_id): + output = self.get_task_entities(project_name, {task_id}) + return output[task_id] + + @contextlib.contextmanager + def _folder_refresh_event_manager(self, project_name, sender): + self._folders_refreshing.add(project_name) + self._controller.emit_event( + "folders.refresh.started", + {"project_name": project_name, "sender": sender}, + HIERARCHY_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "folders.refresh.finished", + {"project_name": project_name, "sender": sender}, + HIERARCHY_MODEL_SENDER + ) + 
self._folders_refreshing.remove(project_name)
+
+    @contextlib.contextmanager
+    def _task_refresh_event_manager(
+        self, project_name, folder_id, sender
+    ):
+        self._tasks_refreshing.add(folder_id)
+        self._controller.emit_event(
+            "tasks.refresh.started",
+            {
+                "project_name": project_name,
+                "folder_id": folder_id,
+                "sender": sender,
+            },
+            HIERARCHY_MODEL_SENDER
+        )
+        try:
+            yield
+
+        finally:
+            self._controller.emit_event(
+                "tasks.refresh.finished",
+                {
+                    "project_name": project_name,
+                    "folder_id": folder_id,
+                    "sender": sender,
+                },
+                HIERARCHY_MODEL_SENDER
+            )
+            self._tasks_refreshing.discard(folder_id)
+
+    def _refresh_folders_cache(self, project_name, sender=None):
+        if project_name in self._folders_refreshing:
+            return
+
+        with self._folder_refresh_event_manager(project_name, sender):
+            folder_items = self._query_folders(project_name)
+            self._folders_items[project_name].update_data(folder_items)
+
+    def _query_folders(self, project_name):
+        hierarchy = ayon_api.get_folders_hierarchy(project_name)
+
+        folder_items = {}
+        hierarchy_queue = collections.deque(hierarchy["hierarchy"])
+        while hierarchy_queue:
+            item = hierarchy_queue.popleft()
+            folder_item = _get_folder_item_from_hierarchy_item(item)
+            folder_items[folder_item.entity_id] = folder_item
+            hierarchy_queue.extend(item["children"] or [])
+        return folder_items
+
+    def _query_folder_entities(self, project_name, folder_ids):
+        if not project_name or not folder_ids:
+            return
+        project_cache = self._folders_by_id[project_name]
+        folders = ayon_api.get_folders(project_name, folder_ids=folder_ids)
+        for folder in folders:
+            folder_id = folder["id"]
+            project_cache[folder_id].update_data(folder)
+
+    def _query_task_entities(self, project_name, task_ids):
+        if not project_name or not task_ids:
+            return
+
+        project_cache = self._tasks_by_id[project_name]
+        tasks = ayon_api.get_tasks(project_name, task_ids=task_ids)
+        for task in tasks:
+            task_id = task["id"]
+            project_cache[task_id].update_data(task)
+
+    def _refresh_tasks_cache(self, project_name, folder_id, sender=None):
+        if folder_id in self._tasks_refreshing:
+            return
+
+        with self._task_refresh_event_manager(
+            project_name, folder_id, sender
+        ):
+            task_items = self._query_tasks(project_name, folder_id)
+            self._task_items[project_name][folder_id] = task_items
+
+    def _query_tasks(self, project_name, folder_id):
+        tasks = list(ayon_api.get_tasks(
+            project_name,
+            folder_ids=[folder_id],
+            fields={"id", "name", "label", "folderId", "type"}
+        ))
+        return _get_task_items_from_tasks(tasks)
diff --git a/openpype/tools/ayon_utils/models/projects.py b/openpype/tools/ayon_utils/models/projects.py
new file mode 100644
index 0000000000..4ad53fbbfa
--- /dev/null
+++ b/openpype/tools/ayon_utils/models/projects.py
@@ -0,0 +1,147 @@
+import contextlib
+from abc import ABCMeta, abstractmethod
+
+import ayon_api
+import six
+
+from openpype.style import get_default_entity_icon_color
+
+from .cache import CacheItem
+
+PROJECTS_MODEL_SENDER = "projects.model"
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractHierarchyController:
+    @abstractmethod
+    def emit_event(self, topic, data, source):
+        pass
+
+
+class ProjectItem:
+    """Item representing project entity on a server.
+
+    Args:
+        name (str): Project name.
+        active (bool): Whether the project is active.
+        is_library (bool): Whether the project is a library project.
+        icon (Union[dict[str, Any], None]): Icon definition.
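+
+    Example (illustrative values):
+        >>> item = ProjectItem(
+        ...     "demo_Commercial", active=True, is_library=False
+        ... )
+        >>> item.icon["name"]
+        'fa.map'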
+ """ + + def __init__(self, name, active, is_library, icon=None): + self.name = name + self.active = active + self.is_library = is_library + if icon is None: + icon = { + "type": "awesome-font", + "name": "fa.book" if is_library else "fa.map", + "color": get_default_entity_icon_color(), + } + self.icon = icon + + def to_data(self): + """Converts folder item to data. + + Returns: + dict[str, Any]: Folder item data. + """ + + return { + "name": self.name, + "active": self.active, + "is_library": self.is_library, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-creates folder item from data. + + Args: + data (dict[str, Any]): Folder item data. + + Returns: + FolderItem: Folder item. + """ + + return cls(**data) + + +def _get_project_items_from_entitiy(projects): + """ + + Args: + projects (list[dict[str, Any]]): List of projects. + + Returns: + ProjectItem: Project item. + """ + + return [ + ProjectItem(project["name"], project["active"], project["library"]) + for project in projects + ] + + +class ProjectsModel(object): + def __init__(self, controller): + self._projects_cache = CacheItem(default_factory=dict) + self._project_items_by_name = {} + self._projects_by_name = {} + + self._is_refreshing = False + self._controller = controller + + def reset(self): + self._projects_cache.reset() + self._project_items_by_name = {} + self._projects_by_name = {} + + def refresh(self): + self._refresh_projects_cache() + + def get_project_items(self, sender): + if not self._projects_cache.is_valid: + self._refresh_projects_cache(sender) + return self._projects_cache.get_data() + + def get_project_entity(self, project_name): + if project_name not in self._projects_by_name: + entity = None + if project_name: + entity = ayon_api.get_project(project_name) + self._projects_by_name[project_name] = entity + return self._projects_by_name[project_name] + + @contextlib.contextmanager + def _project_refresh_event_manager(self, sender): + self._is_refreshing = True + self._controller.emit_event( + "projects.refresh.started", + {"sender": sender}, + PROJECTS_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "projects.refresh.finished", + {"sender": sender}, + PROJECTS_MODEL_SENDER + ) + self._is_refreshing = False + + def _refresh_projects_cache(self, sender=None): + if self._is_refreshing: + return + + with self._project_refresh_event_manager(sender): + project_items = self._query_projects() + self._projects_cache.update_data(project_items) + + def _query_projects(self): + projects = ayon_api.get_projects(fields=["name", "active", "library"]) + return _get_project_items_from_entitiy(projects) diff --git a/openpype/tools/ayon_utils/models/thumbnails.py b/openpype/tools/ayon_utils/models/thumbnails.py new file mode 100644 index 0000000000..40892338df --- /dev/null +++ b/openpype/tools/ayon_utils/models/thumbnails.py @@ -0,0 +1,118 @@ +import collections + +import ayon_api + +from openpype.client.server.thumbnails import AYONThumbnailCache + +from .cache import NestedCacheItem + + +class ThumbnailsModel: + entity_cache_lifetime = 240 # In seconds + + def __init__(self): + self._thumbnail_cache = AYONThumbnailCache() + self._paths_cache = collections.defaultdict(dict) + self._folders_cache = NestedCacheItem( + levels=2, lifetime=self.entity_cache_lifetime) + self._versions_cache = NestedCacheItem( + levels=2, lifetime=self.entity_cache_lifetime) + + def reset(self): + self._paths_cache = collections.defaultdict(dict) + self._folders_cache.reset() + 
self._versions_cache.reset() + + def get_thumbnail_path(self, project_name, thumbnail_id): + return self._get_thumbnail_path(project_name, thumbnail_id) + + def get_folder_thumbnail_ids(self, project_name, folder_ids): + project_cache = self._folders_cache[project_name] + output = {} + missing_cache = set() + for folder_id in folder_ids: + cache = project_cache[folder_id] + if cache.is_valid: + output[folder_id] = cache.get_data() + else: + missing_cache.add(folder_id) + self._query_folder_thumbnail_ids(project_name, missing_cache) + for folder_id in missing_cache: + cache = project_cache[folder_id] + output[folder_id] = cache.get_data() + return output + + def get_version_thumbnail_ids(self, project_name, version_ids): + project_cache = self._versions_cache[project_name] + output = {} + missing_cache = set() + for version_id in version_ids: + cache = project_cache[version_id] + if cache.is_valid: + output[version_id] = cache.get_data() + else: + missing_cache.add(version_id) + self._query_version_thumbnail_ids(project_name, missing_cache) + for version_id in missing_cache: + cache = project_cache[version_id] + output[version_id] = cache.get_data() + return output + + def _get_thumbnail_path(self, project_name, thumbnail_id): + if not thumbnail_id: + return None + + project_cache = self._paths_cache[project_name] + if thumbnail_id in project_cache: + return project_cache[thumbnail_id] + + filepath = self._thumbnail_cache.get_thumbnail_filepath( + project_name, thumbnail_id + ) + if filepath is not None: + project_cache[thumbnail_id] = filepath + return filepath + + # 'ayon_api' had a bug, public function + # 'get_thumbnail_by_id' did not return output of + # 'ServerAPI' method. + con = ayon_api.get_server_api_connection() + result = con.get_thumbnail_by_id(project_name, thumbnail_id) + if result is None: + pass + + elif result.is_valid: + filepath = self._thumbnail_cache.store_thumbnail( + project_name, + thumbnail_id, + result.content, + result.content_type + ) + project_cache[thumbnail_id] = filepath + return filepath + + def _query_folder_thumbnail_ids(self, project_name, folder_ids): + if not project_name or not folder_ids: + return + + folders = ayon_api.get_folders( + project_name, + folder_ids=folder_ids, + fields=["id", "thumbnailId"] + ) + project_cache = self._folders_cache[project_name] + for folder in folders: + project_cache[folder["id"]] = folder["thumbnailId"] + + def _query_version_thumbnail_ids(self, project_name, version_ids): + if not project_name or not version_ids: + return + + versions = ayon_api.get_versions( + project_name, + version_ids=version_ids, + fields=["id", "thumbnailId"] + ) + project_cache = self._versions_cache[project_name] + for version in versions: + project_cache[version["id"]] = version["thumbnailId"] diff --git a/openpype/tools/ayon_utils/widgets/__init__.py b/openpype/tools/ayon_utils/widgets/__init__.py new file mode 100644 index 0000000000..432a249a73 --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/__init__.py @@ -0,0 +1,41 @@ +from .projects_widget import ( + # ProjectsWidget, + ProjectsCombobox, + ProjectsModel, + ProjectSortFilterProxy, +) + +from .folders_widget import ( + FoldersWidget, + FoldersModel, + FOLDERS_MODEL_SENDER_NAME, +) + +from .tasks_widget import ( + TasksWidget, + TasksModel, + TASKS_MODEL_SENDER_NAME, +) +from .utils import ( + get_qt_icon, + RefreshThread, +) + + +__all__ = ( + # "ProjectsWidget", + "ProjectsCombobox", + "ProjectsModel", + "ProjectSortFilterProxy", + + "FoldersWidget", + "FoldersModel", + 
"FOLDERS_MODEL_SENDER_NAME", + + "TasksWidget", + "TasksModel", + "TASKS_MODEL_SENDER_NAME", + + "get_qt_icon", + "RefreshThread", +) diff --git a/openpype/tools/ayon_utils/widgets/folders_widget.py b/openpype/tools/ayon_utils/widgets/folders_widget.py new file mode 100644 index 0000000000..322553c51c --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/folders_widget.py @@ -0,0 +1,515 @@ +import collections + +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + TreeView, +) + +from .utils import RefreshThread, get_qt_icon + +FOLDERS_MODEL_SENDER_NAME = "qt_folders_model" +FOLDER_ID_ROLE = QtCore.Qt.UserRole + 1 +FOLDER_NAME_ROLE = QtCore.Qt.UserRole + 2 +FOLDER_PATH_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_TYPE_ROLE = QtCore.Qt.UserRole + 4 + + +class FoldersModel(QtGui.QStandardItemModel): + """Folders model which cares about refresh of folders. + + Args: + controller (AbstractWorkfilesFrontend): The control object. + """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(FoldersModel, self).__init__() + + self._controller = controller + self._items_by_id = {} + self._parent_id_by_id = {} + + self._refresh_threads = {} + self._current_refresh_thread = None + self._last_project_name = None + + self._has_content = False + self._is_refreshing = False + + @property + def is_refreshing(self): + """Model is refreshing. + + Returns: + bool: True if model is refreshing. + """ + return self._is_refreshing + + @property + def has_content(self): + """Has at least one folder. + + Returns: + bool: True if model has at least one folder. + """ + + return self._has_content + + def refresh(self): + """Refresh folders for last selected project. + + Force to update folders model from controller. This may or may not + trigger query from server, that's based on controller's cache. + """ + + self.set_project_name(self._last_project_name) + + def _clear_items(self): + self._items_by_id = {} + self._parent_id_by_id = {} + self._has_content = False + root_item = self.invisibleRootItem() + root_item.removeRows(0, root_item.rowCount()) + + def get_index_by_id(self, item_id): + """Get index by folder id. + + Returns: + QtCore.QModelIndex: Index of the folder. Can be invalid if folder + is not available. + """ + item = self._items_by_id.get(item_id) + if item is None: + return QtCore.QModelIndex() + return self.indexFromItem(item) + + def get_project_name(self): + """Project name which model currently use. + + Returns: + Union[str, None]: Currently used project name. + """ + + return self._last_project_name + + def set_project_name(self, project_name): + """Refresh folders items. + + Refresh start thread because it can cause that controller can + start query from database if folders are not cached. 
+ """ + + if not project_name: + self._last_project_name = project_name + self._current_refresh_thread = None + self._fill_items({}) + return + + self._is_refreshing = True + + if self._last_project_name != project_name: + self._clear_items() + self._last_project_name = project_name + + thread = self._refresh_threads.get(project_name) + if thread is not None: + self._current_refresh_thread = thread + return + + thread = RefreshThread( + project_name, + self._controller.get_folder_items, + project_name, + FOLDERS_MODEL_SENDER_NAME + ) + self._current_refresh_thread = thread + self._refresh_threads[thread.id] = thread + thread.refresh_finished.connect(self._on_refresh_thread) + thread.start() + + def _on_refresh_thread(self, thread_id): + """Callback when refresh thread is finished. + + Technically can be running multiple refresh threads at the same time, + to avoid using values from wrong thread, we check if thread id is + current refresh thread id. + + Folders are stored by id. + + Args: + thread_id (str): Thread id. + """ + + # Make sure to remove thread from '_refresh_threads' dict + thread = self._refresh_threads.pop(thread_id) + if ( + self._current_refresh_thread is None + or thread_id != self._current_refresh_thread.id + ): + return + + self._fill_items(thread.get_result()) + + def _fill_item_data(self, item, folder_item): + """ + + Args: + item (QtGui.QStandardItem): Item to fill data. + folder_item (FolderItem): Folder item. + """ + + icon = get_qt_icon(folder_item.icon) + item.setData(folder_item.entity_id, FOLDER_ID_ROLE) + item.setData(folder_item.name, FOLDER_NAME_ROLE) + item.setData(folder_item.path, FOLDER_PATH_ROLE) + item.setData(folder_item.folder_type, FOLDER_TYPE_ROLE) + item.setData(folder_item.label, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + + def _fill_items(self, folder_items_by_id): + if not folder_items_by_id: + if folder_items_by_id is not None: + self._clear_items() + self._is_refreshing = False + self.refreshed.emit() + return + + self._has_content = True + + folder_ids = set(folder_items_by_id) + ids_to_remove = set(self._items_by_id) - folder_ids + + folder_items_by_parent = collections.defaultdict(dict) + for folder_item in folder_items_by_id.values(): + ( + folder_items_by_parent + [folder_item.parent_id] + [folder_item.entity_id] + ) = folder_item + + hierarchy_queue = collections.deque() + hierarchy_queue.append((self.invisibleRootItem(), None)) + + # Keep pointers to removed items until the refresh finishes + # - some children of the items could be moved and reused elsewhere + removed_items = [] + while hierarchy_queue: + item = hierarchy_queue.popleft() + parent_item, parent_id = item + folder_items = folder_items_by_parent[parent_id] + + items_by_id = {} + folder_ids_to_add = set(folder_items) + for row_idx in reversed(range(parent_item.rowCount())): + child_item = parent_item.child(row_idx) + child_id = child_item.data(FOLDER_ID_ROLE) + if child_id in ids_to_remove: + removed_items.append(parent_item.takeRow(row_idx)) + else: + items_by_id[child_id] = child_item + + new_items = [] + for item_id in folder_ids_to_add: + folder_item = folder_items[item_id] + item = items_by_id.get(item_id) + if item is None: + is_new = True + item = QtGui.QStandardItem() + item.setEditable(False) + else: + is_new = self._parent_id_by_id[item_id] != parent_id + + self._fill_item_data(item, folder_item) + if is_new: + new_items.append(item) + self._items_by_id[item_id] = item + self._parent_id_by_id[item_id] = parent_id + + 
hierarchy_queue.append((item, item_id))
+
+ if new_items:
+ parent_item.appendRows(new_items)
+
+ for item_id in ids_to_remove:
+ self._items_by_id.pop(item_id)
+ self._parent_id_by_id.pop(item_id)
+
+ self._is_refreshing = False
+ self.refreshed.emit()
+
+
+class FoldersWidget(QtWidgets.QWidget):
+ """Folders widget.
+
+ Widget that handles folders view, model and selection.
+
+ Expected selection handling is disabled by default. If enabled, the
+ widget handles the expected selection in a predefined way: it listens
+ to the 'expected_selection_changed' event with the event data below,
+ and the same data must be available when the method
+ 'get_expected_selection_data' is called on the controller.
+
+ {
+ "folder": {
+ "current": bool, # Folder is what should be set now
+ "id": Union[str, None], # Folder id that should be selected
+ },
+ ...
+ }
+
+ Selection is confirmed by calling method 'expected_folder_selected' on
+ controller.
+
+
+ Args:
+ controller (AbstractWorkfilesFrontend): The control object.
+ parent (QtWidgets.QWidget): The parent widget.
+ handle_expected_selection (bool): If True, the widget will handle
+ the expected selection. Defaults to False.
+ """
+
+ double_clicked = QtCore.Signal(QtGui.QMouseEvent)
+ selection_changed = QtCore.Signal()
+ refreshed = QtCore.Signal()
+
+ def __init__(self, controller, parent, handle_expected_selection=False):
+ super(FoldersWidget, self).__init__(parent)
+
+ folders_view = TreeView(self)
+ folders_view.setHeaderHidden(True)
+
+ folders_model = FoldersModel(controller)
+ folders_proxy_model = RecursiveSortFilterProxyModel()
+ folders_proxy_model.setSourceModel(folders_model)
+ folders_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+
+ folders_view.setModel(folders_proxy_model)
+
+ main_layout = QtWidgets.QHBoxLayout(self)
+ main_layout.setContentsMargins(0, 0, 0, 0)
+ main_layout.addWidget(folders_view, 1)
+
+ controller.register_event_callback(
+ "selection.project.changed",
+ self._on_project_selection_change,
+ )
+ controller.register_event_callback(
+ "folders.refresh.finished",
+ self._on_folders_refresh_finished
+ )
+ controller.register_event_callback(
+ "controller.refresh.finished",
+ self._on_controller_refresh
+ )
+ controller.register_event_callback(
+ "expected_selection_changed",
+ self._on_expected_selection_change
+ )
+
+ selection_model = folders_view.selectionModel()
+ selection_model.selectionChanged.connect(self._on_selection_change)
+ folders_view.double_clicked.connect(self.double_clicked)
+ folders_model.refreshed.connect(self._on_model_refresh)
+
+ self._controller = controller
+ self._folders_view = folders_view
+ self._folders_model = folders_model
+ self._folders_proxy_model = folders_proxy_model
+
+ self._handle_expected_selection = handle_expected_selection
+ self._expected_selection = None
+
+ @property
+ def is_refreshing(self):
+ """Model is refreshing.
+
+ Returns:
+ bool: True if model is refreshing.
+ """
+
+ return self._folders_model.is_refreshing
+
+ @property
+ def has_content(self):
+ """Has at least one folder.
+
+ Returns:
+ bool: True if model has at least one folder.
+ """
+
+ return self._folders_model.has_content
+
+ def set_name_filter(self, name):
+ """Set filter of folder name.
+
+ Args:
+ name (str): The string filter.
+ """
+
+ self._folders_proxy_model.setFilterFixedString(name)
+
+ def refresh(self):
+ """Refresh folders model.
+
+ Forces an update of the folders model from the controller.
+ """ + + self._folders_model.refresh() + + def get_project_name(self): + """Project name in which folders widget currently is. + + Returns: + Union[str, None]: Currently used project name. + """ + + return self._folders_model.get_project_name() + + def set_project_name(self, project_name): + """Set project name. + + Do not use this method when controller is handling selection of + project using 'selection.project.changed' event. + + Args: + project_name (str): Project name. + """ + + self._folders_model.set_project_name(project_name) + + def get_selected_folder_id(self): + """Get selected folder id. + + Returns: + Union[str, None]: Folder id which is selected. + """ + + return self._get_selected_item_id() + + def get_selected_folder_label(self): + """Selected folder label. + + Returns: + Union[str, None]: Selected folder label. + """ + + item_id = self._get_selected_item_id() + return self.get_folder_label(item_id) + + def get_folder_label(self, folder_id): + """Folder label for a given folder id. + + Returns: + Union[str, None]: Folder label. + """ + + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + return index.data(QtCore.Qt.DisplayRole) + return None + + def set_selected_folder(self, folder_id): + """Change selection. + + Args: + folder_id (Union[str, None]): Folder id or None to deselect. + """ + + if folder_id is None: + self._folders_view.clearSelection() + return True + + if folder_id == self._get_selected_item_id(): + return True + index = self._folders_model.get_index_by_id(folder_id) + if not index.isValid(): + return False + + proxy_index = self._folders_proxy_model.mapFromSource(index) + if not proxy_index.isValid(): + return False + + selection_model = self._folders_view.selectionModel() + selection_model.setCurrentIndex( + proxy_index, QtCore.QItemSelectionModel.SelectCurrent + ) + return True + + def set_deselectable(self, enabled): + """Set deselectable mode. + + Items in view can be deselected. + + Args: + enabled (bool): Enable deselectable mode. 
+ """ + + self._folders_view.set_deselectable(enabled) + + def _get_selected_index(self): + return self._folders_model.get_index_by_id( + self.get_selected_folder_id() + ) + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self.set_project_name(project_name) + + def _on_folders_refresh_finished(self, event): + if event["sender"] != FOLDERS_MODEL_SENDER_NAME: + self.set_project_name(event["project_name"]) + + def _on_controller_refresh(self): + self._update_expected_selection() + + def _on_model_refresh(self): + if self._expected_selection: + self._set_expected_selection() + self._folders_proxy_model.sort(0) + self.refreshed.emit() + + def _get_selected_item_id(self): + selection_model = self._folders_view.selectionModel() + for index in selection_model.selectedIndexes(): + item_id = index.data(FOLDER_ID_ROLE) + if item_id is not None: + return item_id + return None + + def _on_selection_change(self): + item_id = self._get_selected_item_id() + self._controller.set_selected_folder(item_id) + self.selection_changed.emit() + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _update_expected_selection(self, expected_data=None): + if not self._handle_expected_selection: + return + + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + folder_data = expected_data.get("folder") + if not folder_data or not folder_data["current"]: + return + + folder_id = folder_data["id"] + self._expected_selection = folder_id + if not self._folders_model.is_refreshing: + self._set_expected_selection() + + def _set_expected_selection(self): + if not self._handle_expected_selection: + return + + folder_id = self._expected_selection + self._expected_selection = None + if folder_id is not None: + self.set_selected_folder(folder_id) + self._controller.expected_folder_selected(folder_id) diff --git a/openpype/tools/ayon_utils/widgets/projects_widget.py b/openpype/tools/ayon_utils/widgets/projects_widget.py new file mode 100644 index 0000000000..be18cfe3ed --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/projects_widget.py @@ -0,0 +1,596 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.ayon_utils.models import PROJECTS_MODEL_SENDER +from .utils import RefreshThread, get_qt_icon + +PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 1 +PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 2 +PROJECT_IS_LIBRARY_ROLE = QtCore.Qt.UserRole + 3 +PROJECT_IS_CURRENT_ROLE = QtCore.Qt.UserRole + 4 +LIBRARY_PROJECT_SEPARATOR_ROLE = QtCore.Qt.UserRole + 5 + + +class ProjectsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(ProjectsModel, self).__init__() + self._controller = controller + + self._project_items = {} + self._has_libraries = False + + self._empty_item = None + self._empty_item_added = False + + self._select_item = None + self._select_item_added = False + self._select_item_visible = None + + self._libraries_sep_item = None + self._libraries_sep_item_added = False + self._libraries_sep_item_visible = False + + self._current_context_project = None + + self._selected_project = None + + self._is_refreshing = False + self._refresh_thread = None + + @property + def is_refreshing(self): + return self._is_refreshing + + def refresh(self): + self._refresh() + + def has_content(self): + return len(self._project_items) > 0 + + def set_select_item_visible(self, visible): + if self._select_item_visible is 
visible: + return + self._select_item_visible = visible + + if self._selected_project is None: + self._add_select_item() + + def set_libraries_separator_visible(self, visible): + if self._libraries_sep_item_visible is visible: + return + self._libraries_sep_item_visible = visible + + def set_selected_project(self, project_name): + if not self._select_item_visible: + return + + self._selected_project = project_name + if project_name is None: + self._add_select_item() + else: + self._remove_select_item() + + def set_current_context_project(self, project_name): + if project_name == self._current_context_project: + return + self._unset_current_context_project(self._current_context_project) + self._current_context_project = project_name + self._set_current_context_project(project_name) + + def _set_current_context_project(self, project_name): + item = self._project_items.get(project_name) + if item is None: + return + item.setData(True, PROJECT_IS_CURRENT_ROLE) + + def _unset_current_context_project(self, project_name): + item = self._project_items.get(project_name) + if item is None: + return + item.setData(False, PROJECT_IS_CURRENT_ROLE) + + def _add_empty_item(self): + if self._empty_item_added: + return + self._empty_item_added = True + item = self._get_empty_item() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_empty_item(self): + if not self._empty_item_added: + return + self._empty_item_added = False + root_item = self.invisibleRootItem() + item = self._get_empty_item() + root_item.takeRow(item.row()) + + def _get_empty_item(self): + if self._empty_item is None: + item = QtGui.QStandardItem("< No projects >") + item.setFlags(QtCore.Qt.NoItemFlags) + self._empty_item = item + return self._empty_item + + def _get_library_sep_item(self): + if self._libraries_sep_item is not None: + return self._libraries_sep_item + + item = QtGui.QStandardItem() + item.setData("Libraries", QtCore.Qt.DisplayRole) + item.setData(True, LIBRARY_PROJECT_SEPARATOR_ROLE) + item.setFlags(QtCore.Qt.NoItemFlags) + self._libraries_sep_item = item + return item + + def _add_library_sep_item(self): + if ( + not self._libraries_sep_item_visible + or self._libraries_sep_item_added + ): + return + self._libraries_sep_item_added = True + item = self._get_library_sep_item() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_library_sep_item(self): + if ( + not self._libraries_sep_item_added + ): + return + self._libraries_sep_item_added = False + item = self._get_library_sep_item() + root_item = self.invisibleRootItem() + root_item.takeRow(item.row()) + + def _add_select_item(self): + if self._select_item_added: + return + self._select_item_added = True + item = self._get_select_item() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_select_item(self): + if not self._select_item_added: + return + self._select_item_added = False + root_item = self.invisibleRootItem() + item = self._get_select_item() + root_item.takeRow(item.row()) + + def _get_select_item(self): + if self._select_item is None: + item = QtGui.QStandardItem("< Select project >") + item.setEditable(False) + self._select_item = item + return self._select_item + + def _refresh(self): + if self._is_refreshing: + return + self._is_refreshing = True + refresh_thread = RefreshThread( + "projects", self._query_project_items + ) + refresh_thread.refresh_finished.connect(self._refresh_finished) + refresh_thread.start() + self._refresh_thread = refresh_thread + + def 
_query_project_items(self): + return self._controller.get_project_items() + + def _refresh_finished(self): + # TODO check if failed + result = self._refresh_thread.get_result() + self._refresh_thread = None + + self._fill_items(result) + + self._is_refreshing = False + self.refreshed.emit() + + def _fill_items(self, project_items): + new_project_names = { + project_item.name + for project_item in project_items + } + + # Handle "Select item" visibility + if self._select_item_visible: + # Add select project. if previously selected project is not in + # project items + if self._selected_project not in new_project_names: + self._add_select_item() + else: + self._remove_select_item() + + root_item = self.invisibleRootItem() + + items_to_remove = set(self._project_items.keys()) - new_project_names + for project_name in items_to_remove: + item = self._project_items.pop(project_name) + root_item.takeRow(item.row()) + + has_library_project = False + new_items = [] + for project_item in project_items: + project_name = project_item.name + item = self._project_items.get(project_name) + if project_item.is_library: + has_library_project = True + if item is None: + item = QtGui.QStandardItem() + item.setEditable(False) + new_items.append(item) + icon = get_qt_icon(project_item.icon) + item.setData(project_name, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(project_name, PROJECT_NAME_ROLE) + item.setData(project_item.active, PROJECT_IS_ACTIVE_ROLE) + item.setData(project_item.is_library, PROJECT_IS_LIBRARY_ROLE) + is_current = project_name == self._current_context_project + item.setData(is_current, PROJECT_IS_CURRENT_ROLE) + self._project_items[project_name] = item + + self._set_current_context_project(self._current_context_project) + + self._has_libraries = has_library_project + + if new_items: + root_item.appendRows(new_items) + + if self.has_content(): + # Make sure "No projects" item is removed + self._remove_empty_item() + if has_library_project: + self._add_library_sep_item() + else: + self._remove_library_sep_item() + else: + # Keep only "No projects" item + self._add_empty_item() + self._remove_select_item() + self._remove_library_sep_item() + + +class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): + def __init__(self, *args, **kwargs): + super(ProjectSortFilterProxy, self).__init__(*args, **kwargs) + self._filter_inactive = True + self._filter_standard = False + self._filter_library = False + self._sort_by_type = True + # Disable case sensitivity + self.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + def _type_sort(self, l_index, r_index): + if not self._sort_by_type: + return None + + l_is_library = l_index.data(PROJECT_IS_LIBRARY_ROLE) + r_is_library = r_index.data(PROJECT_IS_LIBRARY_ROLE) + # Both hare project items + if l_is_library is not None and r_is_library is not None: + if l_is_library is r_is_library: + return None + if l_is_library: + return False + return True + + if l_index.data(LIBRARY_PROJECT_SEPARATOR_ROLE): + if r_is_library is None: + return False + return r_is_library + + if r_index.data(LIBRARY_PROJECT_SEPARATOR_ROLE): + if l_is_library is None: + return True + return l_is_library + return None + + def lessThan(self, left_index, right_index): + # Current project always on top + # - make sure this is always first, before any other sorting + # e.g. 
type sort would move the item lower + if left_index.data(PROJECT_IS_CURRENT_ROLE): + return True + if right_index.data(PROJECT_IS_CURRENT_ROLE): + return False + + # Library separator should be before library projects + result = self._type_sort(left_index, right_index) + if result is not None: + return result + + if left_index.data(PROJECT_NAME_ROLE) is None: + return True + + if right_index.data(PROJECT_NAME_ROLE) is None: + return False + + left_is_active = left_index.data(PROJECT_IS_ACTIVE_ROLE) + right_is_active = right_index.data(PROJECT_IS_ACTIVE_ROLE) + if right_is_active == left_is_active: + return super(ProjectSortFilterProxy, self).lessThan( + left_index, right_index + ) + + if left_is_active: + return True + return False + + def filterAcceptsRow(self, source_row, source_parent): + index = self.sourceModel().index(source_row, 0, source_parent) + project_name = index.data(PROJECT_NAME_ROLE) + if project_name is None: + return True + + string_pattern = self.filterRegularExpression().pattern() + if string_pattern: + return string_pattern.lower() in project_name.lower() + + # Current project keep always visible + default = super(ProjectSortFilterProxy, self).filterAcceptsRow( + source_row, source_parent + ) + if not default: + return default + + # Make sure current project is visible + if index.data(PROJECT_IS_CURRENT_ROLE): + return True + + if ( + self._filter_inactive + and not index.data(PROJECT_IS_ACTIVE_ROLE) + ): + return False + + if ( + self._filter_standard + and not index.data(PROJECT_IS_LIBRARY_ROLE) + ): + return False + + if ( + self._filter_library + and index.data(PROJECT_IS_LIBRARY_ROLE) + ): + return False + return True + + def _custom_index_filter(self, index): + return bool(index.data(PROJECT_IS_ACTIVE_ROLE)) + + def is_active_filter_enabled(self): + return self._filter_inactive + + def set_active_filter_enabled(self, enabled): + if self._filter_inactive == enabled: + return + self._filter_inactive = enabled + self.invalidateFilter() + + def set_library_filter_enabled(self, enabled): + if self._filter_library == enabled: + return + self._filter_library = enabled + self.invalidateFilter() + + def set_standard_filter_enabled(self, enabled): + if self._filter_standard == enabled: + return + self._filter_standard = enabled + self.invalidateFilter() + + def set_sort_by_type(self, enabled): + if self._sort_by_type is enabled: + return + self._sort_by_type = enabled + self.invalidate() + + +class ProjectsCombobox(QtWidgets.QWidget): + refreshed = QtCore.Signal() + selection_changed = QtCore.Signal() + + def __init__(self, controller, parent, handle_expected_selection=False): + super(ProjectsCombobox, self).__init__(parent) + + projects_combobox = QtWidgets.QComboBox(self) + combobox_delegate = QtWidgets.QStyledItemDelegate(projects_combobox) + projects_combobox.setItemDelegate(combobox_delegate) + projects_model = ProjectsModel(controller) + projects_proxy_model = ProjectSortFilterProxy() + projects_proxy_model.setSourceModel(projects_model) + projects_combobox.setModel(projects_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(projects_combobox, 1) + + projects_model.refreshed.connect(self._on_model_refresh) + + controller.register_event_callback( + "projects.refresh.finished", + self._on_projects_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + 
self._on_expected_selection_change + ) + + projects_combobox.currentIndexChanged.connect( + self._on_current_index_changed + ) + + self._controller = controller + self._listen_selection_change = True + self._select_item_visible = False + + self._handle_expected_selection = handle_expected_selection + self._expected_selection = None + + self._projects_combobox = projects_combobox + self._projects_model = projects_model + self._projects_proxy_model = projects_proxy_model + self._combobox_delegate = combobox_delegate + + def refresh(self): + self._projects_model.refresh() + + def set_selection(self, project_name): + """Set selection to a given project. + + Selection change is ignored if project is not found. + + Args: + project_name (str): Name of project. + + Returns: + bool: True if selection was changed, False otherwise. NOTE: + Selection may not be changed if project is not found, or if + project is already selected. + """ + + idx = self._projects_combobox.findData( + project_name, PROJECT_NAME_ROLE) + if idx < 0: + return False + if idx != self._projects_combobox.currentIndex(): + self._projects_combobox.setCurrentIndex(idx) + return True + return False + + def set_listen_to_selection_change(self, listen): + """Disable listening to changes of the selection. + + Because combobox is triggering selection change when it's model + is refreshed, it's necessary to disable listening to selection for + some cases, e.g. when is on a different page of UI and should be just + refreshed. + + Args: + listen (bool): Enable or disable listening to selection changes. + """ + + self._listen_selection_change = listen + + def get_selected_project_name(self): + """Name of selected project. + + Returns: + Union[str, None]: Name of selected project, or None if no project + """ + + idx = self._projects_combobox.currentIndex() + if idx < 0: + return None + return self._projects_combobox.itemData(idx, PROJECT_NAME_ROLE) + + def set_current_context_project(self, project_name): + self._projects_model.set_current_context_project(project_name) + self._projects_proxy_model.invalidateFilter() + + def _update_select_item_visiblity(self, **kwargs): + if not self._select_item_visible: + return + if "project_name" not in kwargs: + project_name = self.get_selected_project_name() + else: + project_name = kwargs.get("project_name") + + # Hide the item if a project is selected + self._projects_model.set_selected_project(project_name) + + def set_select_item_visible(self, visible): + self._select_item_visible = visible + self._projects_model.set_select_item_visible(visible) + self._update_select_item_visiblity() + + def set_libraries_separator_visible(self, visible): + self._projects_model.set_libraries_separator_visible(visible) + + def is_active_filter_enabled(self): + return self._projects_proxy_model.is_active_filter_enabled() + + def set_active_filter_enabled(self, enabled): + return self._projects_proxy_model.set_active_filter_enabled(enabled) + + def set_standard_filter_enabled(self, enabled): + return self._projects_proxy_model.set_standard_filter_enabled(enabled) + + def set_library_filter_enabled(self, enabled): + return self._projects_proxy_model.set_library_filter_enabled(enabled) + + def _on_current_index_changed(self, idx): + if not self._listen_selection_change: + return + project_name = self._projects_combobox.itemData( + idx, PROJECT_NAME_ROLE) + self._update_select_item_visiblity(project_name=project_name) + self._controller.set_selected_project(project_name) + self.selection_changed.emit() + + def 
_on_model_refresh(self):
+ self._projects_proxy_model.sort(0)
+ self._projects_proxy_model.invalidateFilter()
+ if self._expected_selection:
+ self._set_expected_selection()
+ self._update_select_item_visiblity()
+ self.refreshed.emit()
+
+ def _on_projects_refresh_finished(self, event):
+ if event["sender"] != PROJECTS_MODEL_SENDER:
+ self._projects_model.refresh()
+
+ def _on_controller_refresh(self):
+ self._update_expected_selection()
+
+ # Expected selection handling
+ def _on_expected_selection_change(self, event):
+ self._update_expected_selection(event.data)
+
+ def _set_expected_selection(self):
+ if not self._handle_expected_selection:
+ return
+ project_name = self._expected_selection
+ if project_name is not None:
+ if project_name != self.get_selected_project_name():
+ self.set_selection(project_name)
+ else:
+ # Fake project change
+ self._on_current_index_changed(
+ self._projects_combobox.currentIndex()
+ )
+
+ self._controller.expected_project_selected(project_name)
+
+ def _update_expected_selection(self, expected_data=None):
+ if not self._handle_expected_selection:
+ return
+ if expected_data is None:
+ expected_data = self._controller.get_expected_selection_data()
+
+ project_data = expected_data.get("project")
+ if (
+ not project_data
+ or not project_data["current"]
+ or project_data["selected"]
+ ):
+ return
+ self._expected_selection = project_data["name"]
+ if not self._projects_model.is_refreshing:
+ self._set_expected_selection()
+
+
+class ProjectsWidget(QtWidgets.QWidget):
+ # TODO implement
+ pass
diff --git a/openpype/tools/ayon_utils/widgets/tasks_widget.py b/openpype/tools/ayon_utils/widgets/tasks_widget.py
new file mode 100644
index 0000000000..d01b3a7917
--- /dev/null
+++ b/openpype/tools/ayon_utils/widgets/tasks_widget.py
@@ -0,0 +1,454 @@
+from qtpy import QtWidgets, QtGui, QtCore
+
+from openpype.style import get_disabled_entity_icon_color
+from openpype.tools.utils import DeselectableTreeView
+
+from .utils import RefreshThread, get_qt_icon
+
+TASKS_MODEL_SENDER_NAME = "qt_tasks_model"
+ITEM_ID_ROLE = QtCore.Qt.UserRole + 1
+PARENT_ID_ROLE = QtCore.Qt.UserRole + 2
+ITEM_NAME_ROLE = QtCore.Qt.UserRole + 3
+TASK_TYPE_ROLE = QtCore.Qt.UserRole + 4
+
+
+class TasksModel(QtGui.QStandardItemModel):
+ """Tasks model which cares about refresh of tasks by folder id.
+
+ Args:
+ controller (AbstractWorkfilesFrontend): The control object.
+ """
+
+ refreshed = QtCore.Signal()
+
+ def __init__(self, controller):
+ super(TasksModel, self).__init__()
+
+ self._controller = controller
+
+ self._items_by_name = {}
+ self._has_content = False
+ self._is_refreshing = False
+
+ self._invalid_selection_item_used = False
+ self._invalid_selection_item = None
+ self._empty_tasks_item_used = False
+ self._empty_tasks_item = None
+
+ self._last_project_name = None
+ self._last_folder_id = None
+
+ self._refresh_threads = {}
+ self._current_refresh_thread = None
+
+ # Initial state
+ self._add_invalid_selection_item()
+
+ def _clear_items(self):
+ self._items_by_name = {}
+ self._has_content = False
+ self._remove_invalid_items()
+ root_item = self.invisibleRootItem()
+ root_item.removeRows(0, root_item.rowCount())
+
+ def refresh(self):
+ """Refresh tasks for last project and folder."""
+
+ self._refresh(self._last_project_name, self._last_folder_id)
+
+ def set_context(self, project_name, folder_id):
+ """Set the context for which tasks should be shown.
+
+ Args:
+ project_name (Union[str, None]): Name of project.
+ folder_id (Union[str, None]): Folder id.
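+
+ Example (illustrative; ``model`` and ``folder_id`` are hypothetical):
+ >>> model.set_context("demo_project", folder_id)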
+ """ + + self._refresh(project_name, folder_id) + + def get_index_by_name(self, task_name): + """Find item by name and return its index. + + Returns: + QtCore.QModelIndex: Index of item. Is invalid if task is not + found by name. + """ + + item = self._items_by_name.get(task_name) + if item is None: + return QtCore.QModelIndex() + return self.indexFromItem(item) + + def get_last_project_name(self): + """Get last refreshed project name. + + Returns: + Union[str, None]: Project name. + """ + + return self._last_project_name + + def get_last_folder_id(self): + """Get last refreshed folder id. + + Returns: + Union[str, None]: Folder id. + """ + + return self._last_folder_id + + def set_selected_project(self, project_name): + self._selected_project_name = project_name + + def _get_invalid_selection_item(self): + if self._invalid_selection_item is None: + item = QtGui.QStandardItem("Select a folder") + item.setFlags(QtCore.Qt.NoItemFlags) + icon = get_qt_icon({ + "type": "awesome-font", + "name": "fa.times", + "color": get_disabled_entity_icon_color(), + }) + item.setData(icon, QtCore.Qt.DecorationRole) + self._invalid_selection_item = item + return self._invalid_selection_item + + def _get_empty_task_item(self): + if self._empty_tasks_item is None: + item = QtGui.QStandardItem("No task") + icon = get_qt_icon({ + "type": "awesome-font", + "name": "fa.exclamation-circle", + "color": get_disabled_entity_icon_color(), + }) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + self._empty_tasks_item = item + return self._empty_tasks_item + + def _add_invalid_item(self, item): + self._clear_items() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_invalid_item(self, item): + root_item = self.invisibleRootItem() + root_item.takeRow(item.row()) + + def _remove_invalid_items(self): + self._remove_invalid_selection_item() + self._remove_empty_task_item() + + def _add_invalid_selection_item(self): + if not self._invalid_selection_item_used: + self._add_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = True + + def _remove_invalid_selection_item(self): + if self._invalid_selection_item: + self._remove_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = False + + def _add_empty_task_item(self): + if not self._empty_tasks_item_used: + self._add_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = True + + def _remove_empty_task_item(self): + if self._empty_tasks_item_used: + self._remove_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = False + + def _refresh(self, project_name, folder_id): + self._is_refreshing = True + self._last_project_name = project_name + self._last_folder_id = folder_id + if not folder_id: + self._add_invalid_selection_item() + self._current_refresh_thread = None + self._is_refreshing = False + self.refreshed.emit() + return + + thread = self._refresh_threads.get(folder_id) + if thread is not None: + self._current_refresh_thread = thread + return + thread = RefreshThread( + folder_id, + self._controller.get_task_items, + project_name, + folder_id + ) + self._current_refresh_thread = thread + self._refresh_threads[thread.id] = thread + thread.refresh_finished.connect(self._on_refresh_thread) + thread.start() + + def _on_refresh_thread(self, thread_id): + """Callback when refresh thread is finished. 
+
+ Technically, multiple refresh threads can be running at the same
+ time; to avoid using values from the wrong thread, we check whether
+ the thread id is the current refresh thread id.
+
+ Tasks are stored by name, so if a folder has the same task name as
+ the previously selected folder, the selection is kept.
+
+ Args:
+ thread_id (str): Thread id.
+ """
+
+ # Make sure to remove thread from '_refresh_threads' dict
+ thread = self._refresh_threads.pop(thread_id)
+ if (
+ self._current_refresh_thread is None
+ or thread_id != self._current_refresh_thread.id
+ ):
+ return
+
+ task_items = thread.get_result()
+ # Result is 'None' if the refresh failed
+ if task_items is None:
+ return
+
+ # No tasks are available on folder
+ if not task_items:
+ self._add_empty_task_item()
+ return
+ self._remove_invalid_items()
+
+ new_items = []
+ new_names = set()
+ for task_item in task_items:
+ name = task_item.name
+ new_names.add(name)
+ item = self._items_by_name.get(name)
+ if item is None:
+ item = QtGui.QStandardItem()
+ item.setEditable(False)
+ new_items.append(item)
+ self._items_by_name[name] = item
+
+ # TODO cache locally
+ icon = get_qt_icon(task_item.icon)
+ item.setData(task_item.label, QtCore.Qt.DisplayRole)
+ item.setData(name, ITEM_NAME_ROLE)
+ item.setData(task_item.id, ITEM_ID_ROLE)
+ item.setData(task_item.parent_id, PARENT_ID_ROLE)
+ item.setData(icon, QtCore.Qt.DecorationRole)
+
+ root_item = self.invisibleRootItem()
+
+ for name in set(self._items_by_name) - new_names:
+ item = self._items_by_name.pop(name)
+ root_item.removeRow(item.row())
+
+ if new_items:
+ root_item.appendRows(new_items)
+
+ self._has_content = root_item.rowCount() > 0
+ self._is_refreshing = False
+ self.refreshed.emit()
+
+ @property
+ def is_refreshing(self):
+ """Model is refreshing.
+
+ Returns:
+ bool: Model is refreshing.
+ """
+
+ return self._is_refreshing
+
+ @property
+ def has_content(self):
+ """Model has content.
+
+ Returns:
+ bool: Has at least one task.
+ """
+
+ return self._has_content
+
+ def headerData(self, section, orientation, role):
+ # Show nice labels in the header
+ if (
+ role == QtCore.Qt.DisplayRole
+ and orientation == QtCore.Qt.Horizontal
+ ):
+ if section == 0:
+ return "Tasks"
+
+ return super(TasksModel, self).headerData(
+ section, orientation, role
+ )
+
+
+class TasksWidget(QtWidgets.QWidget):
+ """Tasks widget.
+
+ Widget that handles tasks view, model and selection.
+
+ Args:
+ controller (AbstractWorkfilesFrontend): Workfiles controller.
+ parent (QtWidgets.QWidget): Parent widget.
+ handle_expected_selection (Optional[bool]): Handle expected selection.
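+
+ Example (illustrative; ``controller`` and ``parent`` are hypothetical):
+ >>> widget = TasksWidget(controller, parent)
+ >>> widget.refresh()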
+ """ + + refreshed = QtCore.Signal() + selection_changed = QtCore.Signal() + + def __init__(self, controller, parent, handle_expected_selection=False): + super(TasksWidget, self).__init__(parent) + + tasks_view = DeselectableTreeView(self) + tasks_view.setIndentation(0) + + tasks_model = TasksModel(controller) + tasks_proxy_model = QtCore.QSortFilterProxyModel() + tasks_proxy_model.setSourceModel(tasks_model) + tasks_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + tasks_view.setModel(tasks_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(tasks_view, 1) + + controller.register_event_callback( + "tasks.refresh.finished", + self._on_tasks_refresh_finished + ) + controller.register_event_callback( + "selection.folder.changed", + self._folder_selection_changed + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = tasks_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + tasks_model.refreshed.connect(self._on_tasks_model_refresh) + + self._controller = controller + self._tasks_view = tasks_view + self._tasks_model = tasks_model + self._tasks_proxy_model = tasks_proxy_model + + self._selected_folder_id = None + + self._handle_expected_selection = handle_expected_selection + self._expected_selection_data = None + + def refresh(self): + """Refresh folders for last selected project. + + Force to update folders model from controller. This may or may not + trigger query from server, that's based on controller's cache. + """ + + self._tasks_model.refresh() + + def _on_tasks_refresh_finished(self, event): + """Tasks were refreshed in controller. + + Ignore if refresh was triggered by tasks model, or refreshed folder is + not the same as currently selected folder. + + Args: + event (Event): Event object. 
+ """ + + # Refresh only if current folder id is the same + if ( + event["sender"] == TASKS_MODEL_SENDER_NAME + or event["folder_id"] != self._selected_folder_id + ): + return + self._tasks_model.set_context( + event["project_name"], self._selected_folder_id + ) + + def _folder_selection_changed(self, event): + self._selected_folder_id = event["folder_id"] + self._tasks_model.set_context( + event["project_name"], self._selected_folder_id + ) + + def _on_tasks_model_refresh(self): + if not self._set_expected_selection(): + self._on_selection_change() + self._tasks_proxy_model.sort(0) + self.refreshed.emit() + + def _get_selected_item_ids(self): + selection_model = self._tasks_view.selectionModel() + for index in selection_model.selectedIndexes(): + task_id = index.data(ITEM_ID_ROLE) + task_name = index.data(ITEM_NAME_ROLE) + parent_id = index.data(PARENT_ID_ROLE) + if task_name is not None: + return parent_id, task_id, task_name + return self._selected_folder_id, None, None + + def _on_selection_change(self): + # Don't trigger task change during refresh + # - a task was deselected if that happens + # - can cause crash triggered during tasks refreshing + if self._tasks_model.is_refreshing: + return + + parent_id, task_id, task_name = self._get_selected_item_ids() + self._controller.set_selected_task(task_id, task_name) + self.selection_changed.emit() + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _set_expected_selection(self): + if not self._handle_expected_selection: + return False + + if self._expected_selection_data is None: + return False + folder_id = self._expected_selection_data["folder_id"] + task_name = self._expected_selection_data["task_name"] + self._expected_selection_data = None + model_folder_id = self._tasks_model.get_last_folder_id() + if folder_id != model_folder_id: + return False + if task_name is not None: + index = self._tasks_model.get_index_by_name(task_name) + if index.isValid(): + proxy_index = self._tasks_proxy_model.mapFromSource(index) + self._tasks_view.setCurrentIndex(proxy_index) + self._controller.expected_task_selected(folder_id, task_name) + return True + + def _update_expected_selection(self, expected_data=None): + if not self._handle_expected_selection: + return + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + folder_data = expected_data.get("folder") + task_data = expected_data.get("task") + if ( + not folder_data + or not task_data + or not task_data["current"] + ): + return + folder_id = folder_data["id"] + self._expected_selection_data = { + "task_name": task_data["name"], + "folder_id": folder_id, + } + model_folder_id = self._tasks_model.get_last_folder_id() + if folder_id != model_folder_id or self._tasks_model.is_refreshing: + return + self._set_expected_selection() diff --git a/openpype/tools/ayon_utils/widgets/utils.py b/openpype/tools/ayon_utils/widgets/utils.py new file mode 100644 index 0000000000..8bc3b1ea9b --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/utils.py @@ -0,0 +1,98 @@ +import os +from functools import partial + +from qtpy import QtCore, QtGui + +from openpype.tools.utils.lib import get_qta_icon_by_name_and_color + + +class RefreshThread(QtCore.QThread): + refresh_finished = QtCore.Signal(str) + + def __init__(self, thread_id, func, *args, **kwargs): + super(RefreshThread, self).__init__() + self._id = thread_id + self._callback = partial(func, *args, **kwargs) + self._exception = None + 
self._result = None + + @property + def id(self): + return self._id + + @property + def failed(self): + return self._exception is not None + + def run(self): + try: + self._result = self._callback() + except Exception as exc: + self._exception = exc + self.refresh_finished.emit(self.id) + + def get_result(self): + return self._result + + +class _IconsCache: + """Cache for icons.""" + + _cache = {} + _default = None + + @classmethod + def _get_cache_key(cls, icon_def): + parts = [] + icon_type = icon_def["type"] + if icon_type == "path": + parts = [icon_type, icon_def["path"]] + + elif icon_type == "awesome-font": + parts = [icon_type, icon_def["name"], icon_def["color"]] + return "|".join(parts) + + @classmethod + def get_icon(cls, icon_def): + icon_type = icon_def["type"] + cache_key = cls._get_cache_key(icon_def) + cache = cls._cache.get(cache_key) + if cache is not None: + return cache + + icon = None + if icon_type == "path": + path = icon_def["path"] + if os.path.exists(path): + icon = QtGui.QIcon(path) + + elif icon_type == "awesome-font": + icon_name = icon_def["name"] + icon_color = icon_def["color"] + icon = get_qta_icon_by_name_and_color(icon_name, icon_color) + if icon is None: + icon = get_qta_icon_by_name_and_color( + "fa.{}".format(icon_name), icon_color) + if icon is None: + icon = cls.get_default() + cls._cache[cache_key] = icon + return icon + + @classmethod + def get_default(cls): + pix = QtGui.QPixmap(1, 1) + pix.fill(QtCore.Qt.transparent) + return QtGui.QIcon(pix) + + +def get_qt_icon(icon_def): + """Returns icon from cache or creates new one. + + Args: + icon_def (dict[str, Any]): Icon definition. + + Returns: + QtGui.QIcon: Icon. + """ + + return _IconsCache.get_icon(icon_def) diff --git a/openpype/tools/ayon_workfiles/__init__.py b/openpype/tools/ayon_workfiles/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/tools/ayon_workfiles/abstract.py b/openpype/tools/ayon_workfiles/abstract.py new file mode 100644 index 0000000000..ce399fd4c6 --- /dev/null +++ b/openpype/tools/ayon_workfiles/abstract.py @@ -0,0 +1,996 @@ +import os +from abc import ABCMeta, abstractmethod + +import six +from openpype.style import get_default_entity_icon_color + + +class WorkfileInfo: + """Information about workarea file with possible additional from database. + + Args: + folder_id (str): Folder id. + task_id (str): Task id. + filepath (str): Filepath. + filesize (int): File size. + creation_time (int): Creation time (timestamp). + modification_time (int): Modification time (timestamp). + note (str): Note. + """ + + def __init__( + self, + folder_id, + task_id, + filepath, + filesize, + creation_time, + modification_time, + note, + ): + self.folder_id = folder_id + self.task_id = task_id + self.filepath = filepath + self.filesize = filesize + self.creation_time = creation_time + self.modification_time = modification_time + self.note = note + + def to_data(self): + """Converts WorkfileInfo item to data. + + Returns: + dict[str, Any]: Folder item data. + """ + + return { + "folder_id": self.folder_id, + "task_id": self.task_id, + "filepath": self.filepath, + "filesize": self.filesize, + "creation_time": self.creation_time, + "modification_time": self.modification_time, + "note": self.note, + } + + @classmethod + def from_data(cls, data): + """Re-creates WorkfileInfo item from data. + + Args: + data (dict[str, Any]): Workfile info item data. + + Returns: + WorkfileInfo: Workfile info item. 
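+
+ Example (illustrative round-trip; ``info`` is a hypothetical
+ instance):
+ >>> same_info = WorkfileInfo.from_data(info.to_data())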
+ """ + + return cls(**data) + + +class FolderItem: + """Item representing folder entity on a server. + + Folder can be a child of another folder or a project. + + Args: + entity_id (str): Folder id. + parent_id (Union[str, None]): Parent folder id. If 'None' then project + is parent. + name (str): Name of folder. + label (str): Folder label. + icon_name (str): Name of icon from font awesome. + icon_color (str): Hex color string that will be used for icon. + """ + + def __init__( + self, entity_id, parent_id, name, label, icon_name, icon_color + ): + self.entity_id = entity_id + self.parent_id = parent_id + self.name = name + self.icon_name = icon_name or "fa.folder" + self.icon_color = icon_color or get_default_entity_icon_color() + self.label = label or name + + def to_data(self): + """Converts folder item to data. + + Returns: + dict[str, Any]: Folder item data. + """ + + return { + "entity_id": self.entity_id, + "parent_id": self.parent_id, + "name": self.name, + "label": self.label, + "icon_name": self.icon_name, + "icon_color": self.icon_color, + } + + @classmethod + def from_data(cls, data): + """Re-creates folder item from data. + + Args: + data (dict[str, Any]): Folder item data. + + Returns: + FolderItem: Folder item. + """ + + return cls(**data) + + +class TaskItem: + """Task item representing task entity on a server. + + Task is child of a folder. + + Task item has label that is used for display in UI. The label is by + default using task name and type. + + Args: + task_id (str): Task id. + name (str): Name of task. + task_type (str): Type of task. + parent_id (str): Parent folder id. + icon_name (str): Name of icon from font awesome. + icon_color (str): Hex color string that will be used for icon. + """ + + def __init__( + self, task_id, name, task_type, parent_id, icon_name, icon_color + ): + self.task_id = task_id + self.name = name + self.task_type = task_type + self.parent_id = parent_id + self.icon_name = icon_name or "fa.male" + self.icon_color = icon_color or get_default_entity_icon_color() + self._label = None + + @property + def id(self): + """Alias for task_id. + + Returns: + str: Task id. + """ + + return self.task_id + + @property + def label(self): + """Label of task item for UI. + + Returns: + str: Label of task item. + """ + + if self._label is None: + self._label = "{} ({})".format(self.name, self.task_type) + return self._label + + def to_data(self): + """Converts task item to data. + + Returns: + dict[str, Any]: Task item data. + """ + + return { + "task_id": self.task_id, + "name": self.name, + "parent_id": self.parent_id, + "task_type": self.task_type, + "icon_name": self.icon_name, + "icon_color": self.icon_color, + } + + @classmethod + def from_data(cls, data): + """Re-create task item from data. + + Args: + data (dict[str, Any]): Task item data. + + Returns: + TaskItem: Task item. + """ + + return cls(**data) + + +class FileItem: + """File item that represents a file. + + Can be used for both Workarea and Published workfile. Workarea file + will always exist on disk which is not the case for Published workfile. + + Args: + dirpath (str): Directory path of file. + filename (str): Filename. + modified (float): Modified timestamp. + representation_id (Optional[str]): Representation id of published + workfile. + filepath (Optional[str]): Prepared filepath. + exists (Optional[bool]): If file exists on disk. 
+    """
+
+    def __init__(
+        self,
+        dirpath,
+        filename,
+        modified,
+        representation_id=None,
+        filepath=None,
+        exists=None
+    ):
+        self.filename = filename
+        self.dirpath = dirpath
+        self.modified = modified
+        self.representation_id = representation_id
+        self._filepath = filepath
+        self._exists = exists
+
+    @property
+    def filepath(self):
+        """Filepath of file.
+
+        Returns:
+            str: Full path to a file.
+        """
+
+        if self._filepath is None:
+            self._filepath = os.path.join(self.dirpath, self.filename)
+        return self._filepath
+
+    @property
+    def exists(self):
+        """File is available.
+
+        Returns:
+            bool: If file exists on disk.
+        """
+
+        if self._exists is None:
+            self._exists = os.path.exists(self.filepath)
+        return self._exists
+
+    def to_data(self):
+        """Converts file item to data.
+
+        Returns:
+            dict[str, Any]: File item data.
+        """
+
+        return {
+            "filename": self.filename,
+            "dirpath": self.dirpath,
+            "modified": self.modified,
+            "representation_id": self.representation_id,
+            "filepath": self.filepath,
+            "exists": self.exists,
+        }
+
+    @classmethod
+    def from_data(cls, data):
+        """Re-creates file item from data.
+
+        Args:
+            data (dict[str, Any]): File item data.
+
+        Returns:
+            FileItem: File item.
+        """
+
+        required_keys = {
+            "filename",
+            "dirpath",
+            "modified",
+            "representation_id"
+        }
+        missing_keys = required_keys - set(data.keys())
+        if missing_keys:
+            raise KeyError("Missing keys: {}".format(missing_keys))
+
+        return cls(**{
+            key: data[key]
+            for key in required_keys
+        })
+
+
+class WorkareaFilepathResult:
+    """Result of workarea file formatting.
+
+    Args:
+        root (str): Root path of workarea.
+        filename (str): Filename.
+        exists (bool): True if file exists.
+        filepath (str): Filepath. If not provided it will be constructed
+            from root and filename.
+    """
+
+    def __init__(self, root, filename, exists, filepath=None):
+        if not filepath and root and filename:
+            filepath = os.path.join(root, filename)
+        self.root = root
+        self.filename = filename
+        self.exists = exists
+        self.filepath = filepath
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractWorkfilesCommon(object):
+    @abstractmethod
+    def is_host_valid(self):
+        """Host is valid for workfiles tool work.
+
+        Returns:
+            bool: True if host is valid.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_workfile_extensions(self):
+        """Get possible workfile extensions.
+
+        Defined by host implementation.
+
+        Returns:
+            Iterable[str]: List of extensions.
+        """
+
+        pass
+
+    @abstractmethod
+    def is_save_enabled(self):
+        """Is workfile save enabled.
+
+        Returns:
+            bool: True if save is enabled.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_save_enabled(self, enabled):
+        """Enable or disable workfile save.
+
+        Args:
+            enabled (bool): Enable save workfile when True.
+        """
+
+        pass
+
+
+class AbstractWorkfilesBackend(AbstractWorkfilesCommon):
+    # Current context
+    @abstractmethod
+    def get_host_name(self):
+        """Name of host.
+
+        Returns:
+            str: Name of host.
+        """
+        pass
+
+    @abstractmethod
+    def get_current_project_name(self):
+        """Project name from current context of host.
+
+        Returns:
+            str: Name of project.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_current_folder_id(self):
+        """Folder id from current context of host.
+
+        Returns:
+            Union[str, None]: Folder id or None if host does not have
+                any context.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_current_task_name(self):
+        """Task name from current context of host.
+
+        Returns:
+            Union[str, None]: Task name or None if host does not have
+                any context.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_current_workfile(self):
+        """Current workfile from current context of host.
+
+        Returns:
+            Union[str, None]: Path to workfile or None if host does
+                not have opened specific file.
+        """
+
+        pass
+
+    @property
+    @abstractmethod
+    def project_anatomy(self):
+        """Project anatomy for current project.
+
+        Returns:
+            Anatomy: Project anatomy.
+        """
+
+        pass
+
+    @property
+    @abstractmethod
+    def project_settings(self):
+        """Project settings for current project.
+
+        Returns:
+            dict[str, Any]: Project settings.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_project_entity(self):
+        """Get current project entity.
+
+        Returns:
+            dict[str, Any]: Project entity data.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_folder_entity(self, folder_id):
+        """Get folder entity by id.
+
+        Args:
+            folder_id (str): Folder id.
+
+        Returns:
+            dict[str, Any]: Folder entity data.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_task_entity(self, task_id):
+        """Get task entity by id.
+
+        Args:
+            task_id (str): Task id.
+
+        Returns:
+            dict[str, Any]: Task entity data.
+        """
+
+        pass
+
+    def emit_event(self, topic, data=None, source=None):
+        """Emit event.
+
+        Args:
+            topic (str): Event topic used for callbacks filtering.
+            data (Optional[dict[str, Any]]): Event data.
+            source (Optional[str]): Event source.
+        """
+
+        pass
+
+
+class AbstractWorkfilesFrontend(AbstractWorkfilesCommon):
+    """UI controller abstraction that is used for workfiles tool frontend.
+
+    Abstraction to provide data for UI and to handle UI events.
+
+    Provides access to abstract backend data, like folders and tasks. It
+    handles selection: it keeps information about the current UI selection
+    and can tell the UI what should be selected.
+
+    Selection is separated into 2 parts: first is what UI elements tell
+    about selection, and second is what UI should show as selected.
+    """
+
+    @abstractmethod
+    def register_event_callback(self, topic, callback):
+        """Register event callback.
+
+        Listen for events with given topic.
+
+        Args:
+            topic (str): Name of topic.
+            callback (Callable): Callback that will be called when event
+                is triggered.
+        """
+
+        pass
+
+    # Host information
+    @abstractmethod
+    def get_workfile_extensions(self):
+        """Each host can define extensions that can be used for workfile.
+
+        Returns:
+            List[str]: File extensions that can be used as workfile for
+                current host.
+        """
+
+        pass
+
+    # Selection information
+    @abstractmethod
+    def get_selected_folder_id(self):
+        """Currently selected folder id.
+
+        Returns:
+            Union[str, None]: Folder id or None if no folder is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_folder(self, folder_id):
+        """Change selected folder.
+
+        This deselects currently selected task.
+
+        Args:
+            folder_id (Union[str, None]): Folder id or None if no folder
+                is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_task_id(self):
+        """Currently selected task id.
+
+        Returns:
+            Union[str, None]: Task id or None if no folder is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_task_name(self):
+        """Currently selected task name.
+
+        Returns:
+            Union[str, None]: Task name or None if no folder is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_task(self, folder_id, task_id, task_name):
+        """Change selected task.
+
+        Args:
+            folder_id (Union[str, None]): Folder id or None if no folder
+                is selected.
+            task_id (Union[str, None]): Task id or None if no task
+                is selected.
task_name (Union[str, None]): Task name or None if no task
+                is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_workfile_path(self):
+        """Currently selected workarea workfile.
+
+        Returns:
+            Union[str, None]: Selected workfile path.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_workfile_path(self, path):
+        """Change selected workfile path.
+
+        Args:
+            path (Union[str, None]): Selected workfile path.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_selected_representation_id(self):
+        """Currently selected workfile representation id.
+
+        Returns:
+            Union[str, None]: Representation id or None if no representation
+                is selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def set_selected_representation_id(self, representation_id):
+        """Change selected representation.
+
+        Args:
+            representation_id (Union[str, None]): Selected workfile
+                representation id.
+        """
+
+        pass
+
+    def get_selected_context(self):
+        """Obtain selected context.
+
+        Returns:
+            dict[str, Union[str, None]]: Selected context.
+        """
+
+        return {
+            "folder_id": self.get_selected_folder_id(),
+            "task_id": self.get_selected_task_id(),
+            "task_name": self.get_selected_task_name(),
+            "workfile_path": self.get_selected_workfile_path(),
+            "representation_id": self.get_selected_representation_id(),
+        }
+
+    # Expected selection
+    # - expected selection is used to restore selection after refresh
+    # or when current context should be used
+    @abstractmethod
+    def set_expected_selection(
+        self,
+        folder_id,
+        task_name,
+        workfile_name=None,
+        representation_id=None
+    ):
+        """Define what should be selected in UI.
+
+        Expected selection provides a way to define/change selection of
+        sequential UI elements. For example, if folder and task should be
+        selected, a task element should wait until the folder element has
+        selected the folder.
+
+        Triggers 'expected_selection.changed' event.
+
+        Args:
+            folder_id (str): Folder id.
+            task_name (str): Task name.
+            workfile_name (Optional[str]): Workfile name. Used for workarea
+                files UI element.
+            representation_id (Optional[str]): Representation id. Used for
+                published files UI element.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_expected_selection_data(self):
+        """Data of expected selection.
+
+        TODOs:
+            Return defined object instead of dict.
+
+        Returns:
+            dict[str, Any]: Expected selection data.
+        """
+
+        pass
+
+    @abstractmethod
+    def expected_folder_selected(self, folder_id):
+        """Expected folder was selected in UI.
+
+        Args:
+            folder_id (str): Folder id which was selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def expected_task_selected(self, folder_id, task_name):
+        """Expected task was selected in UI.
+
+        Args:
+            folder_id (str): Folder id under which task is.
+            task_name (str): Task name which was selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def expected_representation_selected(self, representation_id):
+        """Expected representation was selected in UI.
+
+        Args:
+            representation_id (str): Representation id which was selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def expected_workfile_selected(self, workfile_path):
+        """Expected workfile was selected in UI.
+
+        Args:
+            workfile_path (str): Workfile path which was selected.
+        """
+
+        pass
+
+    @abstractmethod
+    def go_to_current_context(self):
+        """Set expected selection to current context."""
+
+        pass
+
+    # Model functions
+    @abstractmethod
+    def get_folder_items(self, sender):
+        """Folder items to visualize project hierarchy.
+
+        This function may trigger events 'folders.refresh.started' and
+        'folders.refresh.finished' which will contain 'sender' value in data.
+        That may help to avoid re-refresh of folder items in UI elements.
+
+        Args:
+            sender (str): Who requested folder items.
+
+        Returns:
+            list[FolderItem]: Minimum possible information needed
+                for visualisation of folder hierarchy.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_task_items(self, folder_id, sender):
+        """Task items.
+
+        This function may trigger events 'tasks.refresh.started' and
+        'tasks.refresh.finished' which will contain 'sender' value in data.
+        That may help to avoid re-refresh of task items in UI elements.
+
+        Args:
+            folder_id (str): Folder id for which tasks are requested.
+            sender (str): Who requested task items.
+
+        Returns:
+            list[TaskItem]: Minimum possible information needed
+                for visualisation of tasks.
+        """
+
+        pass
+
+    @abstractmethod
+    def has_unsaved_changes(self):
+        """Host has unsaved changes in the currently running session.
+
+        Returns:
+            bool: Has unsaved changes.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_workarea_dir_by_context(self, folder_id, task_id):
+        """Get workarea directory by context.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+
+        Returns:
+            str: Workarea directory.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_workarea_file_items(self, folder_id, task_id):
+        """Get workarea file items.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+
+        Returns:
+            list[FileItem]: List of workarea file items.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_workarea_save_as_data(self, folder_id, task_id):
+        """Prepare data for Save As operation.
+
+        Todos:
+            Return defined object instead of dict.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+
+        Returns:
+            dict[str, Any]: Data for Save As operation.
+        """
+
+        pass
+
+    @abstractmethod
+    def fill_workarea_filepath(
+        self,
+        folder_id,
+        task_id,
+        extension,
+        use_last_version,
+        version,
+        comment,
+    ):
+        """Calculate workfile path for passed context.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            extension (str): File extension.
+            use_last_version (bool): Use last version.
+            version (int): Version used if 'use_last_version' is 'False'.
+            comment (str): User's comment (subversion).
+
+        Returns:
+            WorkareaFilepathResult: Result of the operation.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_published_file_items(self, folder_id, task_id):
+        """Get published file items.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (Union[str, None]): Task id.
+
+        Returns:
+            list[FileItem]: List of published file items.
+        """
+
+        pass
+
+    @abstractmethod
+    def get_workfile_info(self, folder_id, task_id, filepath):
+        """Workfile info from database.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            filepath (str): Workfile path.
+
+        Returns:
+            Union[WorkfileInfo, None]: Workfile info or None if was passed
+                invalid context.
+        """
+
+        pass
+
+    @abstractmethod
+    def save_workfile_info(self, folder_id, task_id, filepath, note):
+        """Save workfile info to database.
+
+        At this moment the only information which can be saved about
+        workfile is 'note'.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            filepath (str): Workfile path.
+            note (str): Note.
+        """
+
+        pass
+
+    # General commands
+    @abstractmethod
+    def refresh(self):
+        """Refresh everything (models, UI, etc.).
+
+        Triggers 'controller.refresh.started' event at the beginning and
+        'controller.refresh.finished' at the end.
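+
+        Example:
+            >>> # Hedged sketch: observe the refresh lifecycle from a UI
+            >>> # element; 'controller' is a concrete implementation.
+            >>> controller.register_event_callback(
+            ...     "controller.refresh.finished",
+            ...     lambda event: print("refresh done"))
+            >>> controller.refresh()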
+        """
+
+        pass
+
+    # Controller actions
+    @abstractmethod
+    def open_workfile(self, folder_id, task_id, filepath):
+        """Open a workfile for context.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            filepath (str): Workfile path.
+        """
+
+        pass
+
+    @abstractmethod
+    def save_current_workfile(self):
+        """Save state of current workfile."""
+
+        pass
+
+    @abstractmethod
+    def save_as_workfile(
+        self,
+        folder_id,
+        task_id,
+        workdir,
+        filename,
+        template_key,
+    ):
+        """Save current state of workfile to workarea.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            workdir (str): Workarea directory.
+            filename (str): Workarea filename.
+            template_key (str): Template key used to get the workdir
+                and filename.
+        """
+
+        pass
+
+    @abstractmethod
+    def copy_workfile_representation(
+        self,
+        representation_id,
+        representation_filepath,
+        folder_id,
+        task_id,
+        workdir,
+        filename,
+        template_key,
+    ):
+        """Action to copy published workfile representation to workarea.
+
+        Triggers 'copy_representation.started' event on start and
+        'copy_representation.finished' event with '{"failed": bool}'.
+
+        Args:
+            representation_id (str): Representation id.
+            representation_filepath (str): Path to representation file.
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+            workdir (str): Workarea directory.
+            filename (str): Workarea filename.
+            template_key (str): Template key.
+        """
+
+        pass
+
+    @abstractmethod
+    def duplicate_workfile(self, src_filepath, workdir, filename):
+        """Duplicate workfile.
+
+        Workfile is not opened when done.
+
+        Args:
+            src_filepath (str): Source workfile path.
+            workdir (str): Destination workdir.
+            filename (str): Destination filename.
+        """
+
+        pass
diff --git a/openpype/tools/ayon_workfiles/control.py b/openpype/tools/ayon_workfiles/control.py
new file mode 100644
index 0000000000..3784959caf
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/control.py
@@ -0,0 +1,688 @@
+import os
+import shutil
+
+import ayon_api
+
+from openpype.client import get_asset_by_id
+from openpype.host import IWorkfileHost
+from openpype.lib import Logger, emit_event
+from openpype.lib.events import QueuedEventSystem
+from openpype.settings import get_project_settings
+from openpype.pipeline import Anatomy, registered_host
+from openpype.pipeline.context_tools import (
+    change_current_context,
+    get_current_host_name,
+    get_global_context,
+)
+from openpype.pipeline.workfile import create_workdir_extra_folders
+
+from .abstract import (
+    AbstractWorkfilesFrontend,
+    AbstractWorkfilesBackend,
+)
+from .models import SelectionModel, EntitiesModel, WorkfilesModel
+
+
+class ExpectedSelection:
+    def __init__(self):
+        self._folder_id = None
+        self._task_name = None
+        self._workfile_name = None
+        self._representation_id = None
+        self._folder_selected = True
+        self._task_selected = True
+        self._workfile_name_selected = True
+        self._representation_id_selected = True
+
+    def set_expected_selection(
+        self,
+        folder_id,
+        task_name,
+        workfile_name=None,
+        representation_id=None
+    ):
+        self._folder_id = folder_id
+        self._task_name = task_name
+        self._workfile_name = workfile_name
+        self._representation_id = representation_id
+        self._folder_selected = False
+        self._task_selected = False
+        self._workfile_name_selected = workfile_name is None
+        self._representation_id_selected = representation_id is None
+
+    def get_expected_selection_data(self):
+        return {
+            "folder_id": self._folder_id,
+            "task_name": self._task_name,
+            "workfile_name": self._workfile_name,
+            "representation_id": self._representation_id,
+            "folder_selected": self._folder_selected,
+            "task_selected": self._task_selected,
+            "workfile_name_selected": self._workfile_name_selected,
+            "representation_id_selected": self._representation_id_selected,
+        }
+
+    def is_expected_folder_selected(self, folder_id):
+        return folder_id == self._folder_id and self._folder_selected
+
+    def is_expected_task_selected(self, folder_id, task_name):
+        if not self.is_expected_folder_selected(folder_id):
+            return False
+        return task_name == self._task_name and self._task_selected
+
+    def expected_folder_selected(self, folder_id):
+        if folder_id != self._folder_id:
+            return False
+        self._folder_selected = True
+        return True
+
+    def expected_task_selected(self, folder_id, task_name):
+        if not self.is_expected_folder_selected(folder_id):
+            return False
+
+        if task_name != self._task_name:
+            return False
+
+        self._task_selected = True
+        return True
+
+    def expected_workfile_selected(self, folder_id, task_name, workfile_name):
+        if not self.is_expected_task_selected(folder_id, task_name):
+            return False
+
+        if workfile_name != self._workfile_name:
+            return False
+        self._workfile_name_selected = True
+        return True
+
+    def expected_representation_selected(
+        self, folder_id, task_name, representation_id
+    ):
+        if not self.is_expected_task_selected(folder_id, task_name):
+            return False
+        if representation_id != self._representation_id:
+            return False
+        self._representation_id_selected = True
+        return True
+
+
+class BaseWorkfileController(
+    AbstractWorkfilesFrontend, AbstractWorkfilesBackend
+):
+    def __init__(self, host=None):
+        if host is None:
+            host = registered_host()
+
+        host_is_valid = False
+        if host is not None:
+            missing_methods = (
+                IWorkfileHost.get_missing_workfile_methods(host)
+            )
+            host_is_valid = len(missing_methods) == 0
+
+        self._host = host
+        self._host_is_valid = host_is_valid
+
+        self._project_anatomy = None
+        self._project_settings = None
+        self._event_system = None
+        self._log = None
+
+        self._current_project_name = None
+        self._current_folder_name = None
+        self._current_folder_id = None
+        self._current_task_name = None
+        self._save_is_enabled = True
+
+        # Expected selected folder and task
+        self._expected_selection = self._create_expected_selection_obj()
+
+        self._selection_model = self._create_selection_model()
+        self._entities_model = self._create_entities_model()
+        self._workfiles_model = self._create_workfiles_model()
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = Logger.get_logger("WorkfilesUI")
+        return self._log
+
+    def is_host_valid(self):
+        return self._host_is_valid
+
+    def _create_expected_selection_obj(self):
+        return ExpectedSelection()
+
+    def _create_selection_model(self):
+        return SelectionModel(self)
+
+    def _create_entities_model(self):
+        return EntitiesModel(self)
+
+    def _create_workfiles_model(self):
+        return WorkfilesModel(self)
+
+    @property
+    def event_system(self):
+        """Inner event system for workfiles tool controller.
+
+        It is used for communication with UI. The event system is created
+        on demand.
+
+        Returns:
+            QueuedEventSystem: Event system which can trigger callbacks
+                for topics.
+        """
+
+        if self._event_system is None:
+            self._event_system = QueuedEventSystem()
+        return self._event_system
+
+    # ----------------------------------------------------
+    # Implementation of methods required for backend logic
+    # ----------------------------------------------------
+    @property
+    def project_settings(self):
+        if self._project_settings is None:
+            self._project_settings = get_project_settings(
+                self.get_current_project_name())
+        return self._project_settings
+
+    @property
+    def project_anatomy(self):
+        if self._project_anatomy is None:
+            self._project_anatomy = Anatomy(self.get_current_project_name())
+        return self._project_anatomy
+
+    def get_project_entity(self):
+        return self._entities_model.get_project_entity()
+
+    def get_folder_entity(self, folder_id):
+        return self._entities_model.get_folder_entity(folder_id)
+
+    def get_task_entity(self, task_id):
+        return self._entities_model.get_task_entity(task_id)
+
+    # ----------------------------------
+    # Implementation of abstract methods
+    # ----------------------------------
+    def emit_event(self, topic, data=None, source=None):
+        """Use implemented event system to trigger event."""
+
+        if data is None:
+            data = {}
+        self.event_system.emit(topic, data, source)
+
+    def register_event_callback(self, topic, callback):
+        self.event_system.add_callback(topic, callback)
+
+    def is_save_enabled(self):
+        """Is workfile save enabled.
+
+        Returns:
+            bool: True if save is enabled.
+        """
+
+        return self._save_is_enabled
+
+    def set_save_enabled(self, enabled):
+        """Enable or disable workfile save.
+
+        Args:
+            enabled (bool): Enable save workfile when True.
+        """
+
+        if self._save_is_enabled == enabled:
+            return
+
+        self._save_is_enabled = enabled
+        self._emit_event(
+            "workfile_save_enable.changed",
+            {"enabled": enabled}
+        )
+
+    # Host information
+    def get_workfile_extensions(self):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            return host.get_workfile_extensions()
+        return host.file_extensions()
+
+    def has_unsaved_changes(self):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            return host.workfile_has_unsaved_changes()
+        return host.has_unsaved_changes()
+
+    # Current context
+    def get_host_name(self):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            return host.name
+        return get_current_host_name()
+
+    def _get_host_current_context(self):
+        if hasattr(self._host, "get_current_context"):
+            return self._host.get_current_context()
+        return get_global_context()
+
+    def get_current_project_name(self):
+        return self._current_project_name
+
+    def get_current_folder_id(self):
+        return self._current_folder_id
+
+    def get_current_task_name(self):
+        return self._current_task_name
+
+    def get_current_workfile(self):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            return host.get_current_workfile()
+        return host.current_file()
+
+    # Selection information
+    def get_selected_folder_id(self):
+        return self._selection_model.get_selected_folder_id()
+
+    def set_selected_folder(self, folder_id):
+        self._selection_model.set_selected_folder(folder_id)
+
+    def get_selected_task_id(self):
+        return self._selection_model.get_selected_task_id()
+
+    def get_selected_task_name(self):
+        return self._selection_model.get_selected_task_name()
+
+    def set_selected_task(self, folder_id, task_id, task_name):
+        return self._selection_model.set_selected_task(
+            folder_id, task_id, task_name)
+
+    def get_selected_workfile_path(self):
+        return self._selection_model.get_selected_workfile_path()
+
+    def 
set_selected_workfile_path(self, path): + self._selection_model.set_selected_workfile_path(path) + + def get_selected_representation_id(self): + return self._selection_model.get_selected_representation_id() + + def set_selected_representation_id(self, representation_id): + self._selection_model.set_selected_representation_id( + representation_id) + + def set_expected_selection( + self, + folder_id, + task_name, + workfile_name=None, + representation_id=None + ): + self._expected_selection.set_expected_selection( + folder_id, task_name, workfile_name, representation_id + ) + self._trigger_expected_selection_changed() + + def expected_folder_selected(self, folder_id): + if self._expected_selection.expected_folder_selected(folder_id): + self._trigger_expected_selection_changed() + + def expected_task_selected(self, folder_id, task_name): + if self._expected_selection.expected_task_selected( + folder_id, task_name + ): + self._trigger_expected_selection_changed() + + def expected_workfile_selected(self, folder_id, task_name, workfile_name): + if self._expected_selection.expected_workfile_selected( + folder_id, task_name, workfile_name + ): + self._trigger_expected_selection_changed() + + def expected_representation_selected( + self, folder_id, task_name, representation_id + ): + if self._expected_selection.expected_representation_selected( + folder_id, task_name, representation_id + ): + self._trigger_expected_selection_changed() + + def get_expected_selection_data(self): + return self._expected_selection.get_expected_selection_data() + + def go_to_current_context(self): + self.set_expected_selection( + self._current_folder_id, self._current_task_name + ) + + # Model functions + def get_folder_items(self, sender): + return self._entities_model.get_folder_items(sender) + + def get_task_items(self, folder_id, sender): + return self._entities_model.get_tasks_items(folder_id, sender) + + def get_workarea_dir_by_context(self, folder_id, task_id): + return self._workfiles_model.get_workarea_dir_by_context( + folder_id, task_id) + + def get_workarea_file_items(self, folder_id, task_id): + return self._workfiles_model.get_workarea_file_items( + folder_id, task_id) + + def get_workarea_save_as_data(self, folder_id, task_id): + return self._workfiles_model.get_workarea_save_as_data( + folder_id, task_id) + + def fill_workarea_filepath( + self, + folder_id, + task_id, + extension, + use_last_version, + version, + comment, + ): + return self._workfiles_model.fill_workarea_filepath( + folder_id, + task_id, + extension, + use_last_version, + version, + comment, + ) + + def get_published_file_items(self, folder_id, task_id): + task_name = None + if task_id: + task = self.get_task_entity(task_id) + task_name = task.get("name") + + return self._workfiles_model.get_published_file_items( + folder_id, task_name) + + def get_workfile_info(self, folder_id, task_id, filepath): + return self._workfiles_model.get_workfile_info( + folder_id, task_id, filepath + ) + + def save_workfile_info(self, folder_id, task_id, filepath, note): + self._workfiles_model.save_workfile_info( + folder_id, task_id, filepath, note + ) + + def refresh(self): + if not self._host_is_valid: + self._emit_event("controller.refresh.started") + self._emit_event("controller.refresh.finished") + return + expected_folder_id = self.get_selected_folder_id() + expected_task_name = self.get_selected_task_name() + + self._emit_event("controller.refresh.started") + + context = self._get_host_current_context() + + project_name = context["project_name"] + 
folder_name = context["asset_name"]
+        task_name = context["task_name"]
+        folder_id = None
+        if folder_name:
+            folder = ayon_api.get_folder_by_name(project_name, folder_name)
+            if folder:
+                folder_id = folder["id"]
+
+        self._project_settings = None
+        self._project_anatomy = None
+
+        self._current_project_name = project_name
+        self._current_folder_name = folder_name
+        self._current_folder_id = folder_id
+        self._current_task_name = task_name
+
+        if not expected_folder_id:
+            expected_folder_id = folder_id
+            expected_task_name = task_name
+
+        self._expected_selection.set_expected_selection(
+            expected_folder_id, expected_task_name
+        )
+
+        self._entities_model.refresh()
+
+        self._emit_event("controller.refresh.finished")
+
+    # Controller actions
+    def open_workfile(self, folder_id, task_id, filepath):
+        self._emit_event("open_workfile.started")
+
+        failed = False
+        try:
+            self._open_workfile(folder_id, task_id, filepath)
+
+        except Exception:
+            failed = True
+            self.log.warning("Open of workfile failed", exc_info=True)
+
+        self._emit_event(
+            "open_workfile.finished",
+            {"failed": failed},
+        )
+
+    def save_current_workfile(self):
+        current_file = self.get_current_workfile()
+        self._host_save_workfile(current_file)
+
+    def save_as_workfile(
+        self,
+        folder_id,
+        task_id,
+        workdir,
+        filename,
+        template_key,
+    ):
+        self._emit_event("save_as.started")
+
+        failed = False
+        try:
+            self._save_as_workfile(
+                folder_id,
+                task_id,
+                workdir,
+                filename,
+                template_key,
+            )
+        except Exception:
+            failed = True
+            self.log.warning("Save as failed", exc_info=True)
+
+        self._emit_event(
+            "save_as.finished",
+            {"failed": failed},
+        )
+
+    def copy_workfile_representation(
+        self,
+        representation_id,
+        representation_filepath,
+        folder_id,
+        task_id,
+        workdir,
+        filename,
+        template_key,
+    ):
+        self._emit_event("copy_representation.started")
+
+        failed = False
+        try:
+            # Pass the representation file as source so it is copied,
+            # otherwise the current workfile would be saved instead.
+            self._save_as_workfile(
+                folder_id,
+                task_id,
+                workdir,
+                filename,
+                template_key,
+                src_filepath=representation_filepath,
+            )
+        except Exception:
+            failed = True
+            self.log.warning(
+                "Copy of workfile representation failed", exc_info=True
+            )
+
+        self._emit_event(
+            "copy_representation.finished",
+            {"failed": failed},
+        )
+
+    def duplicate_workfile(self, src_filepath, workdir, filename):
+        self._emit_event("workfile_duplicate.started")
+
+        failed = False
+        try:
+            dst_filepath = os.path.join(workdir, filename)
+            shutil.copy(src_filepath, dst_filepath)
+        except Exception:
+            failed = True
+            self.log.warning("Duplication of workfile failed", exc_info=True)
+
+        self._emit_event(
+            "workfile_duplicate.finished",
+            {"failed": failed},
+        )
+
+    # Helper host methods that resolve 'IWorkfileHost' interface
+    def _host_open_workfile(self, filepath):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            host.open_workfile(filepath)
+        else:
+            host.open_file(filepath)
+
+    def _host_save_workfile(self, filepath):
+        host = self._host
+        if isinstance(host, IWorkfileHost):
+            host.save_workfile(filepath)
+        else:
+            host.save_file(filepath)
+
+    def _emit_event(self, topic, data=None):
+        self.emit_event(topic, data, "controller")
+
+    # Expected selection
+    # - expected selection is used to restore selection after refresh
+    # or when current context should be used
+    def _trigger_expected_selection_changed(self):
+        self._emit_event(
+            "expected_selection_changed",
+            self._expected_selection.get_expected_selection_data(),
+        )
+
+    def _get_event_context_data(
+        self, project_name, folder_id, task_id, folder=None, task=None
+    ):
+        if folder is None:
+            folder = self.get_folder_entity(folder_id)
+        
if task is None: + task = self.get_task_entity(task_id) + # NOTE keys should be OpenPype compatible + return { + "project_name": project_name, + "folder_id": folder_id, + "asset_id": folder_id, + "asset_name": folder["name"], + "task_id": task_id, + "task_name": task["name"], + "host_name": self.get_host_name(), + } + + def _open_workfile(self, folder_id, task_id, filepath): + project_name = self.get_current_project_name() + event_data = self._get_event_context_data( + project_name, folder_id, task_id + ) + event_data["filepath"] = filepath + + emit_event("workfile.open.before", event_data, source="workfiles.tool") + + # Change context + task_name = event_data["task_name"] + if ( + folder_id != self.get_current_folder_id() + or task_name != self.get_current_task_name() + ): + # Use OpenPype asset-like object + asset_doc = get_asset_by_id( + event_data["project_name"], + event_data["folder_id"], + ) + change_current_context( + asset_doc, + event_data["task_name"] + ) + + self._host_open_workfile(filepath) + + emit_event("workfile.open.after", event_data, source="workfiles.tool") + + def _save_as_workfile( + self, + folder_id, + task_id, + workdir, + filename, + template_key, + src_filepath=None, + ): + # Trigger before save event + project_name = self.get_current_project_name() + folder = self.get_folder_entity(folder_id) + task = self.get_task_entity(task_id) + task_name = task["name"] + + # QUESTION should the data be different for 'before' and 'after'? + event_data = self._get_event_context_data( + project_name, folder_id, task_id, folder, task + ) + event_data.update({ + "filename": filename, + "workdir_path": workdir, + }) + + emit_event("workfile.save.before", event_data, source="workfiles.tool") + + # Create workfiles root folder + if not os.path.exists(workdir): + self.log.debug("Initializing work directory: %s", workdir) + os.makedirs(workdir) + + # Change context + if ( + folder_id != self.get_current_folder_id() + or task_name != self.get_current_task_name() + ): + # Use OpenPype asset-like object + asset_doc = get_asset_by_id(project_name, folder["id"]) + change_current_context( + asset_doc, + task["name"], + template_key=template_key + ) + + # Save workfile + dst_filepath = os.path.join(workdir, filename) + if src_filepath: + shutil.copyfile(src_filepath, dst_filepath) + self._host_open_workfile(dst_filepath) + else: + self._host_save_workfile(dst_filepath) + + # Create extra folders + create_workdir_extra_folders( + workdir, + self.get_host_name(), + task["taskType"], + task_name, + project_name + ) + + # Trigger after save events + emit_event("workfile.save.after", event_data, source="workfiles.tool") + self.refresh() diff --git a/openpype/tools/ayon_workfiles/models/__init__.py b/openpype/tools/ayon_workfiles/models/__init__.py new file mode 100644 index 0000000000..d906b9e7bd --- /dev/null +++ b/openpype/tools/ayon_workfiles/models/__init__.py @@ -0,0 +1,10 @@ +from .hierarchy import EntitiesModel +from .selection import SelectionModel +from .workfiles import WorkfilesModel + + +__all__ = ( + "SelectionModel", + "EntitiesModel", + "WorkfilesModel", +) diff --git a/openpype/tools/ayon_workfiles/models/hierarchy.py b/openpype/tools/ayon_workfiles/models/hierarchy.py new file mode 100644 index 0000000000..a1d51525da --- /dev/null +++ b/openpype/tools/ayon_workfiles/models/hierarchy.py @@ -0,0 +1,236 @@ +"""Hierarchy model that handles folders and tasks. + +The model can be extracted for common usage. 
In that case it will be required
+to add more handling of project name changes.
+"""
+
+import time
+import collections
+import contextlib
+
+import ayon_api
+
+from openpype.tools.ayon_workfiles.abstract import (
+    FolderItem,
+    TaskItem,
+)
+
+
+def _get_task_items_from_tasks(tasks):
+    """Convert task entities to task items.
+
+    Args:
+        tasks (list[dict[str, Any]]): Task entities.
+
+    Returns:
+        list[TaskItem]: Task items.
+    """
+
+    output = []
+    for task in tasks:
+        folder_id = task["folderId"]
+        output.append(TaskItem(
+            task["id"],
+            task["name"],
+            task["type"],
+            folder_id,
+            None,
+            None
+        ))
+    return output
+
+
+def _get_folder_item_from_hierarchy_item(item):
+    return FolderItem(
+        item["id"],
+        item["parentId"],
+        item["name"],
+        item["label"],
+        None,
+        None,
+    )
+
+
+class CacheItem:
+    def __init__(self, lifetime=120):
+        self._lifetime = lifetime
+        self._last_update = None
+        self._data = None
+
+    @property
+    def is_valid(self):
+        if self._last_update is None:
+            return False
+
+        return (time.time() - self._last_update) < self._lifetime
+
+    def set_invalid(self, data=None):
+        self._last_update = None
+        self._data = data
+
+    def get_data(self):
+        return self._data
+
+    def update_data(self, data):
+        self._data = data
+        self._last_update = time.time()
+
+
+class EntitiesModel(object):
+    event_source = "entities.model"
+
+    def __init__(self, controller):
+        project_cache = CacheItem()
+        project_cache.set_invalid({})
+        folders_cache = CacheItem()
+        folders_cache.set_invalid({})
+        self._project_cache = project_cache
+        self._folders_cache = folders_cache
+        self._tasks_cache = {}
+
+        self._folders_by_id = {}
+        self._tasks_by_id = {}
+
+        self._folders_refreshing = False
+        self._tasks_refreshing = set()
+        self._controller = controller
+
+    def reset(self):
+        self._project_cache.set_invalid({})
+        self._folders_cache.set_invalid({})
+        self._tasks_cache = {}
+
+        self._folders_by_id = {}
+        self._tasks_by_id = {}
+
+    def refresh(self):
+        self._refresh_folders_cache()
+
+    def get_project_entity(self):
+        if not self._project_cache.is_valid:
+            project_name = self._controller.get_current_project_name()
+            project_entity = ayon_api.get_project(project_name)
+            self._project_cache.update_data(project_entity)
+        return self._project_cache.get_data()
+
+    def get_folder_items(self, sender):
+        if not self._folders_cache.is_valid:
+            self._refresh_folders_cache(sender)
+        return self._folders_cache.get_data()
+
+    def get_tasks_items(self, folder_id, sender):
+        if not folder_id:
+            return []
+
+        task_cache = self._tasks_cache.get(folder_id)
+        if task_cache is None or not task_cache.is_valid:
+            self._refresh_tasks_cache(folder_id, sender)
+            task_cache = self._tasks_cache.get(folder_id)
+        return task_cache.get_data()
+
+    def get_folder_entity(self, folder_id):
+        if folder_id not in self._folders_by_id:
+            entity = None
+            if folder_id:
+                project_name = self._controller.get_current_project_name()
+                entity = ayon_api.get_folder_by_id(project_name, folder_id)
+            self._folders_by_id[folder_id] = entity
+        return self._folders_by_id[folder_id]
+
+    def get_task_entity(self, task_id):
+        if task_id not in self._tasks_by_id:
+            entity = None
+            if task_id:
+                project_name = self._controller.get_current_project_name()
+                entity = ayon_api.get_task_by_id(project_name, task_id)
+            self._tasks_by_id[task_id] = entity
+        return self._tasks_by_id[task_id]
+
+    @contextlib.contextmanager
+    def _folder_refresh_event_manager(self, project_name, sender):
+        self._folders_refreshing = True
+        self._controller.emit_event(
+            "folders.refresh.started",
+            {"project_name": project_name, "sender": sender},
+            self.event_source
+        )
+        try:
+            yield
+
+        
finally:
+            self._controller.emit_event(
+                "folders.refresh.finished",
+                {"project_name": project_name, "sender": sender},
+                self.event_source
+            )
+            self._folders_refreshing = False
+
+    @contextlib.contextmanager
+    def _task_refresh_event_manager(
+        self, project_name, folder_id, sender
+    ):
+        self._tasks_refreshing.add(folder_id)
+        self._controller.emit_event(
+            "tasks.refresh.started",
+            {
+                "project_name": project_name,
+                "folder_id": folder_id,
+                "sender": sender,
+            },
+            self.event_source
+        )
+        try:
+            yield
+
+        finally:
+            self._controller.emit_event(
+                "tasks.refresh.finished",
+                {
+                    "project_name": project_name,
+                    "folder_id": folder_id,
+                    "sender": sender,
+                },
+                self.event_source
+            )
+            self._tasks_refreshing.discard(folder_id)
+
+    def _refresh_folders_cache(self, sender=None):
+        if self._folders_refreshing:
+            return
+        project_name = self._controller.get_current_project_name()
+        with self._folder_refresh_event_manager(project_name, sender):
+            folder_items = self._query_folders(project_name)
+            self._folders_cache.update_data(folder_items)
+
+    def _query_folders(self, project_name):
+        hierarchy = ayon_api.get_folders_hierarchy(project_name)
+
+        folder_items = {}
+        hierarchy_queue = collections.deque(hierarchy["hierarchy"])
+        while hierarchy_queue:
+            item = hierarchy_queue.popleft()
+            folder_item = _get_folder_item_from_hierarchy_item(item)
+            folder_items[folder_item.entity_id] = folder_item
+            hierarchy_queue.extend(item["children"] or [])
+        return folder_items
+
+    def _refresh_tasks_cache(self, folder_id, sender=None):
+        if folder_id in self._tasks_refreshing:
+            return
+
+        project_name = self._controller.get_current_project_name()
+        with self._task_refresh_event_manager(
+            project_name, folder_id, sender
+        ):
+            cache_item = self._tasks_cache.get(folder_id)
+            if cache_item is None:
+                cache_item = CacheItem()
+                self._tasks_cache[folder_id] = cache_item
+
+            task_items = self._query_tasks(project_name, folder_id)
+            cache_item.update_data(task_items)
+
+    def _query_tasks(self, project_name, folder_id):
+        tasks = list(ayon_api.get_tasks(
+            project_name,
+            folder_ids=[folder_id],
+            fields={"id", "name", "label", "folderId", "type"}
+        ))
+        return _get_task_items_from_tasks(tasks)
diff --git a/openpype/tools/ayon_workfiles/models/selection.py b/openpype/tools/ayon_workfiles/models/selection.py
new file mode 100644
index 0000000000..ad034794d8
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/models/selection.py
@@ -0,0 +1,91 @@
+class SelectionModel(object):
+    """Model handling selection changes.
+ + Triggering events: + - "selection.folder.changed" + - "selection.task.changed" + - "workarea.selection.changed" + - "selection.representation.changed" + """ + + event_source = "selection.model" + + def __init__(self, controller): + self._controller = controller + + self._folder_id = None + self._task_name = None + self._task_id = None + self._workfile_path = None + self._representation_id = None + + def get_selected_folder_id(self): + return self._folder_id + + def set_selected_folder(self, folder_id): + if folder_id == self._folder_id: + return + + self._folder_id = folder_id + self._controller.emit_event( + "selection.folder.changed", + {"folder_id": folder_id}, + self.event_source + ) + + def get_selected_task_name(self): + return self._task_name + + def get_selected_task_id(self): + return self._task_id + + def set_selected_task(self, folder_id, task_id, task_name): + if folder_id != self._folder_id: + self.set_selected_folder(folder_id) + + if task_id == self._task_id: + return + + self._task_name = task_name + self._task_id = task_id + self._controller.emit_event( + "selection.task.changed", + { + "folder_id": folder_id, + "task_name": task_name, + "task_id": task_id + }, + self.event_source + ) + + def get_selected_workfile_path(self): + return self._workfile_path + + def set_selected_workfile_path(self, path): + if path == self._workfile_path: + return + + self._workfile_path = path + self._controller.emit_event( + "workarea.selection.changed", + { + "path": path, + "folder_id": self._folder_id, + "task_name": self._task_name, + "task_id": self._task_id, + }, + self.event_source + ) + + def get_selected_representation_id(self): + return self._representation_id + + def set_selected_representation_id(self, representation_id): + if representation_id == self._representation_id: + return + self._representation_id = representation_id + self._controller.emit_event( + "selection.representation.changed", + {"representation_id": representation_id}, + self.event_source + ) diff --git a/openpype/tools/ayon_workfiles/models/workfiles.py b/openpype/tools/ayon_workfiles/models/workfiles.py new file mode 100644 index 0000000000..4d989ed22c --- /dev/null +++ b/openpype/tools/ayon_workfiles/models/workfiles.py @@ -0,0 +1,721 @@ +import os +import re +import copy + +import arrow +import ayon_api +from ayon_api.operations import OperationsSession + +from openpype.client import get_project +from openpype.client.operations import ( + prepare_workfile_info_update_data, +) +from openpype.pipeline.template_data import ( + get_template_data, +) +from openpype.pipeline.workfile import ( + get_workdir_with_workdir_data, + get_workfile_template_key, + get_last_workfile_with_version, +) +from openpype.pipeline.version_start import get_versioning_start +from openpype.tools.ayon_workfiles.abstract import ( + WorkareaFilepathResult, + FileItem, + WorkfileInfo, +) + + +def get_folder_template_data(folder): + if not folder: + return {} + parts = folder["path"].split("/") + parts.pop(-1) + hierarchy = "/".join(parts) + return { + "asset": folder["name"], + "folder": { + "name": folder["name"], + "type": folder["folderType"], + "path": folder["path"], + }, + "hierarchy": hierarchy, + } + + +def get_task_template_data(project_entity, task): + if not task: + return {} + short_name = None + task_type_name = task["taskType"] + for task_type_info in project_entity["taskTypes"]: + if task_type_info["name"] == task_type_name: + short_name = task_type_info["shortName"] + break + + return { + "task": { + "name": 
task["name"], + "type": task_type_name, + "short": short_name, + } + } + + +class CommentMatcher(object): + """Use anatomy and work file data to parse comments from filenames""" + def __init__(self, extensions, file_template, data): + self.fname_regex = None + + if "{comment}" not in file_template: + # Don't look for comment if template doesn't allow it + return + + # Create a regex group for extensions + any_extension = "(?:{})".format( + "|".join(re.escape(ext.lstrip(".")) for ext in extensions) + ) + + # Use placeholders that will never be in the filename + temp_data = copy.deepcopy(data) + temp_data["comment"] = "<>" + temp_data["version"] = "<>" + temp_data["ext"] = "<>" + + fname_pattern = file_template.format_strict(temp_data) + fname_pattern = re.escape(fname_pattern) + + # Replace comment and version with something we can match with regex + replacements = { + "<>": "(.+)", + "<>": "[0-9]+", + "<>": any_extension, + } + for src, dest in replacements.items(): + fname_pattern = fname_pattern.replace(re.escape(src), dest) + + # Match from beginning to end of string to be safe + fname_pattern = "^{}$".format(fname_pattern) + + self.fname_regex = re.compile(fname_pattern) + + def parse_comment(self, filepath): + """Parse the {comment} part from a filename""" + if not self.fname_regex: + return + + fname = os.path.basename(filepath) + match = self.fname_regex.match(fname) + if match: + return match.group(1) + + +class WorkareaModel: + """Workfiles model looking for workfiles in workare folder. + + Workarea folder is usually task and host specific, defined by + anatomy templates. Is looking for files with extensions defined + by host integration. + """ + + def __init__(self, controller): + self._controller = controller + extensions = None + if controller.is_host_valid(): + extensions = controller.get_workfile_extensions() + self._extensions = extensions + self._base_data = None + self._fill_data_by_folder_id = {} + self._task_data_by_folder_id = {} + self._workdir_by_context = {} + + @property + def project_name(self): + return self._controller.get_current_project_name() + + def reset(self): + self._base_data = None + self._fill_data_by_folder_id = {} + self._task_data_by_folder_id = {} + + def _get_base_data(self): + if self._base_data is None: + base_data = get_template_data(get_project(self.project_name)) + base_data["app"] = self._controller.get_host_name() + self._base_data = base_data + return copy.deepcopy(self._base_data) + + def _get_folder_data(self, folder_id): + fill_data = self._fill_data_by_folder_id.get(folder_id) + if fill_data is None: + folder = self._controller.get_folder_entity(folder_id) + fill_data = get_folder_template_data(folder) + self._fill_data_by_folder_id[folder_id] = fill_data + return copy.deepcopy(fill_data) + + def _get_task_data(self, project_entity, folder_id, task_id): + task_data = self._task_data_by_folder_id.setdefault(folder_id, {}) + if task_id not in task_data: + task = self._controller.get_task_entity(task_id) + if task: + task_data[task_id] = get_task_template_data( + project_entity, task) + return copy.deepcopy(task_data[task_id]) + + def _prepare_fill_data(self, folder_id, task_id): + if not folder_id or not task_id: + return {} + + base_data = self._get_base_data() + folder_data = self._get_folder_data(folder_id) + project_entity = self._controller.get_project_entity() + task_data = self._get_task_data(project_entity, folder_id, task_id) + + base_data.update(folder_data) + base_data.update(task_data) + + return base_data + + def 
get_workarea_dir_by_context(self, folder_id, task_id): + if not folder_id or not task_id: + return None + folder_mapping = self._workdir_by_context.setdefault(folder_id, {}) + workdir = folder_mapping.get(task_id) + if workdir is not None: + return workdir + + workdir_data = self._prepare_fill_data(folder_id, task_id) + + workdir = get_workdir_with_workdir_data( + workdir_data, + self.project_name, + anatomy=self._controller.project_anatomy, + ) + folder_mapping[task_id] = workdir + return workdir + + def get_file_items(self, folder_id, task_id): + items = [] + if not folder_id or not task_id: + return items + + workdir = self.get_workarea_dir_by_context(folder_id, task_id) + if not os.path.exists(workdir): + return items + + for filename in os.listdir(workdir): + filepath = os.path.join(workdir, filename) + if not os.path.isfile(filepath): + continue + + ext = os.path.splitext(filename)[1].lower() + if ext not in self._extensions: + continue + + modified = os.path.getmtime(filepath) + items.append( + FileItem(workdir, filename, modified) + ) + return items + + def _get_template_key(self, fill_data): + task_type = fill_data.get("task", {}).get("type") + # TODO cache + return get_workfile_template_key( + task_type, + self._controller.get_host_name(), + project_name=self.project_name + ) + + def _get_last_workfile_version( + self, workdir, file_template, fill_data, extensions + ): + version = get_last_workfile_with_version( + workdir, str(file_template), fill_data, extensions + )[1] + + if version is None: + task_info = fill_data.get("task", {}) + version = get_versioning_start( + self.project_name, + self._controller.get_host_name(), + task_name=task_info.get("name"), + task_type=task_info.get("type"), + family="workfile", + project_settings=self._controller.project_settings, + ) + else: + version += 1 + return version + + def _get_comments_from_root( + self, + file_template, + extensions, + fill_data, + root, + current_filename, + ): + current_comment = None + comment_hints = set() + filenames = [] + if root and os.path.exists(root): + for filename in os.listdir(root): + path = os.path.join(root, filename) + if not os.path.isfile(path): + continue + + ext = os.path.splitext(filename)[-1].lower() + if ext in extensions: + filenames.append(filename) + + if not filenames: + return comment_hints, current_comment + + matcher = CommentMatcher(extensions, file_template, fill_data) + + for filename in filenames: + comment = matcher.parse_comment(filename) + if comment: + comment_hints.add(comment) + if filename == current_filename: + current_comment = comment + + return list(comment_hints), current_comment + + def _get_workdir(self, anatomy, template_key, fill_data): + template_info = anatomy.templates_obj[template_key] + directory_template = template_info["folder"] + return directory_template.format_strict(fill_data).normalized() + + def get_workarea_save_as_data(self, folder_id, task_id): + folder = None + task = None + if folder_id: + folder = self._controller.get_folder_entity(folder_id) + if task_id: + task = self._controller.get_task_entity(task_id) + + if not folder or not task: + return { + "template_key": None, + "template_has_version": None, + "template_has_comment": None, + "ext": None, + "workdir": None, + "comment": None, + "comment_hints": None, + "last_version": None, + "extensions": None, + } + + anatomy = self._controller.project_anatomy + fill_data = self._prepare_fill_data(folder_id, task_id) + template_key = self._get_template_key(fill_data) + + current_workfile = 
self._controller.get_current_workfile()
+        current_filename = None
+        current_ext = None
+        if current_workfile:
+            current_filename = os.path.basename(current_workfile)
+            current_ext = os.path.splitext(current_filename)[1].lower()
+
+        extensions = self._extensions
+        if not current_ext and extensions:
+            current_ext = tuple(extensions)[0]
+
+        workdir = self._get_workdir(anatomy, template_key, fill_data)
+
+        template_info = anatomy.templates_obj[template_key]
+        file_template = template_info["file"]
+
+        comment_hints, comment = self._get_comments_from_root(
+            file_template,
+            extensions,
+            fill_data,
+            workdir,
+            current_filename,
+        )
+        last_version = self._get_last_workfile_version(
+            workdir, file_template, fill_data, extensions)
+        str_file_template = str(file_template)
+        template_has_version = "{version" in str_file_template
+        template_has_comment = "{comment" in str_file_template
+
+        return {
+            "template_key": template_key,
+            "template_has_version": template_has_version,
+            "template_has_comment": template_has_comment,
+            "ext": current_ext,
+            "workdir": workdir,
+            "comment": comment,
+            "comment_hints": comment_hints,
+            "last_version": last_version,
+            "extensions": extensions,
+        }
+
+    def fill_workarea_filepath(
+        self,
+        folder_id,
+        task_id,
+        extension,
+        use_last_version,
+        version,
+        comment,
+    ):
+        anatomy = self._controller.project_anatomy
+        fill_data = self._prepare_fill_data(folder_id, task_id)
+        template_key = self._get_template_key(fill_data)
+
+        workdir = self._get_workdir(anatomy, template_key, fill_data)
+
+        template_info = anatomy.templates_obj[template_key]
+        file_template = template_info["file"]
+
+        if use_last_version:
+            version = self._get_last_workfile_version(
+                workdir, file_template, fill_data, self._extensions
+            )
+        fill_data["version"] = version
+        fill_data["ext"] = extension.lstrip(".")
+
+        if comment:
+            fill_data["comment"] = comment
+
+        filename = file_template.format(fill_data)
+        if not filename.solved:
+            filename = None
+
+        exists = False
+        if filename:
+            filepath = os.path.join(workdir, filename)
+            exists = os.path.exists(filepath)
+
+        return WorkareaFilepathResult(
+            workdir,
+            filename,
+            exists
+        )
+
+
+class WorkfileEntitiesModel:
+    """Workfile entities model.
+
+    Args:
+        controller (AbstractWorkfilesBackend): Controller object.
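+
+    Example:
+        >>> # Illustrative ids and path; assumes a constructed model.
+        >>> info = model.get_workfile_info(folder_id, task_id, filepath)
+        >>> model.save_workfile_info(folder_id, task_id, filepath, "wip")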
+ """ + + def __init__(self, controller): + self._controller = controller + self._cache = {} + self._items = {} + + def _get_workfile_info_identifier( + self, folder_id, task_id, rootless_path + ): + return "_".join([folder_id, task_id, rootless_path]) + + def _get_rootless_path(self, filepath): + anatomy = self._controller.project_anatomy + + workdir, filename = os.path.split(filepath) + success, rootless_dir = anatomy.find_root_template_from_path(workdir) + return "/".join([ + os.path.normpath(rootless_dir).replace("\\", "/"), + filename + ]) + + def _prepare_workfile_info_item( + self, folder_id, task_id, workfile_info, filepath + ): + note = "" + if workfile_info: + note = workfile_info["attrib"].get("description") or "" + + filestat = os.stat(filepath) + return WorkfileInfo( + folder_id, + task_id, + filepath, + filesize=filestat.st_size, + creation_time=filestat.st_ctime, + modification_time=filestat.st_mtime, + note=note + ) + + def _get_workfile_info(self, folder_id, task_id, identifier): + workfile_info = self._cache.get(identifier) + if workfile_info is not None: + return workfile_info + + for workfile_info in ayon_api.get_workfiles_info( + self._controller.get_current_project_name(), + task_ids=[task_id], + fields=["id", "path", "attrib"], + ): + workfile_identifier = self._get_workfile_info_identifier( + folder_id, task_id, workfile_info["path"] + ) + self._cache[workfile_identifier] = workfile_info + return self._cache.get(identifier) + + def get_workfile_info( + self, folder_id, task_id, filepath, rootless_path=None + ): + if not folder_id or not task_id or not filepath: + return None + + if rootless_path is None: + rootless_path = self._get_rootless_path(filepath) + + identifier = self._get_workfile_info_identifier( + folder_id, task_id, rootless_path) + item = self._items.get(identifier) + if item is None: + workfile_info = self._get_workfile_info( + folder_id, task_id, identifier + ) + item = self._prepare_workfile_info_item( + folder_id, task_id, workfile_info, filepath + ) + self._items[identifier] = item + return item + + def save_workfile_info(self, folder_id, task_id, filepath, note): + rootless_path = self._get_rootless_path(filepath) + identifier = self._get_workfile_info_identifier( + folder_id, task_id, rootless_path + ) + workfile_info = self._get_workfile_info( + folder_id, task_id, identifier + ) + if not workfile_info: + self._cache[identifier] = self._create_workfile_info_entity( + task_id, rootless_path, note) + self._items.pop(identifier, None) + return + + new_workfile_info = copy.deepcopy(workfile_info) + attrib = new_workfile_info.setdefault("attrib", {}) + attrib["description"] = note + update_data = prepare_workfile_info_update_data( + workfile_info, new_workfile_info + ) + self._cache[identifier] = new_workfile_info + self._items.pop(identifier, None) + if not update_data: + return + + project_name = self._controller.get_current_project_name() + + session = OperationsSession() + session.update_entity( + project_name, "workfile", workfile_info["id"], update_data + ) + session.commit() + + def _create_workfile_info_entity(self, task_id, rootless_path, note): + extension = os.path.splitext(rootless_path)[1] + + project_name = self._controller.get_current_project_name() + + workfile_info = { + "path": rootless_path, + "taskId": task_id, + "attrib": { + "extension": extension, + "description": note + } + } + + session = OperationsSession() + session.create_entity(project_name, "workfile", workfile_info) + session.commit() + return workfile_info + + +class 
+class PublishWorkfilesModel:
+    """Model handling published workfiles.
+
+    Todos:
+        Cache workfile products and representations for some time.
+        Note: Representations won't change, only their versions can.
+    """
+
+    def __init__(self, controller):
+        self._controller = controller
+        self._cached_extensions = None
+        self._cached_repre_extensions = None
+
+    @property
+    def _extensions(self):
+        if self._cached_extensions is None:
+            exts = self._controller.get_workfile_extensions() or []
+            self._cached_extensions = exts
+        return self._cached_extensions
+
+    @property
+    def _repre_extensions(self):
+        if self._cached_repre_extensions is None:
+            self._cached_repre_extensions = {
+                ext.lstrip(".") for ext in self._extensions
+            }
+        return self._cached_repre_extensions
+
+    def _file_item_from_representation(
+        self, repre_entity, project_anatomy, task_name=None
+    ):
+        if task_name is not None:
+            task_info = repre_entity["context"].get("task")
+            if not task_info or task_info["name"] != task_name:
+                return None
+
+        # Filter by extension
+        extensions = self._repre_extensions
+        workfile_path = None
+        for repre_file in repre_entity["files"]:
+            ext = (
+                os.path.splitext(repre_file["name"])[1]
+                .lower()
+                .lstrip(".")
+            )
+            if ext in extensions:
+                workfile_path = repre_file["path"]
+                break
+
+        if not workfile_path:
+            return None
+
+        try:
+            workfile_path = workfile_path.format(
+                root=project_anatomy.roots)
+        except Exception as exc:
+            print("Failed to format workfile path: {}".format(exc))
+
+        dirpath, filename = os.path.split(workfile_path)
+        created_at = arrow.get(repre_entity["createdAt"])
+        return FileItem(
+            dirpath,
+            filename,
+            created_at.float_timestamp,
+            repre_entity["id"]
+        )
+
+    def get_file_items(self, folder_id, task_name):
+        # TODO refactor to use less server API calls
+        project_name = self._controller.get_current_project_name()
+        # Get workfile product entities of the folder
+        product_entities = ayon_api.get_products(
+            project_name,
+            folder_ids=[folder_id],
+            product_types=["workfile"],
+            fields=["id", "name"]
+        )
+
+        output = []
+        product_ids = {product["id"] for product in product_entities}
+        if not product_ids:
+            return output
+
+        # Get version entities of the products
+        version_entities = ayon_api.get_versions(
+            project_name,
+            product_ids=product_ids,
+            fields=["id", "productId"]
+        )
+        version_ids = {version["id"] for version in version_entities}
+        if not version_ids:
+            return output
+
+        # Query representations of the filtered versions
+        repre_entities = ayon_api.get_representations(
+            project_name,
+            version_ids=version_ids
+        )
+        project_anatomy = self._controller.project_anatomy
+
+        # Filter representations by extension, and by task name if set
+        for repre_entity in repre_entities:
+            file_item = self._file_item_from_representation(
+                repre_entity, project_anatomy, task_name
+            )
+            if file_item is not None:
+                output.append(file_item)
+
+        return output
+
+
+class WorkfilesModel:
+    """Workfiles model."""
+
+    def __init__(self, controller):
+        self._controller = controller
+
+        self._entities_model = WorkfileEntitiesModel(controller)
+        self._workarea_model = WorkareaModel(controller)
+        self._published_model = PublishWorkfilesModel(controller)
+
+    def get_workfile_info(self, folder_id, task_id, filepath):
+        return self._entities_model.get_workfile_info(
+            folder_id, task_id, filepath
+        )
+
+    def save_workfile_info(self, folder_id, task_id, filepath, note):
+        self._entities_model.save_workfile_info(
+            folder_id, task_id, filepath, note
+        )
+
+    def get_workarea_dir_by_context(self, folder_id, task_id):
+        """Workarea dir for passed context.
+
+        The directory path is based on project anatomy templates.
+
+        Args:
+            folder_id (str): Folder id.
+            task_id (str): Task id.
+
+        Returns:
+            Union[str, None]: Workarea dir path or None for invalid context.
+        """
+
+        return self._workarea_model.get_workarea_dir_by_context(
+            folder_id, task_id)
+
+    def get_workarea_file_items(self, folder_id, task_id):
+        """Workfile items for passed context from workarea.
+
+        Args:
+            folder_id (Union[str, None]): Folder id.
+            task_id (Union[str, None]): Task id.
+
+        Returns:
+            list[FileItem]: List of file items matching workarea of passed
+                context.
+        """
+
+        return self._workarea_model.get_file_items(folder_id, task_id)
+
+    def get_workarea_save_as_data(self, folder_id, task_id):
+        return self._workarea_model.get_workarea_save_as_data(
+            folder_id, task_id)
+
+    def fill_workarea_filepath(self, *args, **kwargs):
+        return self._workarea_model.fill_workarea_filepath(
+            *args, **kwargs
+        )
+
+    def get_published_file_items(self, folder_id, task_name):
+        """Published workfiles for passed context.
+
+        Args:
+            folder_id (str): Folder id.
+            task_name (str): Task name.
+
+        Returns:
+            list[FileItem]: List of files for published workfiles.
+        """
+
+        return self._published_model.get_file_items(folder_id, task_name)
diff --git a/openpype/tools/ayon_workfiles/widgets/__init__.py b/openpype/tools/ayon_workfiles/widgets/__init__.py
new file mode 100644
index 0000000000..f0c5ba1c40
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/__init__.py
@@ -0,0 +1,6 @@
+from .window import WorkfilesToolWindow
+
+
+__all__ = (
+    "WorkfilesToolWindow",
+)
diff --git a/openpype/tools/ayon_workfiles/widgets/constants.py b/openpype/tools/ayon_workfiles/widgets/constants.py
new file mode 100644
index 0000000000..fc74fd9bc4
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/constants.py
@@ -0,0 +1,7 @@
+from qtpy import QtCore
+
+
+ITEM_ID_ROLE = QtCore.Qt.UserRole + 1
+PARENT_ID_ROLE = QtCore.Qt.UserRole + 2
+ITEM_NAME_ROLE = QtCore.Qt.UserRole + 3
+TASK_TYPE_ROLE = QtCore.Qt.UserRole + 4
diff --git a/openpype/tools/ayon_workfiles/widgets/files_widget.py b/openpype/tools/ayon_workfiles/widgets/files_widget.py
new file mode 100644
index 0000000000..656ddf1dd8
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/files_widget.py
@@ -0,0 +1,407 @@
+import os
+
+import qtpy
+from qtpy import QtWidgets, QtCore
+
+from .save_as_dialog import SaveAsDialog
+from .files_widget_workarea import WorkAreaFilesWidget
+from .files_widget_published import PublishedFilesWidget
+
+
+class FilesWidget(QtWidgets.QWidget):
+    """A widget displaying files that allows saving and opening files.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+        parent (QtWidgets.QWidget): The parent widget.
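+
+    Example:
+        Minimal usage sketch; assumes an existing ``controller`` object
+        implementing AbstractWorkfilesFrontend and a parent Qt window
+        (names are illustrative)::
+
+            widget = FilesWidget(controller, window)
+            widget.set_published_mode(False)
+            widget.set_text_filter("comp")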
+ """ + + def __init__(self, controller, parent): + super(FilesWidget, self).__init__(parent) + + files_widget = QtWidgets.QStackedWidget(self) + workarea_widget = WorkAreaFilesWidget(controller, files_widget) + published_widget = PublishedFilesWidget(controller, files_widget) + files_widget.addWidget(workarea_widget) + files_widget.addWidget(published_widget) + + btns_widget = QtWidgets.QWidget(self) + + workarea_btns_widget = QtWidgets.QWidget(btns_widget) + workarea_btn_open = QtWidgets.QPushButton( + "Open", workarea_btns_widget) + workarea_btn_browse = QtWidgets.QPushButton( + "Browse", workarea_btns_widget) + workarea_btn_save = QtWidgets.QPushButton( + "Save As", workarea_btns_widget) + + workarea_btns_layout = QtWidgets.QHBoxLayout(workarea_btns_widget) + workarea_btns_layout.setContentsMargins(0, 0, 0, 0) + workarea_btns_layout.addWidget(workarea_btn_open, 1) + workarea_btns_layout.addWidget(workarea_btn_browse, 1) + workarea_btns_layout.addWidget(workarea_btn_save, 1) + + published_btns_widget = QtWidgets.QWidget(btns_widget) + published_btn_copy_n_open = QtWidgets.QPushButton( + "Copy && Open", published_btns_widget + ) + published_btn_change_context = QtWidgets.QPushButton( + "Choose different context", published_btns_widget + ) + published_btn_cancel = QtWidgets.QPushButton( + "Cancel", published_btns_widget + ) + + published_btns_layout = QtWidgets.QHBoxLayout(published_btns_widget) + published_btns_layout.setContentsMargins(0, 0, 0, 0) + published_btns_layout.addWidget(published_btn_copy_n_open, 1) + published_btns_layout.addWidget(published_btn_change_context, 1) + published_btns_layout.addWidget(published_btn_cancel, 1) + + btns_layout = QtWidgets.QVBoxLayout(btns_widget) + btns_layout.setContentsMargins(0, 0, 0, 0) + btns_layout.addWidget(workarea_btns_widget, 1) + btns_layout.addWidget(published_btns_widget, 1) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(files_widget, 1) + main_layout.addWidget(btns_widget, 0) + + controller.register_event_callback( + "workarea.selection.changed", + self._on_workarea_path_changed + ) + controller.register_event_callback( + "selection.representation.changed", + self._on_published_repre_changed + ) + controller.register_event_callback( + "selection.task.changed", + self._on_task_changed + ) + controller.register_event_callback( + "copy_representation.finished", + self._on_copy_representation_finished, + ) + controller.register_event_callback( + "workfile_save_enable.changed", + self._on_workfile_save_enabled_change, + ) + + workarea_widget.open_current_requested.connect( + self._on_current_open_requests) + workarea_widget.duplicate_requested.connect( + self._on_duplicate_request) + workarea_btn_open.clicked.connect(self._on_workarea_open_clicked) + workarea_btn_browse.clicked.connect(self._on_workarea_browse_clicked) + workarea_btn_save.clicked.connect(self._on_workarea_save_clicked) + + published_widget.save_as_requested.connect(self._on_save_as_request) + published_btn_copy_n_open.clicked.connect( + self._on_published_save_clicked) + published_btn_change_context.clicked.connect( + self._on_published_change_context_clicked) + published_btn_cancel.clicked.connect( + self._on_published_cancel_clicked) + + self._selected_folder_id = None + self._selected_task_id = None + self._selected_task_name = None + + self._pre_select_folder_id = None + self._pre_select_task_name = None + + self._select_context_mode = False + self._valid_selected_context = False + 
+        self._valid_representation_id = False
+        self._tmp_text_filter = None
+        self._is_save_enabled = True
+
+        self._controller = controller
+        self._files_widget = files_widget
+        self._workarea_widget = workarea_widget
+        self._published_widget = published_widget
+        self._workarea_btns_widget = workarea_btns_widget
+        self._published_btns_widget = published_btns_widget
+
+        self._workarea_btn_open = workarea_btn_open
+        self._workarea_btn_browse = workarea_btn_browse
+        self._workarea_btn_save = workarea_btn_save
+
+        self._published_btn_copy_n_open = published_btn_copy_n_open
+        self._published_btn_change_context = published_btn_change_context
+        self._published_btn_cancel = published_btn_cancel
+
+        # Initial setup
+        workarea_btn_open.setEnabled(False)
+        published_btn_copy_n_open.setEnabled(False)
+        published_btn_change_context.setEnabled(False)
+        published_btn_cancel.setVisible(False)
+
+    def set_published_mode(self, published_mode):
+        # Make sure context selection is disabled
+        self._set_select_context_mode(False)
+        # Change current widget
+        self._files_widget.setCurrentWidget(
+            self._published_widget
+            if published_mode
+            else self._workarea_widget
+        )
+        # Pass the mode to the widgets so they can start/stop handling events
+        self._workarea_widget.set_published_mode(published_mode)
+        self._published_widget.set_published_mode(published_mode)
+
+        # Change available buttons
+        self._workarea_btns_widget.setVisible(not published_mode)
+        self._published_btns_widget.setVisible(published_mode)
+
+    def set_text_filter(self, text_filter):
+        if self._select_context_mode:
+            self._tmp_text_filter = text_filter
+            return
+        self._workarea_widget.set_text_filter(text_filter)
+        self._published_widget.set_text_filter(text_filter)
+
+    def _exec_save_as_dialog(self):
+        """Show SaveAs dialog using currently selected context.
+
+        Returns:
+            Union[dict[str, Any], None]: Result of the dialog.
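+                The result contains the keys "folder_id", "task_id",
+                "workdir", "filename" and "template_key". None is
+                returned when the dialog was cancelled.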
+ """ + + dialog = SaveAsDialog(self._controller, self) + dialog.update_context() + dialog.exec_() + return dialog.get_result() + + # ------------------------------------------------------------- + # Workarea workfiles + # ------------------------------------------------------------- + def _open_workfile(self, folder_id, task_name, filepath): + if self._controller.has_unsaved_changes(): + result = self._save_changes_prompt() + if result is None: + return + + if result: + self._controller.save_current_workfile() + self._controller.open_workfile(folder_id, task_name, filepath) + + def _on_workarea_open_clicked(self): + path = self._workarea_widget.get_selected_path() + if not path: + return + folder_id = self._selected_folder_id + task_id = self._selected_task_id + self._open_workfile(folder_id, task_id, path) + + def _on_current_open_requests(self): + self._on_workarea_open_clicked() + + def _on_duplicate_request(self): + filepath = self._workarea_widget.get_selected_path() + if filepath is None: + return + + result = self._exec_save_as_dialog() + if result is None: + return + self._controller.duplicate_workfile( + filepath, + result["workdir"], + result["filename"] + ) + + def _on_workarea_browse_clicked(self): + extnsions = self._controller.get_workfile_extensions() + ext_filter = "Work File (*{0})".format( + " *".join(extnsions) + ) + dir_key = "directory" + if qtpy.API in ("pyside", "pyside2", "pyside6"): + dir_key = "dir" + + selected_context = self._controller.get_selected_context() + workfile_root = self._controller.get_workarea_dir_by_context( + selected_context["folder_id"], selected_context["task_id"] + ) + # Find existing directory of workfile root + # - Qt will use 'cwd' instead, if path does not exist, which may lead + # to igniter directory + while workfile_root: + if os.path.exists(workfile_root): + break + workfile_root = os.path.dirname(workfile_root) + + kwargs = { + "caption": "Work Files", + "filter": ext_filter, + dir_key: workfile_root + } + + filepath = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0] + if not filepath: + return + + folder_id = self._selected_folder_id + task_id = self._selected_task_id + self._open_workfile(folder_id, task_id, filepath) + + def _on_workarea_save_clicked(self): + result = self._exec_save_as_dialog() + if result is None: + return + self._controller.save_as_workfile( + result["folder_id"], + result["task_id"], + result["workdir"], + result["filename"], + result["template_key"], + ) + + def _on_workarea_path_changed(self, event): + valid_path = event["path"] is not None + self._workarea_btn_open.setEnabled(valid_path) + + # ------------------------------------------------------------- + # Published workfiles + # ------------------------------------------------------------- + def _update_published_btns_state(self): + enabled = ( + self._valid_representation_id + and self._valid_selected_context + and self._is_save_enabled + ) + self._published_btn_copy_n_open.setEnabled(enabled) + self._published_btn_change_context.setEnabled(enabled) + + def _update_workarea_btns_state(self): + enabled = self._is_save_enabled + self._workarea_btn_save.setEnabled(enabled) + + def _on_published_repre_changed(self, event): + self._valid_representation_id = event["representation_id"] is not None + self._update_published_btns_state() + + def _on_task_changed(self, event): + self._selected_folder_id = event["folder_id"] + self._selected_task_id = event["task_id"] + self._selected_task_name = event["task_name"] + self._valid_selected_context = ( + 
+        self._valid_selected_context = (
+            self._selected_folder_id is not None
+            and self._selected_task_id is not None
+        )
+        self._update_published_btns_state()
+
+    def _on_published_save_clicked(self):
+        result = self._exec_save_as_dialog()
+        if result is None:
+            return
+
+        repre_info = self._published_widget.get_selected_repre_info()
+        self._controller.copy_workfile_representation(
+            repre_info["representation_id"],
+            repre_info["filepath"],
+            result["folder_id"],
+            result["task_id"],
+            result["workdir"],
+            result["filename"],
+            result["template_key"],
+        )
+
+    def _on_save_as_request(self):
+        self._on_published_save_clicked()
+
+    def _set_select_context_mode(self, enabled):
+        if self._select_context_mode is enabled:
+            return
+
+        if enabled:
+            self._pre_select_folder_id = self._selected_folder_id
+            self._pre_select_task_name = self._selected_task_name
+        else:
+            self._pre_select_folder_id = None
+            self._pre_select_task_name = None
+        self._select_context_mode = enabled
+        self._published_btn_cancel.setVisible(enabled)
+        self._published_btn_change_context.setVisible(not enabled)
+        self._published_widget.set_select_context_mode(enabled)
+
+        if not enabled and self._tmp_text_filter is not None:
+            self.set_text_filter(self._tmp_text_filter)
+            self._tmp_text_filter = None
+
+    def _on_published_change_context_clicked(self):
+        self._set_select_context_mode(True)
+
+    def _should_set_pre_select_context(self):
+        if self._pre_select_folder_id is None:
+            return False
+        if self._pre_select_folder_id != self._selected_folder_id:
+            return True
+        if self._pre_select_task_name is None:
+            return False
+        return self._pre_select_task_name != self._selected_task_name
+
+    def _on_published_cancel_clicked(self):
+        folder_id = self._pre_select_folder_id
+        task_name = self._pre_select_task_name
+        representation_id = self._published_widget.get_selected_repre_id()
+        should_change_selection = self._should_set_pre_select_context()
+        self._set_select_context_mode(False)
+        if should_change_selection:
+            self._controller.set_expected_selection(
+                folder_id, task_name, representation_id=representation_id
+            )
+
+    def _on_copy_representation_finished(self, event):
+        """Callback triggered when a representation copy finishes.
+
+        Make sure that select context mode is disabled when the
+        representation copy is finished.
+
+        Args:
+            event (Event): Event object.
+        """
+
+        if not event["failed"]:
+            self._set_select_context_mode(False)
+
+    def _on_workfile_save_enabled_change(self, event):
+        enabled = event["enabled"]
+        self._is_save_enabled = enabled
+        self._update_published_btns_state()
+        self._update_workarea_btns_state()
+
+    def _save_changes_prompt(self):
+        """Ask the user whether to save changes to the current file.
+
+        Returns:
+            Union[bool, None]: True if the user wants to save changes,
+                False if not, None if the user cancels the operation.
+        """
+        messagebox = QtWidgets.QMessageBox(parent=self)
+        messagebox.setWindowFlags(
+            messagebox.windowFlags() | QtCore.Qt.FramelessWindowHint
+        )
+        messagebox.setIcon(QtWidgets.QMessageBox.Warning)
+        messagebox.setWindowTitle("Unsaved Changes!")
+        messagebox.setText(
+            "There are unsaved changes to the current file."
+            "\nDo you want to save the changes?"
+        )
+        messagebox.setStandardButtons(
+            QtWidgets.QMessageBox.Yes
+            | QtWidgets.QMessageBox.No
+            | QtWidgets.QMessageBox.Cancel
+        )
+
+        result = messagebox.exec_()
+        if result == QtWidgets.QMessageBox.Yes:
+            return True
+        if result == QtWidgets.QMessageBox.No:
+            return False
+        return None
diff --git a/openpype/tools/ayon_workfiles/widgets/files_widget_published.py b/openpype/tools/ayon_workfiles/widgets/files_widget_published.py
new file mode 100644
index 0000000000..576cf18d73
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/files_widget_published.py
@@ -0,0 +1,380 @@
+import qtawesome
+from qtpy import QtWidgets, QtCore, QtGui
+
+from openpype.style import (
+    get_default_entity_icon_color,
+    get_disabled_entity_icon_color,
+)
+from openpype.tools.utils import TreeView
+from openpype.tools.utils.delegates import PrettyTimeDelegate
+
+from .utils import BaseOverlayFrame
+
+
+REPRE_ID_ROLE = QtCore.Qt.UserRole + 1
+FILEPATH_ROLE = QtCore.Qt.UserRole + 2
+DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3
+
+
+class PublishedFilesModel(QtGui.QStandardItemModel):
+    """A model for displaying files.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+    """
+
+    def __init__(self, controller):
+        super(PublishedFilesModel, self).__init__()
+
+        self.setColumnCount(2)
+
+        self.setHeaderData(0, QtCore.Qt.Horizontal, "Name")
+        self.setHeaderData(1, QtCore.Qt.Horizontal, "Date Modified")
+
+        controller.register_event_callback(
+            "selection.task.changed",
+            self._on_task_changed
+        )
+        controller.register_event_callback(
+            "selection.folder.changed",
+            self._on_folder_changed
+        )
+
+        self._file_icon = qtawesome.icon(
+            "fa.file-o",
+            color=get_default_entity_icon_color()
+        )
+        self._controller = controller
+        self._items_by_id = {}
+        self._missing_context_item = None
+        self._missing_context_used = False
+        self._empty_root_item = None
+        self._empty_item_used = False
+
+        self._published_mode = False
+        self._context_select_mode = False
+
+        self._last_folder_id = None
+        self._last_task_id = None
+
+        self._add_empty_item()
+
+    def _clear_items(self):
+        self._remove_missing_context_item()
+        self._remove_empty_item()
+        if self._items_by_id:
+            root = self.invisibleRootItem()
+            root.removeRows(0, root.rowCount())
+            self._items_by_id = {}
+
+    def set_published_mode(self, published_mode):
+        if self._published_mode == published_mode:
+            return
+        self._published_mode = published_mode
+        if published_mode:
+            self._fill_items()
+        elif self._context_select_mode:
+            self.set_select_context_mode(False)
+
+    def set_select_context_mode(self, select_mode):
+        if self._context_select_mode is select_mode:
+            return
+        self._context_select_mode = select_mode
+        if not select_mode and self._published_mode:
+            self._fill_items()
+
+    def get_index_by_representation_id(self, representation_id):
+        item = self._items_by_id.get(representation_id)
+        if item is None:
+            return QtCore.QModelIndex()
+        return self.indexFromItem(item)
+
+    def _get_missing_context_item(self):
+        if self._missing_context_item is None:
+            message = "Select folder"
+            item = QtGui.QStandardItem(message)
+            icon = qtawesome.icon(
+                "fa.times",
+                color=get_disabled_entity_icon_color()
+            )
+            item.setData(icon, QtCore.Qt.DecorationRole)
+            item.setFlags(QtCore.Qt.NoItemFlags)
+            item.setColumnCount(self.columnCount())
+            self._missing_context_item = item
+        return self._missing_context_item
+
+    def _add_missing_context_item(self):
+        if self._missing_context_used:
+            return
+        self._clear_items()
+        root_item = self.invisibleRootItem()
+        root_item.appendRow(self._get_missing_context_item())
+        self._missing_context_used = True
+
+    def _remove_missing_context_item(self):
+        if not self._missing_context_used:
+            return
+        root_item = self.invisibleRootItem()
+        root_item.takeRow(self._missing_context_item.row())
+        self._missing_context_used = False
+
+    def _get_empty_root_item(self):
+        if self._empty_root_item is None:
+            message = "Didn't find any published workfiles."
+            item = QtGui.QStandardItem(message)
+            icon = qtawesome.icon(
+                "fa.times",
+                color=get_disabled_entity_icon_color()
+            )
+            item.setData(icon, QtCore.Qt.DecorationRole)
+            item.setFlags(QtCore.Qt.NoItemFlags)
+            item.setColumnCount(self.columnCount())
+            self._empty_root_item = item
+        return self._empty_root_item
+
+    def _add_empty_item(self):
+        if self._empty_item_used:
+            return
+        self._clear_items()
+        root_item = self.invisibleRootItem()
+        root_item.appendRow(self._get_empty_root_item())
+        self._empty_item_used = True
+
+    def _remove_empty_item(self):
+        if not self._empty_item_used:
+            return
+        root_item = self.invisibleRootItem()
+        root_item.takeRow(self._empty_root_item.row())
+        self._empty_item_used = False
+
+    def _on_folder_changed(self, event):
+        self._last_folder_id = event["folder_id"]
+        self._last_task_id = None
+        if self._context_select_mode:
+            return
+
+        if self._published_mode:
+            self._fill_items()
+
+    def _on_task_changed(self, event):
+        self._last_folder_id = event["folder_id"]
+        self._last_task_id = event["task_id"]
+        if self._context_select_mode:
+            return
+
+        if self._published_mode:
+            self._fill_items()
+
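+    # Existing rows are reused by representation id, so a refresh does
+    #   not recreate items that are still valid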
+    def _fill_items(self):
+        folder_id = self._last_folder_id
+        task_id = self._last_task_id
+        if not folder_id:
+            self._add_missing_context_item()
+            return
+
+        file_items = self._controller.get_published_file_items(
+            folder_id, task_id
+        )
+        root_item = self.invisibleRootItem()
+        if not file_items:
+            self._add_empty_item()
+            return
+        self._remove_empty_item()
+        self._remove_missing_context_item()
+
+        items_to_remove = set(self._items_by_id.keys())
+        new_items = []
+        for file_item in file_items:
+            repre_id = file_item.representation_id
+            if repre_id in self._items_by_id:
+                items_to_remove.discard(repre_id)
+                item = self._items_by_id[repre_id]
+            else:
+                item = QtGui.QStandardItem()
+                new_items.append(item)
+                item.setColumnCount(self.columnCount())
+                item.setData(self._file_icon, QtCore.Qt.DecorationRole)
+            item.setData(file_item.filename, QtCore.Qt.DisplayRole)
+            item.setData(repre_id, REPRE_ID_ROLE)
+
+            if file_item.exists:
+                flags = QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
+            else:
+                flags = QtCore.Qt.NoItemFlags
+
+            item.setFlags(flags)
+            item.setData(file_item.filepath, FILEPATH_ROLE)
+            item.setData(file_item.modified, DATE_MODIFIED_ROLE)
+
+            self._items_by_id[repre_id] = item
+
+        if new_items:
+            root_item.appendRows(new_items)
+
+        for repre_id in items_to_remove:
+            item = self._items_by_id.pop(repre_id)
+            root_item.removeRow(item.row())
+
+        if root_item.rowCount() == 0:
+            self._add_empty_item()
+
+    def flags(self, index):
+        # Use flags of the first column for all columns
+        if index.column() != 0:
+            index = self.index(index.row(), 0, index.parent())
+        return super(PublishedFilesModel, self).flags(index)
+
+    def data(self, index, role=None):
+        if role is None:
+            role = QtCore.Qt.DisplayRole
+
+        # Handle roles for the date modified (second) column
+        if index.column() == 1:
+            if role == QtCore.Qt.DecorationRole:
+                return None
+
+            if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole):
+                role = DATE_MODIFIED_ROLE
+            index = self.index(index.row(), 0, index.parent())
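+
+        # All values of a row are stored on the first-column item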
+        return super(PublishedFilesModel, self).data(index, role)
+
+
+class SelectContextOverlay(BaseOverlayFrame):
+    """Overlay for the files view when the user should select a context.
+
+    Todos:
+        The look of this overlay should be improved, it is "not nice" now.
+    """
+
+    def __init__(self, parent):
+        super(SelectContextOverlay, self).__init__(parent)
+
+        label_widget = QtWidgets.QLabel(
+            "Please choose context on the left",
+            self
+        )
<", + self + ) + label_widget.setAlignment(QtCore.Qt.AlignCenter) + label_widget.setObjectName("OverlayFrameLabel") + + layout = QtWidgets.QHBoxLayout(self) + layout.addWidget(label_widget, 1, QtCore.Qt.AlignCenter) + + label_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + +class PublishedFilesWidget(QtWidgets.QWidget): + """Published workfiles widget. + + Args: + controller (AbstractWorkfilesFrontend): The control object. + parent (QtWidgets.QWidget): The parent widget. + """ + + selection_changed = QtCore.Signal() + save_as_requested = QtCore.Signal() + + def __init__(self, controller, parent): + super(PublishedFilesWidget, self).__init__(parent) + + view = TreeView(self) + view.setSortingEnabled(True) + view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + # Smaller indentation + view.setIndentation(0) + + model = PublishedFilesModel(controller) + proxy_model = QtCore.QSortFilterProxyModel() + proxy_model.setSourceModel(model) + proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + proxy_model.setDynamicSortFilter(True) + + view.setModel(proxy_model) + + time_delegate = PrettyTimeDelegate() + view.setItemDelegateForColumn(1, time_delegate) + + # Default to a wider first filename column it is what we mostly care + # about and the date modified is relatively small anyway. + view.setColumnWidth(0, 330) + + select_overlay = SelectContextOverlay(view) + select_overlay.setVisible(False) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(view, 1) + + selection_model = view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + view.double_clicked.connect(self._on_mouse_double_click) + + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + self._view = view + self._select_overlay = select_overlay + self._model = model + self._proxy_model = proxy_model + self._time_delegate = time_delegate + self._controller = controller + + def set_published_mode(self, published_mode): + self._model.set_published_mode(published_mode) + + def set_select_context_mode(self, select_mode): + self._model.set_select_context_mode(select_mode) + self._select_overlay.setVisible(select_mode) + + def set_text_filter(self, text_filter): + self._proxy_model.setFilterFixedString(text_filter) + + def get_selected_repre_info(self): + selection_model = self._view.selectionModel() + representation_id = None + filepath = None + for index in selection_model.selectedIndexes(): + representation_id = index.data(REPRE_ID_ROLE) + filepath = index.data(FILEPATH_ROLE) + + return { + "representation_id": representation_id, + "filepath": filepath, + } + + def get_selected_repre_id(self): + return self.get_selected_repre_info()["representation_id"] + + def _on_selection_change(self): + repre_id = self.get_selected_repre_id() + self._controller.set_selected_representation_id(repre_id) + + def _on_mouse_double_click(self, event): + if event.button() == QtCore.Qt.LeftButton: + self.save_as_requested.emit() + + def _on_expected_selection_change(self, event): + if ( + event["representation_id_selected"] + or not event["folder_selected"] + or (event["task_name"] and not event["task_selected"]) + ): + return + + representation_id = event["representation_id"] + selected_repre_id = self.get_selected_repre_id() + if ( + representation_id is not None + and representation_id != selected_repre_id + ): + index = self._model.get_index_by_representation_id( + representation_id) 
+            if index.isValid():
+                proxy_index = self._proxy_model.mapFromSource(index)
+                self._view.setCurrentIndex(proxy_index)
+
+        self._controller.expected_representation_selected(
+            event["folder_id"], event["task_name"], representation_id
+        )
diff --git a/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py b/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py
new file mode 100644
index 0000000000..e59b319459
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py
@@ -0,0 +1,380 @@
+import qtawesome
+from qtpy import QtWidgets, QtCore, QtGui
+
+from openpype.style import (
+    get_default_entity_icon_color,
+    get_disabled_entity_icon_color,
+)
+from openpype.tools.utils import TreeView
+from openpype.tools.utils.delegates import PrettyTimeDelegate
+
+FILENAME_ROLE = QtCore.Qt.UserRole + 1
+FILEPATH_ROLE = QtCore.Qt.UserRole + 2
+DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3
+
+
+class WorkAreaFilesModel(QtGui.QStandardItemModel):
+    """A model for workarea workfiles.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+    """
+
+    def __init__(self, controller):
+        super(WorkAreaFilesModel, self).__init__()
+
+        self.setColumnCount(2)
+
+        self.setHeaderData(0, QtCore.Qt.Horizontal, "Name")
+        self.setHeaderData(1, QtCore.Qt.Horizontal, "Date Modified")
+
+        controller.register_event_callback(
+            "selection.task.changed",
+            self._on_task_changed
+        )
+        controller.register_event_callback(
+            "workfile_duplicate.finished",
+            self._on_duplicate_finished
+        )
+        controller.register_event_callback(
+            "save_as.finished",
+            self._on_save_as_finished
+        )
+
+        self._file_icon = qtawesome.icon(
+            "fa.file-o",
+            color=get_default_entity_icon_color()
+        )
+        self._controller = controller
+        self._items_by_filename = {}
+        self._missing_context_item = None
+        self._missing_context_used = False
+        self._empty_root_item = None
+        self._empty_item_used = False
+        self._published_mode = False
+        self._selected_folder_id = None
+        self._selected_task_id = None
+
+        self._add_missing_context_item()
+
+    def get_index_by_filename(self, filename):
+        item = self._items_by_filename.get(filename)
+        if item is None:
+            return QtCore.QModelIndex()
+        return self.indexFromItem(item)
+
+    def _get_missing_context_item(self):
+        if self._missing_context_item is None:
+            message = "Select folder and task"
+            item = QtGui.QStandardItem(message)
+            icon = qtawesome.icon(
+                "fa.times",
+                color=get_disabled_entity_icon_color()
+            )
+            item.setData(icon, QtCore.Qt.DecorationRole)
+            item.setFlags(QtCore.Qt.NoItemFlags)
+            item.setColumnCount(self.columnCount())
+            self._missing_context_item = item
+        return self._missing_context_item
+
+    def _clear_items(self):
+        self._remove_missing_context_item()
+        self._remove_empty_item()
+        if self._items_by_filename:
+            root = self.invisibleRootItem()
+            root.removeRows(0, root.rowCount())
+            self._items_by_filename = {}
+
+    def _add_missing_context_item(self):
+        if self._missing_context_used:
+            return
+        self._clear_items()
+        root_item = self.invisibleRootItem()
+        root_item.appendRow(self._get_missing_context_item())
+        self._missing_context_used = True
+
+    def _remove_missing_context_item(self):
+        if not self._missing_context_used:
+            return
+        root_item = self.invisibleRootItem()
+        root_item.takeRow(self._missing_context_item.row())
+        self._missing_context_used = False
+
+    def _get_empty_root_item(self):
+        if self._empty_root_item is None:
+            message = "Work Area is empty."
+            item = QtGui.QStandardItem(message)
+            icon = qtawesome.icon(
+                "fa.exclamation-circle",
+                color=get_disabled_entity_icon_color()
+            )
+            item.setData(icon, QtCore.Qt.DecorationRole)
+            item.setFlags(QtCore.Qt.NoItemFlags)
+            item.setColumnCount(self.columnCount())
+            self._empty_root_item = item
+        return self._empty_root_item
+
+    def _add_empty_item(self):
+        if self._empty_item_used:
+            return
+        self._clear_items()
+        root_item = self.invisibleRootItem()
+        root_item.appendRow(self._get_empty_root_item())
+        self._empty_item_used = True
+
+    def _remove_empty_item(self):
+        if not self._empty_item_used:
+            return
+        root_item = self.invisibleRootItem()
+        root_item.takeRow(self._empty_root_item.row())
+        self._empty_item_used = False
+
+    def _on_task_changed(self, event):
+        self._selected_folder_id = event["folder_id"]
+        self._selected_task_id = event["task_id"]
+        if not self._published_mode:
+            self._fill_items()
+
+    def _on_duplicate_finished(self, event):
+        if event["failed"]:
+            return
+
+        if not self._published_mode:
+            self._fill_items()
+
+    def _on_save_as_finished(self, event):
+        if event["failed"]:
+            return
+
+        if not self._published_mode:
+            self._fill_items()
+
+    def _fill_items(self):
+        folder_id = self._selected_folder_id
+        task_id = self._selected_task_id
+        if not folder_id or not task_id:
+            self._add_missing_context_item()
+            return
+
+        file_items = self._controller.get_workarea_file_items(
+            folder_id, task_id
+        )
+        root_item = self.invisibleRootItem()
+        if not file_items:
+            self._add_empty_item()
+            return
+        self._remove_empty_item()
+        self._remove_missing_context_item()
+
+        items_to_remove = set(self._items_by_filename.keys())
+        new_items = []
+        for file_item in file_items:
+            filename = file_item.filename
+            if filename in self._items_by_filename:
+                items_to_remove.discard(filename)
+                item = self._items_by_filename[filename]
+            else:
+                item = QtGui.QStandardItem()
+                new_items.append(item)
+                item.setColumnCount(self.columnCount())
+                item.setFlags(
+                    QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
+                )
+                item.setData(self._file_icon, QtCore.Qt.DecorationRole)
+                item.setData(file_item.filename, QtCore.Qt.DisplayRole)
+                item.setData(file_item.filename, FILENAME_ROLE)
+
+            item.setData(file_item.filepath, FILEPATH_ROLE)
+            item.setData(file_item.modified, DATE_MODIFIED_ROLE)
+
+            self._items_by_filename[file_item.filename] = item
+
+        if new_items:
+            root_item.appendRows(new_items)
+
+        for filename in items_to_remove:
+            item = self._items_by_filename.pop(filename)
+            root_item.removeRow(item.row())
+
+        if root_item.rowCount() == 0:
+            self._add_empty_item()
+
+    def flags(self, index):
+        # Use flags of the first column for all columns
+        if index.column() != 0:
+            index = self.index(index.row(), 0, index.parent())
+        return super(WorkAreaFilesModel, self).flags(index)
+
+    def data(self, index, role=None):
+        if role is None:
+            role = QtCore.Qt.DisplayRole
+
+        # Handle roles for the date modified (second) column
+        if index.column() == 1:
+            if role == QtCore.Qt.DecorationRole:
+                return None
+
+            if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole):
+                role = DATE_MODIFIED_ROLE
+            index = self.index(index.row(), 0, index.parent())
+
+        return super(WorkAreaFilesModel, self).data(index, role)
+
+    def set_published_mode(self, published_mode):
+        if self._published_mode == published_mode:
+            return
+        self._published_mode = published_mode
+        if not published_mode:
+            self._fill_items()
+
+
+class WorkAreaFilesWidget(QtWidgets.QWidget):
+    """Workarea files widget.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+        parent (QtWidgets.QWidget): The parent widget.
+    """
+
+    selection_changed = QtCore.Signal()
+    open_current_requested = QtCore.Signal()
+    duplicate_requested = QtCore.Signal()
+
+    def __init__(self, controller, parent):
+        super(WorkAreaFilesWidget, self).__init__(parent)
+
+        view = TreeView(self)
+        view.setSortingEnabled(True)
+        view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+        # Smaller indentation
+        view.setIndentation(0)
+
+        model = WorkAreaFilesModel(controller)
+        proxy_model = QtCore.QSortFilterProxyModel()
+        proxy_model.setSourceModel(model)
+        proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+        proxy_model.setDynamicSortFilter(True)
+
+        view.setModel(proxy_model)
+
+        time_delegate = PrettyTimeDelegate()
+        view.setItemDelegateForColumn(1, time_delegate)
+
+        # Default to a wider first column; the filename is what we mostly
+        #   care about, and the date modified column is relatively small
+        view.setColumnWidth(0, 330)
+
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0)
+        main_layout.addWidget(view, 1)
+
+        selection_model = view.selectionModel()
+        selection_model.selectionChanged.connect(self._on_selection_change)
+        view.double_clicked.connect(self._on_mouse_double_click)
+        view.customContextMenuRequested.connect(self._on_context_menu)
+
+        controller.register_event_callback(
+            "expected_selection_changed",
+            self._on_expected_selection_change
+        )
+
+        self._view = view
+        self._model = model
+        self._proxy_model = proxy_model
+        self._time_delegate = time_delegate
+        self._controller = controller
+
+        self._published_mode = False
+
+    def set_published_mode(self, published_mode):
+        """Set the published mode.
+
+        The widget should ignore most events while published mode is
+        enabled.
+
+        Args:
+            published_mode (bool): The published mode.
+        """
+
+        self._model.set_published_mode(published_mode)
+        self._published_mode = published_mode
+
+    def set_text_filter(self, text_filter):
+        """Set the text filter.
+
+        Args:
+            text_filter (str): The text filter.
+        """
+
+        self._proxy_model.setFilterFixedString(text_filter)
+
+    def _get_selected_info(self):
+        selection_model = self._view.selectionModel()
+        filepath = None
+        filename = None
+        for index in selection_model.selectedIndexes():
+            filepath = index.data(FILEPATH_ROLE)
+            filename = index.data(FILENAME_ROLE)
+        return {
+            "filepath": filepath,
+            "filename": filename,
+        }
+
+    def get_selected_path(self):
+        """Selected filepath.
+
+        Returns:
+            Union[str, None]: The selected filepath or None if nothing is
+                selected.
+        """
+        return self._get_selected_info()["filepath"]
+
+    def _on_selection_change(self):
+        filepath = self.get_selected_path()
+        self._controller.set_selected_workfile_path(filepath)
+
+    def _on_mouse_double_click(self, event):
+        if event.button() == QtCore.Qt.LeftButton:
+            # Request opening of the selected workfile; this widget has no
+            #   'save_as_requested' signal, so emitting it would fail
+            self.open_current_requested.emit()
+
+    def _on_context_menu(self, point):
+        index = self._view.indexAt(point)
+        if not index.isValid():
+            return
+
+        if not index.flags() & QtCore.Qt.ItemIsEnabled:
+            return
+
+        menu = QtWidgets.QMenu(self)
+
+        # Duplicate
+        action = QtWidgets.QAction("Duplicate", menu)
+        tip = "Duplicate selected file."
+        action.setToolTip(tip)
+        action.setStatusTip(tip)
+        action.triggered.connect(self._on_duplicate_pressed)
+        menu.addAction(action)
+
+        # Show the context action menu
+        global_point = self._view.mapToGlobal(point)
+        _ = menu.exec_(global_point)
+
+    def _on_duplicate_pressed(self):
+        self.duplicate_requested.emit()
+
+    def _on_expected_selection_change(self, event):
+        if event["workfile_name_selected"]:
+            return
+
+        workfile_name = event["workfile_name"]
+        if (
+            workfile_name is not None
+            and workfile_name != self._get_selected_info()["filename"]
+        ):
+            index = self._model.get_index_by_filename(workfile_name)
+            if index.isValid():
+                proxy_index = self._proxy_model.mapFromSource(index)
+                self._view.setCurrentIndex(proxy_index)
+
+        self._controller.expected_workfile_selected(
+            event["folder_id"], event["task_name"], workfile_name
+        )
diff --git a/openpype/tools/ayon_workfiles/widgets/folders_widget.py b/openpype/tools/ayon_workfiles/widgets/folders_widget.py
new file mode 100644
index 0000000000..b04f8e4098
--- /dev/null
+++ b/openpype/tools/ayon_workfiles/widgets/folders_widget.py
@@ -0,0 +1,324 @@
+import uuid
+import collections
+
+import qtawesome
+from qtpy import QtWidgets, QtGui, QtCore
+
+from openpype.tools.utils import (
+    RecursiveSortFilterProxyModel,
+    DeselectableTreeView,
+)
+
+from .constants import ITEM_ID_ROLE, ITEM_NAME_ROLE
+
+SENDER_NAME = "qt_folders_model"
+
+
+class FoldersRefreshThread(QtCore.QThread):
+    """Thread for refreshing folders.
+
+    Calls the controller to get folders and emits a signal when finished.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+    """
+
+    refresh_finished = QtCore.Signal(str)
+
+    def __init__(self, controller):
+        super(FoldersRefreshThread, self).__init__()
+        self._id = uuid.uuid4().hex
+        self._controller = controller
+        self._result = None
+
+    @property
+    def id(self):
+        """Thread id.
+
+        Returns:
+            str: Unique id of the thread.
+        """
+
+        return self._id
+
+    def run(self):
+        self._result = self._controller.get_folder_items(SENDER_NAME)
+        self.refresh_finished.emit(self.id)
+
+    def get_result(self):
+        return self._result
+
+
+class FoldersModel(QtGui.QStandardItemModel):
+    """Folders model which takes care of refreshing folders.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+    """
+
+    refreshed = QtCore.Signal()
+
+    def __init__(self, controller):
+        super(FoldersModel, self).__init__()
+
+        self._controller = controller
+        self._items_by_id = {}
+        self._parent_id_by_id = {}
+
+        self._refresh_threads = {}
+        self._current_refresh_thread = None
+
+        self._has_content = False
+        self._is_refreshing = False
+
+    @property
+    def is_refreshing(self):
+        """Model is refreshing.
+
+        Returns:
+            bool: True if model is refreshing.
+        """
+        return self._is_refreshing
+
+    @property
+    def has_content(self):
+        """Has at least one folder.
+
+        Returns:
+            bool: True if model has at least one folder.
+        """
+
+        return self._has_content
+
+    def clear(self):
+        self._items_by_id = {}
+        self._parent_id_by_id = {}
+        self._has_content = False
+        super(FoldersModel, self).clear()
+
+    def get_index_by_id(self, item_id):
+        """Get index by folder id.
+
+        Returns:
+            QtCore.QModelIndex: Index of the folder. Can be invalid if folder
+                is not available.
+        """
+        item = self._items_by_id.get(item_id)
+        if item is None:
+            return QtCore.QModelIndex()
+        return self.indexFromItem(item)
+
+    def refresh(self):
+        """Refresh folder items.
+
+        Refresh starts a thread, because the controller may need to query
+        the database if folders are not cached yet.
+        """
+
+        self._is_refreshing = True
+
+        thread = FoldersRefreshThread(self._controller)
+        self._current_refresh_thread = thread.id
+        self._refresh_threads[thread.id] = thread
+        thread.refresh_finished.connect(self._on_refresh_thread)
+        thread.start()
+
+    def _on_refresh_thread(self, thread_id):
+        """Callback when a refresh thread is finished.
+
+        Technically multiple refresh threads can be running at the same
+        time. To avoid using values from the wrong thread, we check if the
+        thread id matches the current refresh thread id.
+
+        Folders are stored by id.
+
+        Args:
+            thread_id (str): Thread id.
+        """
+
+        thread = self._refresh_threads.pop(thread_id)
+        if thread_id != self._current_refresh_thread:
+            return
+
+        folder_items_by_id = thread.get_result()
+        if not folder_items_by_id:
+            if folder_items_by_id is not None:
+                self.clear()
+            self._is_refreshing = False
+            return
+
+        self._has_content = True
+
+        folder_ids = set(folder_items_by_id)
+        ids_to_remove = set(self._items_by_id) - folder_ids
+
+        folder_items_by_parent = collections.defaultdict(list)
+        for folder_item in folder_items_by_id.values():
+            folder_items_by_parent[folder_item.parent_id].append(folder_item)
+
+        hierarchy_queue = collections.deque()
+        hierarchy_queue.append(None)
+
+        while hierarchy_queue:
+            parent_id = hierarchy_queue.popleft()
+            folder_items = folder_items_by_parent[parent_id]
+            if parent_id is None:
+                parent_item = self.invisibleRootItem()
+            else:
+                parent_item = self._items_by_id[parent_id]
+
+            new_items = []
+            for folder_item in folder_items:
+                item_id = folder_item.entity_id
+                item = self._items_by_id.get(item_id)
+                if item is None:
+                    is_new = True
+                    item = QtGui.QStandardItem()
+                    item.setEditable(False)
+                else:
+                    is_new = self._parent_id_by_id[item_id] != parent_id
+
+                icon = qtawesome.icon(
+                    folder_item.icon_name,
+                    color=folder_item.icon_color,
+                )
+                item.setData(item_id, ITEM_ID_ROLE)
+                item.setData(folder_item.name, ITEM_NAME_ROLE)
+                item.setData(folder_item.label, QtCore.Qt.DisplayRole)
+                item.setData(icon, QtCore.Qt.DecorationRole)
+                if is_new:
+                    new_items.append(item)
+                    self._items_by_id[item_id] = item
+                self._parent_id_by_id[item_id] = parent_id
+
+                hierarchy_queue.append(item_id)
+
+            if new_items:
+                parent_item.appendRows(new_items)
+
+        for item_id in ids_to_remove:
+            item = self._items_by_id[item_id]
+            parent_id = self._parent_id_by_id[item_id]
+            if parent_id is None:
+                parent_item = self.invisibleRootItem()
+            else:
+                parent_item = self._items_by_id[parent_id]
+            parent_item.takeChild(item.row())
+
+        for item_id in ids_to_remove:
+            self._items_by_id.pop(item_id)
+            self._parent_id_by_id.pop(item_id)
+
+        self._is_refreshing = False
+        self.refreshed.emit()
+
+
+class FoldersWidget(QtWidgets.QWidget):
+    """Folders widget.
+
+    Widget that handles folders view, model and selection.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
+        parent (QtWidgets.QWidget): The parent widget.
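+
+    Example:
+        Illustrative sketch; assumes an existing controller and parent
+        window (names are not from this module)::
+
+            folders_widget = FoldersWidget(controller, window)
+            folders_widget.set_name_filter("char")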
+ """ + + def __init__(self, controller, parent): + super(FoldersWidget, self).__init__(parent) + + folders_view = DeselectableTreeView(self) + folders_view.setHeaderHidden(True) + + folders_model = FoldersModel(controller) + folders_proxy_model = RecursiveSortFilterProxyModel() + folders_proxy_model.setSourceModel(folders_model) + + folders_view.setModel(folders_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(folders_view, 1) + + controller.register_event_callback( + "folders.refresh.finished", + self._on_folders_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = folders_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + folders_model.refreshed.connect(self._on_model_refresh) + + self._controller = controller + self._folders_view = folders_view + self._folders_model = folders_model + self._folders_proxy_model = folders_proxy_model + + self._expected_selection = None + + def set_name_filter(self, name): + self._folders_proxy_model.setFilterFixedString(name) + + def _clear(self): + self._folders_model.clear() + + def _on_folders_refresh_finished(self, event): + if event["sender"] != SENDER_NAME: + self._folders_model.refresh() + + def _on_controller_refresh(self): + self._update_expected_selection() + + def _update_expected_selection(self, expected_data=None): + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + # We're done + if expected_data["folder_selected"]: + return + + folder_id = expected_data["folder_id"] + self._expected_selection = folder_id + if not self._folders_model.is_refreshing: + self._set_expected_selection() + + def _set_expected_selection(self): + folder_id = self._expected_selection + self._expected_selection = None + if ( + folder_id is not None + and folder_id != self._get_selected_item_id() + ): + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + proxy_index = self._folders_proxy_model.mapFromSource(index) + self._folders_view.setCurrentIndex(proxy_index) + self._controller.expected_folder_selected(folder_id) + + def _on_model_refresh(self): + if self._expected_selection: + self._set_expected_selection() + self._folders_proxy_model.sort(0) + + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _get_selected_item_id(self): + selection_model = self._folders_view.selectionModel() + for index in selection_model.selectedIndexes(): + item_id = index.data(ITEM_ID_ROLE) + if item_id is not None: + return item_id + return None + + def _on_selection_change(self): + item_id = self._get_selected_item_id() + self._controller.set_selected_folder(item_id) diff --git a/openpype/tools/ayon_workfiles/widgets/save_as_dialog.py b/openpype/tools/ayon_workfiles/widgets/save_as_dialog.py new file mode 100644 index 0000000000..cdce73f030 --- /dev/null +++ b/openpype/tools/ayon_workfiles/widgets/save_as_dialog.py @@ -0,0 +1,351 @@ +from qtpy import QtWidgets, QtCore + +from openpype.tools.utils import PlaceholderLineEdit + + +class SubversionLineEdit(QtWidgets.QWidget): + """QLineEdit with QPushButton for drop down selection of list of strings""" + + text_changed = QtCore.Signal(str) + + def __init__(self, *args, **kwargs): + 
+        super(SubversionLineEdit, self).__init__(*args, **kwargs)
+
+        input_field = PlaceholderLineEdit(self)
+        menu_btn = QtWidgets.QPushButton(self)
+        menu_btn.setFixedWidth(18)
+
+        menu = QtWidgets.QMenu(self)
+        menu_btn.setMenu(menu)
+
+        layout = QtWidgets.QHBoxLayout(self)
+        layout.setContentsMargins(0, 0, 0, 0)
+        layout.setSpacing(3)
+
+        layout.addWidget(input_field, 1)
+        layout.addWidget(menu_btn, 0)
+
+        input_field.textChanged.connect(self.text_changed)
+
+        self.setFocusProxy(input_field)
+
+        self._input_field = input_field
+        self._menu_btn = menu_btn
+        self._menu = menu
+
+    def set_placeholder(self, placeholder):
+        self._input_field.setPlaceholderText(placeholder)
+
+    def set_text(self, text):
+        self._input_field.setText(text)
+
+    def set_values(self, values):
+        self._update(values)
+
+    def _on_button_clicked(self):
+        self._menu.exec_()
+
+    def _on_action_clicked(self, action):
+        self._input_field.setText(action.text())
+
+    def _update(self, values):
+        """Create menu actions for predefined values.
+
+        Args:
+            values (list[str]): Predefined values.
+
+        Returns:
+            None
+        """
+
+        menu = self._menu
+        button = self._menu_btn
+
+        state = any(values)
+        button.setEnabled(state)
+        if state is False:
+            return
+
+        # Include an empty string
+        values = [""] + sorted(values)
+
+        # Get and destroy the action group
+        group = button.findChild(QtWidgets.QActionGroup)
+        if group:
+            group.deleteLater()
+
+        # Build new action group
+        group = QtWidgets.QActionGroup(button)
+        for name in values:
+            action = group.addAction(name)
+            menu.addAction(action)
+
+        group.triggered.connect(self._on_action_clicked)
+
+
+class SaveAsDialog(QtWidgets.QDialog):
+    """Save As dialog to define a unique filename inside the workdir.
+
+    The filename is calculated by the controller from values the UI sends
+    from the dialog inputs.
+
+    Args:
+        controller (AbstractWorkfilesFrontend): The control object.
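+
+    Example:
+        Usage mirroring FilesWidget._exec_save_as_dialog (the parent
+        widget name is illustrative)::
+
+            dialog = SaveAsDialog(controller, parent_widget)
+            dialog.update_context()
+            dialog.exec_()
+            result = dialog.get_result()
+            if result is not None:
+                print(result["workdir"], result["filename"])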
+ """ + + def __init__(self, controller, parent): + super(SaveAsDialog, self).__init__(parent=parent) + self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint) + + self._controller = controller + + self._folder_id = None + self._task_id = None + self._last_version = None + self._template_key = None + self._comment_value = None + self._version_value = None + self._ext_value = None + self._filename = None + self._workdir = None + + self._result = None + + # Btns widget + btns_widget = QtWidgets.QWidget(self) + + btn_ok = QtWidgets.QPushButton("Ok", btns_widget) + btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.addWidget(btn_ok) + btns_layout.addWidget(btn_cancel) + + # Inputs widget + inputs_widget = QtWidgets.QWidget(self) + + # Version widget + version_widget = QtWidgets.QWidget(inputs_widget) + + # Version number input + version_input = QtWidgets.QSpinBox(version_widget) + version_input.setMinimum(1) + version_input.setMaximum(9999) + + # Last version checkbox + last_version_check = QtWidgets.QCheckBox( + "Next Available Version", version_widget + ) + last_version_check.setChecked(True) + + version_layout = QtWidgets.QHBoxLayout(version_widget) + version_layout.setContentsMargins(0, 0, 0, 0) + version_layout.addWidget(version_input) + version_layout.addWidget(last_version_check) + + # Preview widget + preview_widget = QtWidgets.QLabel("Preview filename", inputs_widget) + preview_widget.setWordWrap(True) + + # Subversion input + subversion_input = SubversionLineEdit(inputs_widget) + subversion_input.set_placeholder("Will be part of filename.") + + # Extensions combobox + extension_combobox = QtWidgets.QComboBox(inputs_widget) + # Add styled delegate to use stylesheets + extension_delegate = QtWidgets.QStyledItemDelegate() + extension_combobox.setItemDelegate(extension_delegate) + + version_label = QtWidgets.QLabel("Version:", inputs_widget) + subversion_label = QtWidgets.QLabel("Subversion:", inputs_widget) + extension_label = QtWidgets.QLabel("Extension:", inputs_widget) + preview_label = QtWidgets.QLabel("Preview:", inputs_widget) + + # Build inputs + inputs_layout = QtWidgets.QGridLayout(inputs_widget) + inputs_layout.addWidget(version_label, 0, 0) + inputs_layout.addWidget(version_widget, 0, 1) + inputs_layout.addWidget(subversion_label, 1, 0) + inputs_layout.addWidget(subversion_input, 1, 1) + inputs_layout.addWidget(extension_label, 2, 0) + inputs_layout.addWidget(extension_combobox, 2, 1) + inputs_layout.addWidget(preview_label, 3, 0) + inputs_layout.addWidget(preview_widget, 3, 1) + + # Build layout + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.addWidget(inputs_widget) + main_layout.addWidget(btns_widget) + + # Signal callback registration + version_input.valueChanged.connect(self._on_version_spinbox_change) + last_version_check.stateChanged.connect( + self._on_version_checkbox_change + ) + + subversion_input.text_changed.connect(self._on_comment_change) + extension_combobox.currentIndexChanged.connect( + self._on_extension_change) + + btn_ok.pressed.connect(self._on_ok_pressed) + btn_cancel.pressed.connect(self._on_cancel_pressed) + + # Store objects + self._inputs_layout = inputs_layout + + self._btn_ok = btn_ok + self._btn_cancel = btn_cancel + + self._version_widget = version_widget + + self._version_input = version_input + self._last_version_check = last_version_check + + self._extension_delegate = extension_delegate + self._extension_combobox = extension_combobox + 
+        self._subversion_input = subversion_input
+        self._preview_widget = preview_widget
+
+        self._version_label = version_label
+        self._subversion_label = subversion_label
+        self._extension_label = extension_label
+        self._preview_label = preview_label
+
+        # Post init setup
+
+        # Allow "Enter" key to accept the save.
+        btn_ok.setDefault(True)
+
+        # Disable version input if last version is checked
+        version_input.setEnabled(not last_version_check.isChecked())
+
+        # Force default focus to comment, some hosts didn't automatically
+        #   apply focus to this line edit (e.g. Houdini)
+        subversion_input.setFocus()
+
+    def get_result(self):
+        return self._result
+
+    def update_context(self):
+        # Show the version input only when the workfile template contains
+        #   a version key (reported by the controller)
+        selected_context = self._controller.get_selected_context()
+        folder_id = selected_context["folder_id"]
+        task_id = selected_context["task_id"]
+        data = self._controller.get_workarea_save_as_data(folder_id, task_id)
+        last_version = data["last_version"]
+        comment = data["comment"]
+        comment_hints = data["comment_hints"]
+
+        template_has_version = data["template_has_version"]
+        template_has_comment = data["template_has_comment"]
+
+        self._folder_id = folder_id
+        self._task_id = task_id
+        self._workdir = data["workdir"]
+        self._comment_value = data["comment"]
+        self._ext_value = data["ext"]
+        self._template_key = data["template_key"]
+        self._last_version = data["last_version"]
+
+        self._extension_combobox.clear()
+        self._extension_combobox.addItems(data["extensions"])
+
+        self._version_input.setValue(last_version)
+
+        vw_idx = self._inputs_layout.indexOf(self._version_widget)
+        self._version_label.setVisible(template_has_version)
+        self._version_widget.setVisible(template_has_version)
+        if template_has_version:
+            if vw_idx == -1:
+                self._inputs_layout.addWidget(self._version_label, 0, 0)
+                self._inputs_layout.addWidget(self._version_widget, 0, 1)
+        elif vw_idx != -1:
+            self._inputs_layout.takeAt(vw_idx)
+            self._inputs_layout.takeAt(
+                self._inputs_layout.indexOf(self._version_label)
+            )
+
+        cw_idx = self._inputs_layout.indexOf(self._subversion_input)
+        self._subversion_label.setVisible(template_has_comment)
+        self._subversion_input.setVisible(template_has_comment)
+        if template_has_comment:
+            if cw_idx == -1:
+                self._inputs_layout.addWidget(self._subversion_label, 1, 0)
+                self._inputs_layout.addWidget(self._subversion_input, 1, 1)
+        elif cw_idx != -1:
+            self._inputs_layout.takeAt(cw_idx)
+            self._inputs_layout.takeAt(
+                self._inputs_layout.indexOf(self._subversion_label)
+            )
+
+        if template_has_comment:
+            self._subversion_input.set_text(comment or "")
+            self._subversion_input.set_values(comment_hints)
+        self._update_filename()
+
+    def _on_version_spinbox_change(self, value):
+        if value == self._version_value:
+            return
+        self._version_value = value
+        if not self._last_version_check.isChecked():
+            self._update_filename()
+
+    def _on_version_checkbox_change(self):
+        use_last_version = self._last_version_check.isChecked()
+        self._version_input.setEnabled(not use_last_version)
+        if use_last_version:
+            self._version_input.blockSignals(True)
+            self._version_input.setValue(self._last_version)
+            self._version_input.blockSignals(False)
+        self._update_filename()
+
+    def _on_comment_change(self, text):
+        if self._comment_value == text:
+            return
+        self._comment_value = text
+        self._update_filename()
+
+    def _on_extension_change(self):
+        ext = self._extension_combobox.currentText()
== self._ext_value: + return + self._ext_value = ext + self._update_filename() + + def _on_ok_pressed(self): + self._result = { + "filename": self._filename, + "workdir": self._workdir, + "folder_id": self._folder_id, + "task_id": self._task_id, + "template_key": self._template_key, + } + self.close() + + def _on_cancel_pressed(self): + self.close() + + def _update_filename(self): + result = self._controller.fill_workarea_filepath( + self._folder_id, + self._task_id, + self._ext_value, + self._last_version_check.isChecked(), + self._version_value, + self._comment_value, + ) + self._filename = result.filename + self._btn_ok.setEnabled(not result.exists) + + if result.exists: + self._preview_widget.setText(( + "Cannot create \"{}\" because file exists!" + "" + ).format(result.filename)) + else: + self._preview_widget.setText( + "{}".format(result.filename) + ) diff --git a/openpype/tools/ayon_workfiles/widgets/side_panel.py b/openpype/tools/ayon_workfiles/widgets/side_panel.py new file mode 100644 index 0000000000..7f06576a00 --- /dev/null +++ b/openpype/tools/ayon_workfiles/widgets/side_panel.py @@ -0,0 +1,163 @@ +import datetime + +from qtpy import QtWidgets, QtCore + + +def file_size_to_string(file_size): + size = 0 + size_ending_mapping = { + "KB": 1024 ** 1, + "MB": 1024 ** 2, + "GB": 1024 ** 3 + } + ending = "B" + for _ending, _size in size_ending_mapping.items(): + if file_size < _size: + break + size = file_size / _size + ending = _ending + return "{:.2f} {}".format(size, ending) + + +class SidePanelWidget(QtWidgets.QWidget): + """Details about selected workfile. + + Todos: + At this moment only shows created and modified date of file + or its size. + + Args: + controller (AbstractWorkfilesFrontend): The control object. + parent (QtWidgets.QWidget): The parent widget. + """ + + published_workfile_message = ( + "INFO: Opened published workfiles will be stored in" + " temp directory on your machine. Current temp size: {}." 
+ ) + + def __init__(self, controller, parent): + super(SidePanelWidget, self).__init__(parent) + + details_label = QtWidgets.QLabel("Details", self) + details_input = QtWidgets.QPlainTextEdit(self) + details_input.setReadOnly(True) + + artist_note_widget = QtWidgets.QWidget(self) + note_label = QtWidgets.QLabel("Artist note", artist_note_widget) + note_input = QtWidgets.QPlainTextEdit(artist_note_widget) + btn_note_save = QtWidgets.QPushButton("Save note", artist_note_widget) + + artist_note_layout = QtWidgets.QVBoxLayout(artist_note_widget) + artist_note_layout.setContentsMargins(0, 0, 0, 0) + artist_note_layout.addWidget(note_label, 0) + artist_note_layout.addWidget(note_input, 1) + artist_note_layout.addWidget( + btn_note_save, 0, alignment=QtCore.Qt.AlignRight + ) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(details_label, 0) + main_layout.addWidget(details_input, 1) + main_layout.addWidget(artist_note_widget, 1) + + note_input.textChanged.connect(self._on_note_change) + btn_note_save.clicked.connect(self._on_save_click) + + controller.register_event_callback( + "workarea.selection.changed", self._on_selection_change + ) + + self._details_input = details_input + self._artist_note_widget = artist_note_widget + self._note_input = note_input + self._btn_note_save = btn_note_save + + self._folder_id = None + self._task_id = None + self._filepath = None + self._orig_note = "" + self._controller = controller + + self._set_context(None, None, None) + + def set_published_mode(self, published_mode): + """Change published mode. + + Args: + published_mode (bool): Published mode enabled. + """ + + self._artist_note_widget.setVisible(not published_mode) + + def _on_selection_change(self, event): + folder_id = event["folder_id"] + task_id = event["task_id"] + filepath = event["path"] + + self._set_context(folder_id, task_id, filepath) + + def _on_note_change(self): + text = self._note_input.toPlainText() + self._btn_note_save.setEnabled(self._orig_note != text) + + def _on_save_click(self): + note = self._note_input.toPlainText() + self._controller.save_workfile_info( + self._folder_id, + self._task_id, + self._filepath, + note + ) + self._orig_note = note + self._btn_note_save.setEnabled(False) + + def _set_context(self, folder_id, task_id, filepath): + workfile_info = None + # Check if folder, task and file are selected + if bool(folder_id) and bool(task_id) and bool(filepath): + workfile_info = self._controller.get_workfile_info( + folder_id, task_id, filepath + ) + enabled = workfile_info is not None + + self._details_input.setEnabled(enabled) + self._note_input.setEnabled(enabled) + self._btn_note_save.setEnabled(enabled) + + self._folder_id = folder_id + self._task_id = task_id + self._filepath = filepath + + # Disable inputs and remove texts if any required arguments are + # missing + if not enabled: + self._orig_note = "" + self._details_input.setPlainText("") + self._note_input.setPlainText("") + return + + note = workfile_info.note + size_value = file_size_to_string(workfile_info.filesize) + + # Append html string + datetime_format = "%b %d %Y %H:%M:%S" + creation_time = datetime.datetime.fromtimestamp( + workfile_info.creation_time) + modification_time = datetime.datetime.fromtimestamp( + workfile_info.modification_time) + lines = ( + "Size:", + size_value, + "Created:", + creation_time.strftime(datetime_format), + "Modified:", + modification_time.strftime(datetime_format) + ) + self._orig_note = note + 
self._note_input.setPlainText(note)
+
+        # Set as empty string
+        self._details_input.setPlainText("")
+        self._details_input.appendHtml("<br/>
".join(lines)) diff --git a/openpype/tools/ayon_workfiles/widgets/tasks_widget.py b/openpype/tools/ayon_workfiles/widgets/tasks_widget.py new file mode 100644 index 0000000000..04f5b286b1 --- /dev/null +++ b/openpype/tools/ayon_workfiles/widgets/tasks_widget.py @@ -0,0 +1,420 @@ +import uuid +import qtawesome +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.style import get_disabled_entity_icon_color +from openpype.tools.utils import DeselectableTreeView + +from .constants import ( + ITEM_NAME_ROLE, + ITEM_ID_ROLE, + PARENT_ID_ROLE, +) + +SENDER_NAME = "qt_tasks_model" + + +class RefreshThread(QtCore.QThread): + """Thread for refreshing tasks. + + Call controller to get tasks and emit signal when finished. + + Args: + controller (AbstractWorkfilesFrontend): The control object. + folder_id (str): Folder id. + """ + + refresh_finished = QtCore.Signal(str) + + def __init__(self, controller, folder_id): + super(RefreshThread, self).__init__() + self._id = uuid.uuid4().hex + self._controller = controller + self._folder_id = folder_id + self._result = None + + @property + def id(self): + return self._id + + def run(self): + self._result = self._controller.get_task_items( + self._folder_id, SENDER_NAME) + self.refresh_finished.emit(self.id) + + def get_result(self): + return self._result + + +class TasksModel(QtGui.QStandardItemModel): + """Tasks model which cares about refresh of tasks by folder id. + + Args: + controller (AbstractWorkfilesFrontend): The control object. + """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(TasksModel, self).__init__() + + self._controller = controller + + self._items_by_name = {} + self._has_content = False + self._is_refreshing = False + + self._invalid_selection_item_used = False + self._invalid_selection_item = None + self._empty_tasks_item_used = False + self._empty_tasks_item = None + + self._last_folder_id = None + + self._refresh_threads = {} + self._current_refresh_thread = None + + # Initial state + self._add_invalid_selection_item() + + def clear(self): + self._items_by_name = {} + self._has_content = False + self._remove_invalid_items() + super(TasksModel, self).clear() + + def refresh(self, folder_id): + """Refresh tasks for folder. + + Args: + folder_id (Union[str, None]): Folder id. + """ + + self._refresh(folder_id) + + def get_index_by_name(self, task_name): + """Find item by name and return its index. + + Returns: + QtCore.QModelIndex: Index of item. Is invalid if task is not + found by name. + """ + + item = self._items_by_name.get(task_name) + if item is None: + return QtCore.QModelIndex() + return self.indexFromItem(item) + + def get_last_folder_id(self): + """Get last refreshed folder id. + + Returns: + Union[str, None]: Folder id. 
+ """ + + return self._last_folder_id + + def _get_invalid_selection_item(self): + if self._invalid_selection_item is None: + item = QtGui.QStandardItem("Select a folder") + item.setFlags(QtCore.Qt.NoItemFlags) + icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() + ) + item.setData(icon, QtCore.Qt.DecorationRole) + self._invalid_selection_item = item + return self._invalid_selection_item + + def _get_empty_task_item(self): + if self._empty_tasks_item is None: + item = QtGui.QStandardItem("No task") + icon = qtawesome.icon( + "fa.exclamation-circle", + color=get_disabled_entity_icon_color() + ) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + self._empty_tasks_item = item + return self._empty_tasks_item + + def _add_invalid_item(self, item): + self.clear() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_invalid_item(self, item): + root_item = self.invisibleRootItem() + root_item.takeRow(item.row()) + + def _remove_invalid_items(self): + self._remove_invalid_selection_item() + self._remove_empty_task_item() + + def _add_invalid_selection_item(self): + if not self._invalid_selection_item_used: + self._add_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = True + + def _remove_invalid_selection_item(self): + if self._invalid_selection_item: + self._remove_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = False + + def _add_empty_task_item(self): + if not self._empty_tasks_item_used: + self._add_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = True + + def _remove_empty_task_item(self): + if self._empty_tasks_item_used: + self._remove_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = False + + def _refresh(self, folder_id): + self._is_refreshing = True + self._last_folder_id = folder_id + if not folder_id: + self._add_invalid_selection_item() + self._current_refresh_thread = None + self._is_refreshing = False + self.refreshed.emit() + return + + thread = RefreshThread(self._controller, folder_id) + self._current_refresh_thread = thread.id + self._refresh_threads[thread.id] = thread + thread.refresh_finished.connect(self._on_refresh_thread) + thread.start() + + def _on_refresh_thread(self, thread_id): + """Callback when refresh thread is finished. + + Technically can be running multiple refresh threads at the same time, + to avoid using values from wrong thread, we check if thread id is + current refresh thread id. + + Tasks are stored by name, so if a folder has same task name as + previously selected folder it keeps the selection. + + Args: + thread_id (str): Thread id. 
+ """ + + thread = self._refresh_threads.pop(thread_id) + if thread_id != self._current_refresh_thread: + return + + task_items = thread.get_result() + # Task items are refreshed + if task_items is None: + return + + # No tasks are available on folder + if not task_items: + self._add_empty_task_item() + return + self._remove_invalid_items() + + new_items = [] + new_names = set() + for task_item in task_items: + name = task_item.name + new_names.add(name) + item = self._items_by_name.get(name) + if item is None: + item = QtGui.QStandardItem() + item.setEditable(False) + new_items.append(item) + self._items_by_name[name] = item + + # TODO cache locally + icon = qtawesome.icon( + task_item.icon_name, + color=task_item.icon_color, + ) + item.setData(task_item.label, QtCore.Qt.DisplayRole) + item.setData(name, ITEM_NAME_ROLE) + item.setData(task_item.id, ITEM_ID_ROLE) + item.setData(task_item.parent_id, PARENT_ID_ROLE) + item.setData(icon, QtCore.Qt.DecorationRole) + + root_item = self.invisibleRootItem() + + for name in set(self._items_by_name) - new_names: + item = self._items_by_name.pop(name) + root_item.removeRow(item.row()) + + if new_items: + root_item.appendRows(new_items) + + self._has_content = root_item.rowCount() > 0 + self._is_refreshing = False + self.refreshed.emit() + + @property + def is_refreshing(self): + """Model is refreshing. + + Returns: + bool: Model is refreshing + """ + + return self._is_refreshing + + @property + def has_content(self): + """Model has content. + + Returns: + bools: Have at least one task. + """ + + return self._has_content + + def headerData(self, section, orientation, role): + # Show nice labels in the header + if ( + role == QtCore.Qt.DisplayRole + and orientation == QtCore.Qt.Horizontal + ): + if section == 0: + return "Tasks" + + return super(TasksModel, self).headerData( + section, orientation, role + ) + + +class TasksWidget(QtWidgets.QWidget): + """Tasks widget. + + Widget that handles tasks view, model and selection. + + Args: + controller (AbstractWorkfilesFrontend): Workfiles controller. + """ + + def __init__(self, controller, parent): + super(TasksWidget, self).__init__(parent) + + tasks_view = DeselectableTreeView(self) + tasks_view.setIndentation(0) + + tasks_model = TasksModel(controller) + tasks_proxy_model = QtCore.QSortFilterProxyModel() + tasks_proxy_model.setSourceModel(tasks_model) + + tasks_view.setModel(tasks_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(tasks_view, 1) + + controller.register_event_callback( + "tasks.refresh.finished", + self._on_tasks_refresh_finished + ) + controller.register_event_callback( + "selection.folder.changed", + self._folder_selection_changed + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = tasks_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + tasks_model.refreshed.connect(self._on_tasks_model_refresh) + + self._controller = controller + self._tasks_view = tasks_view + self._tasks_model = tasks_model + self._tasks_proxy_model = tasks_proxy_model + + self._selected_folder_id = None + + self._expected_selection_data = None + + def _clear(self): + self._tasks_model.clear() + + def _on_tasks_refresh_finished(self, event): + """Tasks were refreshed in controller. + + Ignore if refresh was triggered by tasks model, or refreshed folder is + not the same as currently selected folder. 
+ + Args: + event (Event): Event object. + """ + + # Refresh only if current folder id is the same + if ( + event["sender"] == SENDER_NAME + or event["folder_id"] != self._selected_folder_id + ): + return + self._tasks_model.refresh(self._selected_folder_id) + + def _folder_selection_changed(self, event): + self._selected_folder_id = event["folder_id"] + self._tasks_model.refresh(self._selected_folder_id) + + def _on_tasks_model_refresh(self): + if not self._set_expected_selection(): + self._on_selection_change() + self._tasks_proxy_model.sort(0) + + def _set_expected_selection(self): + if self._expected_selection_data is None: + return False + folder_id = self._expected_selection_data["folder_id"] + task_name = self._expected_selection_data["task_name"] + self._expected_selection_data = None + model_folder_id = self._tasks_model.get_last_folder_id() + if folder_id != model_folder_id: + return False + if task_name is not None: + index = self._tasks_model.get_index_by_name(task_name) + if index.isValid(): + proxy_index = self._tasks_proxy_model.mapFromSource(index) + self._tasks_view.setCurrentIndex(proxy_index) + self._controller.expected_task_selected(folder_id, task_name) + return True + + def _on_expected_selection_change(self, event): + if event["task_selected"] or not event["folder_selected"]: + return + + model_folder_id = self._tasks_model.get_last_folder_id() + folder_id = event["folder_id"] + self._expected_selection_data = { + "task_name": event["task_name"], + "folder_id": folder_id, + } + + if folder_id != model_folder_id or self._tasks_model.is_refreshing: + return + self._set_expected_selection() + + def _get_selected_item_ids(self): + selection_model = self._tasks_view.selectionModel() + for index in selection_model.selectedIndexes(): + task_id = index.data(ITEM_ID_ROLE) + task_name = index.data(ITEM_NAME_ROLE) + parent_id = index.data(PARENT_ID_ROLE) + if task_name is not None: + return parent_id, task_id, task_name + return self._selected_folder_id, None, None + + def _on_selection_change(self): + # Don't trigger task change during refresh + # - a task was deselected if that happens + # - can cause crash triggered during tasks refreshing + if self._tasks_model.is_refreshing: + return + parent_id, task_id, task_name = self._get_selected_item_ids() + self._controller.set_selected_task(parent_id, task_id, task_name) diff --git a/openpype/tools/ayon_workfiles/widgets/utils.py b/openpype/tools/ayon_workfiles/widgets/utils.py new file mode 100644 index 0000000000..9171638546 --- /dev/null +++ b/openpype/tools/ayon_workfiles/widgets/utils.py @@ -0,0 +1,28 @@ +from qtpy import QtWidgets, QtCore + + +class BaseOverlayFrame(QtWidgets.QFrame): + """Base frame for overlay widgets. + + Has implemented automated resize and event filtering. 
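+
+    Example:
+        A minimal illustrative subclass; the "Loading..." label is an
+        assumption, not part of this base class::
+
+            class LoadingOverlay(BaseOverlayFrame):
+                def __init__(self, parent):
+                    super(LoadingOverlay, self).__init__(parent)
+                    label = QtWidgets.QLabel("Loading...", self)
+                    layout = QtWidgets.QVBoxLayout(self)
+                    layout.addWidget(label, 0, QtCore.Qt.AlignCenter)
+
+            overlay = LoadingOverlay(window)
+            # The overlay now follows the size of 'window' while visible
+            overlay.setVisible(True)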
+ """ + + def __init__(self, parent): + super(BaseOverlayFrame, self).__init__(parent) + self.setObjectName("OverlayFrame") + + self._parent = parent + + def setVisible(self, visible): + super(BaseOverlayFrame, self).setVisible(visible) + if visible: + self._parent.installEventFilter(self) + self.resize(self._parent.size()) + else: + self._parent.removeEventFilter(self) + + def eventFilter(self, obj, event): + if event.type() == QtCore.QEvent.Resize: + self.resize(obj.size()) + + return super(BaseOverlayFrame, self).eventFilter(obj, event) diff --git a/openpype/tools/ayon_workfiles/widgets/window.py b/openpype/tools/ayon_workfiles/widgets/window.py new file mode 100644 index 0000000000..6218d2dd06 --- /dev/null +++ b/openpype/tools/ayon_workfiles/widgets/window.py @@ -0,0 +1,400 @@ +from qtpy import QtCore, QtWidgets, QtGui + +from openpype import style, resources +from openpype.tools.utils import ( + PlaceholderLineEdit, + MessageOverlayObject, +) +from openpype.tools.utils.lib import get_qta_icon_by_name_and_color + +from openpype.tools.ayon_workfiles.control import BaseWorkfileController + +from .side_panel import SidePanelWidget +from .folders_widget import FoldersWidget +from .tasks_widget import TasksWidget +from .files_widget import FilesWidget +from .utils import BaseOverlayFrame + + +# TODO move to utils +# from openpype.tools.utils.lib import ( +# get_refresh_icon, get_go_to_current_icon) +def get_refresh_icon(): + return get_qta_icon_by_name_and_color( + "fa.refresh", style.get_default_tools_icon_color() + ) + + +def get_go_to_current_icon(): + return get_qta_icon_by_name_and_color( + "fa.arrow-down", style.get_default_tools_icon_color() + ) + + +class InvalidHostOverlay(BaseOverlayFrame): + def __init__(self, parent): + super(InvalidHostOverlay, self).__init__(parent) + + label_widget = QtWidgets.QLabel( + ( + "Workfiles tool is not supported in this host/DCCs." + "
<br/><br/>
This may be caused by a bug." + " Please contact your TD for more information." + ), + self + ) + label_widget.setAlignment(QtCore.Qt.AlignCenter) + label_widget.setObjectName("OverlayFrameLabel") + + layout = QtWidgets.QVBoxLayout(self) + layout.addStretch(2) + layout.addWidget(label_widget, 0, QtCore.Qt.AlignCenter) + layout.addStretch(3) + + label_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + +class WorkfilesToolWindow(QtWidgets.QWidget): + """WorkFiles Window. + + Main windows of workfiles tool. + + Args: + controller (AbstractWorkfilesFrontend): Frontend controller. + parent (Optional[QtWidgets.QWidget]): Parent widget. + """ + + title = "Work Files" + + def __init__(self, controller=None, parent=None): + super(WorkfilesToolWindow, self).__init__(parent=parent) + + if controller is None: + controller = BaseWorkfileController() + + self.setWindowTitle(self.title) + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + self.setWindowIcon(icon) + flags = self.windowFlags() | QtCore.Qt.Window + self.setWindowFlags(flags) + + self._default_window_flags = flags + + self._folder_widget = None + self._folder_filter_input = None + + self._files_widget = None + + self._first_show = True + self._controller_refreshed = False + self._context_to_set = None + # Host validation should happen only once + self._host_is_valid = None + + self._controller = controller + + # Create pages widget and set it as central widget + pages_widget = QtWidgets.QStackedWidget(self) + + home_page_widget = QtWidgets.QWidget(pages_widget) + home_body_widget = QtWidgets.QWidget(home_page_widget) + + col_1_widget = self._create_col_1_widget(controller, parent) + tasks_widget = TasksWidget(controller, home_body_widget) + col_3_widget = self._create_col_3_widget(controller, home_body_widget) + side_panel = SidePanelWidget(controller, home_body_widget) + + pages_widget.addWidget(home_page_widget) + + # Build home + home_page_layout = QtWidgets.QVBoxLayout(home_page_widget) + home_page_layout.addWidget(home_body_widget) + + # Build home - body + body_layout = QtWidgets.QVBoxLayout(home_body_widget) + split_widget = QtWidgets.QSplitter(home_body_widget) + split_widget.addWidget(col_1_widget) + split_widget.addWidget(tasks_widget) + split_widget.addWidget(col_3_widget) + split_widget.addWidget(side_panel) + split_widget.setSizes([255, 160, 455, 175]) + + body_layout.addWidget(split_widget) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.addWidget(pages_widget, 1) + + overlay_messages_widget = MessageOverlayObject(self) + overlay_invalid_host = InvalidHostOverlay(self) + overlay_invalid_host.setVisible(False) + + first_show_timer = QtCore.QTimer() + first_show_timer.setSingleShot(True) + first_show_timer.setInterval(50) + + first_show_timer.timeout.connect(self._on_first_show) + + controller.register_event_callback( + "save_as.finished", + self._on_save_as_finished, + ) + controller.register_event_callback( + "copy_representation.finished", + self._on_copy_representation_finished, + ) + controller.register_event_callback( + "workfile_duplicate.finished", + self._on_duplicate_finished + ) + controller.register_event_callback( + "open_workfile.finished", + self._on_open_finished + ) + controller.register_event_callback( + "controller.refresh.started", + self._on_controller_refresh_started, + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh_finished, + ) + + self._overlay_messages_widget = overlay_messages_widget + self._overlay_invalid_host = 
overlay_invalid_host + self._home_page_widget = home_page_widget + self._pages_widget = pages_widget + self._home_body_widget = home_body_widget + self._split_widget = split_widget + + self._tasks_widget = tasks_widget + self._side_panel = side_panel + + self._first_show_timer = first_show_timer + + self._post_init() + + def _post_init(self): + self._on_published_checkbox_changed() + + # Force focus on the open button by default, required for Houdini. + self._files_widget.setFocus() + + self.resize(1200, 600) + + def _create_col_1_widget(self, controller, parent): + col_widget = QtWidgets.QWidget(parent) + header_widget = QtWidgets.QWidget(col_widget) + + folder_filter_input = PlaceholderLineEdit(header_widget) + folder_filter_input.setPlaceholderText("Filter folders..") + + go_to_current_btn = QtWidgets.QPushButton(header_widget) + go_to_current_btn.setIcon(get_go_to_current_icon()) + go_to_current_btn_sp = go_to_current_btn.sizePolicy() + go_to_current_btn_sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + go_to_current_btn.setSizePolicy(go_to_current_btn_sp) + + refresh_btn = QtWidgets.QPushButton(header_widget) + refresh_btn.setIcon(get_refresh_icon()) + refresh_btn_sp = refresh_btn.sizePolicy() + refresh_btn_sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + refresh_btn.setSizePolicy(refresh_btn_sp) + + folder_widget = FoldersWidget(controller, col_widget) + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(folder_filter_input, 1) + header_layout.addWidget(go_to_current_btn, 0) + header_layout.addWidget(refresh_btn, 0) + + col_layout = QtWidgets.QVBoxLayout(col_widget) + col_layout.setContentsMargins(0, 0, 0, 0) + col_layout.addWidget(header_widget, 0) + col_layout.addWidget(folder_widget, 1) + + folder_filter_input.textChanged.connect(self._on_folder_filter_change) + go_to_current_btn.clicked.connect(self._on_go_to_current_clicked) + refresh_btn.clicked.connect(self._on_refresh_clicked) + + self._folder_filter_input = folder_filter_input + self._folder_widget = folder_widget + + return col_widget + + def _create_col_3_widget(self, controller, parent): + col_widget = QtWidgets.QWidget(parent) + + header_widget = QtWidgets.QWidget(col_widget) + + files_filter_input = PlaceholderLineEdit(header_widget) + files_filter_input.setPlaceholderText("Filter files..") + + published_checkbox = QtWidgets.QCheckBox("Published", header_widget) + published_checkbox.setToolTip("Show published workfiles") + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(files_filter_input, 1) + header_layout.addWidget(published_checkbox, 0) + + files_widget = FilesWidget(controller, col_widget) + + col_layout = QtWidgets.QVBoxLayout(col_widget) + col_layout.setContentsMargins(0, 0, 0, 0) + col_layout.addWidget(header_widget, 0) + col_layout.addWidget(files_widget, 1) + + files_filter_input.textChanged.connect( + self._on_file_text_filter_change) + published_checkbox.stateChanged.connect( + self._on_published_checkbox_changed + ) + + self._files_filter_input = files_filter_input + self._published_checkbox = published_checkbox + + self._files_widget = files_widget + + return col_widget + + def set_window_on_top(self, on_top): + """Set window on top of other windows. + + Args: + on_top (bool): Show on top of other windows. 
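+
+        Example:
+            Illustrative only; hosts normally call 'ensure_visible',
+            which forwards its 'on_top' argument here::
+
+                window.set_window_on_top(True)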
+ """ + + flags = self._default_window_flags + if on_top: + flags |= QtCore.Qt.WindowStaysOnTopHint + if self.windowFlags() != flags: + self.setWindowFlags(flags) + + def ensure_visible(self, use_context=True, save=True, on_top=False): + """Ensure the window is visible. + + This method expects arguments for compatibility with previous variant + of Workfiles tool. + + Args: + use_context (Optional[bool]): DEPRECATED: This argument is + ignored. + save (Optional[bool]): Allow to save workfiles. + on_top (Optional[bool]): Show on top of other windows. + """ + + save = True if save is None else save + on_top = False if on_top is None else on_top + + is_visible = self.isVisible() + self._controller.set_save_enabled(save) + self.set_window_on_top(on_top) + + self.show() + self.raise_() + self.activateWindow() + if is_visible: + self.refresh() + + def refresh(self): + """Trigger refresh of workfiles tool controller.""" + + self._controller.refresh() + + def showEvent(self, event): + super(WorkfilesToolWindow, self).showEvent(event) + if self._first_show: + self._first_show = False + self._first_show_timer.start() + self.setStyleSheet(style.load_stylesheet()) + + def keyPressEvent(self, event): + """Custom keyPressEvent. + + Override keyPressEvent to do nothing so that Maya's panels won't + take focus when pressing "SHIFT" whilst mouse is over viewport or + outliner. This way users don't accidentally perform Maya commands + whilst trying to name an instance. + """ + + pass + + def _on_first_show(self): + if not self._controller_refreshed: + self.refresh() + + def _on_file_text_filter_change(self, text): + self._files_widget.set_text_filter(text) + + def _on_published_checkbox_changed(self): + """Publish mode changed. + + Tell children widgets about it so they can handle the mode. 
+ """ + + published_mode = self._published_checkbox.isChecked() + self._files_widget.set_published_mode(published_mode) + self._side_panel.set_published_mode(published_mode) + + def _on_folder_filter_change(self, text): + self._folder_widget.set_name_filter(text) + + def _on_go_to_current_clicked(self): + self._controller.go_to_current_context() + + def _on_refresh_clicked(self): + self.refresh() + + def _on_controller_refresh_started(self): + self._controller_refreshed = True + + def _on_controller_refresh_finished(self): + if self._host_is_valid is None: + self._host_is_valid = self._controller.is_host_valid() + self._overlay_invalid_host.setVisible(not self._host_is_valid) + + if not self._host_is_valid: + return + + def _on_save_as_finished(self, event): + if event["failed"]: + self._overlay_messages_widget.add_message( + "Failed to save workfile", + "error", + ) + else: + self._overlay_messages_widget.add_message( + "Workfile saved" + ) + + def _on_copy_representation_finished(self, event): + if event["failed"]: + self._overlay_messages_widget.add_message( + "Failed to copy published workfile", + "error", + ) + else: + self._overlay_messages_widget.add_message( + "Publish workfile saved" + ) + + def _on_duplicate_finished(self, event): + if event["failed"]: + self._overlay_messages_widget.add_message( + "Failed to duplicate workfile", + "error", + ) + else: + self._overlay_messages_widget.add_message( + "Workfile duplicated" + ) + + def _on_open_finished(self, event): + if event["failed"]: + self._overlay_messages_widget.add_message( + "Failed to open workfile", + "error", + ) + else: + self.close() diff --git a/openpype/tools/context_dialog/window.py b/openpype/tools/context_dialog/window.py index 86c53b55c5..4fe41c9949 100644 --- a/openpype/tools/context_dialog/window.py +++ b/openpype/tools/context_dialog/window.py @@ -5,7 +5,7 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style from openpype.pipeline import AvalonMongoDB -from openpype.tools.utils.lib import center_window +from openpype.tools.utils.lib import center_window, get_openpype_qt_app from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget from openpype.tools.utils.constants import ( PROJECT_NAME_ROLE @@ -376,9 +376,7 @@ def main( strict=True ): # Run Qt application - app = QtWidgets.QApplication.instance() - if app is None: - app = QtWidgets.QApplication([]) + app = get_openpype_qt_app() window = ContextDialog() window.set_strict(strict) window.set_context(project_name, asset_name) diff --git a/openpype/tools/creator/widgets.py b/openpype/tools/creator/widgets.py index 74f75811ff..0ebbd905e5 100644 --- a/openpype/tools/creator/widgets.py +++ b/openpype/tools/creator/widgets.py @@ -5,6 +5,7 @@ from qtpy import QtWidgets, QtCore, QtGui import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS from openpype.tools.utils import ErrorMessageBox @@ -42,10 +43,13 @@ class CreateErrorMessageBox(ErrorMessageBox): def _get_report_data(self): report_message = ( - "Failed to create Subset: \"{subset}\" Family: \"{family}\"" + "Failed to create {subset_label}: \"{subset}\"" + " {family_label}: \"{family}\"" " in Asset: \"{asset}\"" "\n\nError: {message}" ).format( + subset_label="Product" if AYON_SERVER_ENABLED else "Subset", + family_label="Type" if AYON_SERVER_ENABLED else "Family", subset=self._subset_name, family=self._family, asset=self._asset_name, @@ -57,9 +61,13 @@ class CreateErrorMessageBox(ErrorMessageBox): def 
_create_content(self, content_layout):
 
         item_name_template = (
-            "Family: {}<br/>"
-            "Subset: {}<br/>"
-            "Asset: {}<br/>"
+            "{}: {{}}<br/>"
+            "{}: {{}}<br/>"
+            "{}: {{}}<br/>
" + ).format( + "Product type" if AYON_SERVER_ENABLED else "Family", + "Product name" if AYON_SERVER_ENABLED else "Subset", + "Folder" if AYON_SERVER_ENABLED else "Asset" ) exc_msg_template = "{}" @@ -151,15 +159,21 @@ class VariantLineEdit(QtWidgets.QLineEdit): def as_empty(self): self._set_border("empty") - self.report.emit("Empty subset name ..") + self.report.emit("Empty {} name ..".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def as_exists(self): self._set_border("exists") - self.report.emit("Existing subset, appending next version.") + self.report.emit("Existing {}, appending next version.".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def as_new(self): self._set_border("new") - self.report.emit("New subset, creating first version.") + self.report.emit("New {}, creating first version.".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def _set_border(self, status): qcolor, style = self.colors[status] diff --git a/openpype/tools/creator/window.py b/openpype/tools/creator/window.py index 57e2c49576..47f27a262a 100644 --- a/openpype/tools/creator/window.py +++ b/openpype/tools/creator/window.py @@ -8,7 +8,11 @@ from openpype.client import get_asset_by_name, get_subsets from openpype import style from openpype.settings import get_current_project_settings from openpype.tools.utils.lib import qt_app_context -from openpype.pipeline import legacy_io +from openpype.pipeline import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name, +) from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, legacy_create, @@ -216,7 +220,7 @@ class CreatorWindow(QtWidgets.QDialog): self._set_valid_state(False) return - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = None if creator_plugin: # Get the asset from the database which match with the name @@ -237,7 +241,7 @@ class CreatorWindow(QtWidgets.QDialog): return asset_id = asset_doc["_id"] - task_name = legacy_io.Session["AVALON_TASK"] + task_name = get_current_task_name() # Calculate subset name with Creator plugin subset_name = creator_plugin.get_subset_name( @@ -369,7 +373,7 @@ class CreatorWindow(QtWidgets.QDialog): self.setStyleSheet(style.load_stylesheet()) def refresh(self): - self._asset_name_input.setText(legacy_io.Session["AVALON_ASSET"]) + self._asset_name_input.setText(get_current_asset_name()) self._creators_model.reset() @@ -382,7 +386,7 @@ class CreatorWindow(QtWidgets.QDialog): ) current_index = None family = None - task_name = legacy_io.Session.get("AVALON_TASK", None) + task_name = get_current_task_name() or None lowered_task_name = task_name.lower() if task_name: for _family, _task_names in pype_project_setting.items(): diff --git a/openpype/tools/launcher/actions.py b/openpype/tools/launcher/actions.py index 61660ee9b7..285b5d04ca 100644 --- a/openpype/tools/launcher/actions.py +++ b/openpype/tools/launcher/actions.py @@ -1,8 +1,5 @@ -import os - from qtpy import QtWidgets, QtGui -from openpype import PLUGINS_DIR from openpype import style from openpype import resources from openpype.lib import ( @@ -10,46 +7,7 @@ from openpype.lib import ( ApplictionExecutableNotFound, ApplicationLaunchFailed ) -from openpype.pipeline import ( - LauncherAction, - register_launcher_action_path, -) - - -def register_actions_from_paths(paths): - if not paths: - return - - for path in paths: - if not path: - continue - - if path.startswith("."): - print(( - "BUG: Relative paths are not allowed for security reasons. 
{}" - ).format(path)) - continue - - if not os.path.exists(path): - print("Path was not found: {}".format(path)) - continue - - register_launcher_action_path(path) - - -def register_config_actions(): - """Register actions from the configuration for Launcher""" - - actions_dir = os.path.join(PLUGINS_DIR, "actions") - if os.path.exists(actions_dir): - register_actions_from_paths([actions_dir]) - - -def register_environment_actions(): - """Register actions from AVALON_ACTIONS for Launcher.""" - - paths_str = os.environ.get("AVALON_ACTIONS") or "" - register_actions_from_paths(paths_str.split(os.pathsep)) +from openpype.pipeline import LauncherAction # TODO move to 'openpype.pipeline.actions' diff --git a/openpype/tools/libraryloader/app.py b/openpype/tools/libraryloader/app.py index bd10595333..e68e9a5931 100644 --- a/openpype/tools/libraryloader/app.py +++ b/openpype/tools/libraryloader/app.py @@ -114,9 +114,10 @@ class LibraryLoaderWindow(QtWidgets.QDialog): manager = ModulesManager() sync_server = manager.modules_by_name.get("sync_server") - sync_server_enabled = False - if sync_server is not None: - sync_server_enabled = sync_server.enabled + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) repres_widget = None if sync_server_enabled: diff --git a/openpype/tools/loader/app.py b/openpype/tools/loader/app.py index 302fe6c366..b305233247 100644 --- a/openpype/tools/loader/app.py +++ b/openpype/tools/loader/app.py @@ -223,7 +223,7 @@ class LoaderWindow(QtWidgets.QDialog): lib.schedule(self._refresh, 50, channel="mongo") def on_assetschanged(self, *args): - self.echo("Fetching asset..") + self.echo("Fetching hierarchy..") lib.schedule(self._assetschanged, 50, channel="mongo") def on_subsetschanged(self, *args): diff --git a/openpype/tools/loader/model.py b/openpype/tools/loader/model.py index e58e02f89a..69b7e593b1 100644 --- a/openpype/tools/loader/model.py +++ b/openpype/tools/loader/model.py @@ -7,6 +7,7 @@ from uuid import uuid4 from qtpy import QtCore, QtGui import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_assets, get_subsets, @@ -63,6 +64,7 @@ class BaseRepresentationModel(object): """Sets/Resets sync server vars after every change (refresh.)""" repre_icons = {} sync_server = None + sync_server_enabled = False active_site = active_provider = None remote_site = remote_provider = None @@ -74,6 +76,7 @@ class BaseRepresentationModel(object): if not project_name: self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -99,8 +102,13 @@ class BaseRepresentationModel(object): self._modules_manager = ModulesManager() self._last_manager_cache = now_time - sync_server = self._modules_manager.modules_by_name["sync_server"] - if sync_server.is_project_enabled(project_name, single=True): + sync_server = self._modules_manager.modules_by_name.get("sync_server") + if ( + sync_server is not None + and sync_server.enabled + and sync_server.is_project_enabled(project_name, single=True) + ): + sync_server_enabled = True active_site = sync_server.get_active_site(project_name) active_provider = sync_server.get_provider_for_site( project_name, active_site) @@ -117,6 +125,7 @@ class BaseRepresentationModel(object): self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = 
active_provider self.remote_site = remote_site @@ -143,9 +152,9 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): ] column_labels_mapping = { - "subset": "Subset", - "asset": "Asset", - "family": "Family", + "subset": "Product" if AYON_SERVER_ENABLED else "Subset", + "asset": "Folder" if AYON_SERVER_ENABLED else "Asset", + "family": "Product type" if AYON_SERVER_ENABLED else "Family", "version": "Version", "time": "Time", "author": "Author", @@ -212,6 +221,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): self.repre_icons = {} self.sync_server = None + self.sync_server_enabled = False self.active_site = self.active_provider = None self.columns_index = dict( @@ -281,7 +291,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): ) # update availability on active site when version changes - if self.sync_server.enabled and version_doc: + if self.sync_server_enabled and version_doc: repres_info = list( self.sync_server.get_repre_info_for_versions( project_name, @@ -506,7 +516,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): return repre_info_by_version_id = {} - if self.sync_server.enabled: + if self.sync_server_enabled: versions_by_id = {} for _subset_id, doc in last_versions_by_subset_id.items(): versions_by_id[doc["_id"]] = doc @@ -1032,12 +1042,16 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): self._version_ids = [] manager = ModulesManager() - sync_server = active_site = remote_site = None + active_site = remote_site = None active_provider = remote_provider = None + sync_server = manager.modules_by_name.get("sync_server") + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) project_name = dbcon.current_project() - if project_name: - sync_server = manager.modules_by_name["sync_server"] + if sync_server_enabled and project_name: active_site = sync_server.get_active_site(project_name) remote_site = sync_server.get_remote_site(project_name) @@ -1056,6 +1070,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): remote_provider = 'studio' self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -1173,9 +1188,15 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): repre_groups_items[doc["name"]] = 0 group = group_item - progress = lib.get_progress_for_repre( - doc, self.active_site, self.remote_site - ) + progress = { + self.active_site: 0, + self.remote_site: 0, + } + if self.sync_server_enabled: + progress = self.sync_server.get_progress_for_repre( + doc, + self.active_site, + self.remote_site) active_site_icon = self._icons.get(self.active_provider) remote_site_icon = self._icons.get(self.remote_provider) diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py index b3aa381d14..5dd3af08d6 100644 --- a/openpype/tools/loader/widgets.py +++ b/openpype/tools/loader/widgets.py @@ -886,7 +886,9 @@ class ThumbnailWidget(QtWidgets.QLabel): self.set_pixmap() return - thumbnail_ent = get_thumbnail(project_name, thumbnail_id) + thumbnail_ent = get_thumbnail( + project_name, thumbnail_id, src_type, src_id + ) if not thumbnail_ent: return diff --git a/openpype/tools/project_manager/project_manager/model.py b/openpype/tools/project_manager/project_manager/model.py index 29a26f700f..f6c98d6f6c 100644 --- a/openpype/tools/project_manager/project_manager/model.py +++ b/openpype/tools/project_manager/project_manager/model.py @@ -84,6 +84,13 @@ class 
ProjectProxyFilter(QtCore.QSortFilterProxyModel): super(ProjectProxyFilter, self).__init__(*args, **kwargs) self._filter_default = False + def lessThan(self, left, right): + if left.data(PROJECT_NAME_ROLE) is None: + return True + if right.data(PROJECT_NAME_ROLE) is None: + return False + return super(ProjectProxyFilter, self).lessThan(left, right) + def set_filter_default(self, enabled=True): """Set if filtering of default item is enabled.""" if enabled == self._filter_default: diff --git a/openpype/tools/project_manager/project_manager/multiselection_combobox.py b/openpype/tools/project_manager/project_manager/multiselection_combobox.py index 4b5d468982..4100ada221 100644 --- a/openpype/tools/project_manager/project_manager/multiselection_combobox.py +++ b/openpype/tools/project_manager/project_manager/multiselection_combobox.py @@ -1,6 +1,14 @@ from qtpy import QtCore, QtWidgets -from openpype.tools.utils.lib import checkstate_int_to_enum +from openpype.tools.utils.lib import ( + checkstate_int_to_enum, + checkstate_enum_to_int, +) +from openpype.tools.utils.constants import ( + CHECKED_INT, + UNCHECKED_INT, + ITEM_IS_USER_TRISTATE, +) class ComboItemDelegate(QtWidgets.QStyledItemDelegate): @@ -107,9 +115,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): return if state == QtCore.Qt.Unchecked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT elif event.type() == QtCore.QEvent.KeyPress: # TODO: handle QtCore.Qt.Key_Enter, Key_Return? @@ -117,15 +125,15 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): # toggle the current items check state if ( index_flags & QtCore.Qt.ItemIsUserCheckable - and index_flags & QtCore.Qt.ItemIsTristate + and index_flags & ITEM_IS_USER_TRISTATE ): - new_state = QtCore.Qt.CheckState((int(state) + 1) % 3) + new_state = (checkstate_enum_to_int(state) + 1) % 3 elif index_flags & QtCore.Qt.ItemIsUserCheckable: if state != QtCore.Qt.Checked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT if new_state is not None: model.setData(current_index, new_state, QtCore.Qt.CheckStateRole) @@ -180,9 +188,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): for idx in range(self.count()): value = self.itemData(idx, role=QtCore.Qt.UserRole) if value in values: - check_state = QtCore.Qt.Checked + check_state = CHECKED_INT else: - check_state = QtCore.Qt.Unchecked + check_state = UNCHECKED_INT self.setItemData(idx, check_state, QtCore.Qt.CheckStateRole) def value(self): diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index d4e0ae0453..a6264303d5 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -176,11 +176,10 @@ class PublishReportMaker: self._create_discover_result = None self._convert_discover_result = None self._publish_discover_result = None - self._plugin_data = [] - self._plugin_data_with_plugin = [] - self._stored_plugins = [] - self._current_plugin_data = [] + self._plugin_data_by_id = {} + self._current_plugin = None + self._current_plugin_data = {} self._all_instances_by_id = {} self._current_context = None @@ -192,8 +191,9 @@ class PublishReportMaker: create_context.convertor_discover_result ) self._publish_discover_result = create_context.publish_discover_result - self._plugin_data = [] - self._plugin_data_with_plugin = [] + + self._plugin_data_by_id = {} + self._current_plugin = None self._current_plugin_data = {} 
self._all_instances_by_id = {} self._current_context = context @@ -210,18 +210,11 @@ class PublishReportMaker: if self._current_plugin_data: self._current_plugin_data["passed"] = True + self._current_plugin = plugin self._current_plugin_data = self._add_plugin_data_item(plugin) - def _get_plugin_data_item(self, plugin): - store_item = None - for item in self._plugin_data_with_plugin: - if item["plugin"] is plugin: - store_item = item["data"] - break - return store_item - def _add_plugin_data_item(self, plugin): - if plugin in self._stored_plugins: + if plugin.id in self._plugin_data_by_id: # A plugin would be processed more than once. What can cause it: # - there is a bug in controller # - plugin class is imported into multiple files @@ -229,15 +222,9 @@ class PublishReportMaker: raise ValueError( "Plugin '{}' is already stored".format(str(plugin))) - self._stored_plugins.append(plugin) - plugin_data_item = self._create_plugin_data_item(plugin) + self._plugin_data_by_id[plugin.id] = plugin_data_item - self._plugin_data_with_plugin.append({ - "plugin": plugin, - "data": plugin_data_item - }) - self._plugin_data.append(plugin_data_item) return plugin_data_item def _create_plugin_data_item(self, plugin): @@ -278,7 +265,7 @@ class PublishReportMaker: """Add result of single action.""" plugin = result["plugin"] - store_item = self._get_plugin_data_item(plugin) + store_item = self._plugin_data_by_id.get(plugin.id) if store_item is None: store_item = self._add_plugin_data_item(plugin) @@ -300,14 +287,24 @@ class PublishReportMaker: instance, instance in self._current_context ) - plugins_data = copy.deepcopy(self._plugin_data) - if plugins_data and not plugins_data[-1]["passed"]: - plugins_data[-1]["passed"] = True + plugins_data_by_id = copy.deepcopy( + self._plugin_data_by_id + ) + + # Ensure the current plug-in is marked as `passed` in the result + # so that it shows on reports for paused publishes + if self._current_plugin is not None: + current_plugin_data = plugins_data_by_id.get( + self._current_plugin.id + ) + if current_plugin_data and not current_plugin_data["passed"]: + current_plugin_data["passed"] = True if publish_plugins: for plugin in publish_plugins: - if plugin not in self._stored_plugins: - plugins_data.append(self._create_plugin_data_item(plugin)) + if plugin.id not in plugins_data_by_id: + plugins_data_by_id[plugin.id] = \ + self._create_plugin_data_item(plugin) reports = [] if self._create_discover_result is not None: @@ -328,7 +325,7 @@ class PublishReportMaker: ) return { - "plugins_data": plugins_data, + "plugins_data": list(plugins_data_by_id.values()), "instances": instances_details, "context": self._extract_context_data(self._current_context), "crashed_file_paths": crashed_file_paths, diff --git a/openpype/tools/publisher/widgets/assets_widget.py b/openpype/tools/publisher/widgets/assets_widget.py index a750d8d540..c536f93c9b 100644 --- a/openpype/tools/publisher/widgets/assets_widget.py +++ b/openpype/tools/publisher/widgets/assets_widget.py @@ -2,6 +2,7 @@ import collections from qtpy import QtWidgets, QtCore, QtGui +from openpype import AYON_SERVER_ENABLED from openpype.tools.utils import ( PlaceholderLineEdit, RecursiveSortFilterProxyModel, @@ -187,7 +188,8 @@ class AssetsDialog(QtWidgets.QDialog): proxy_model.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) filter_input = PlaceholderLineEdit(self) - filter_input.setPlaceholderText("Filter assets..") + filter_input.setPlaceholderText("Filter {}..".format( + "folders" if AYON_SERVER_ENABLED else "assets")) 
asset_view = AssetDialogView(self) asset_view.setModel(proxy_model) diff --git a/openpype/tools/publisher/widgets/create_widget.py b/openpype/tools/publisher/widgets/create_widget.py index b7605b1188..64fed1d70c 100644 --- a/openpype/tools/publisher/widgets/create_widget.py +++ b/openpype/tools/publisher/widgets/create_widget.py @@ -2,9 +2,11 @@ import re from qtpy import QtWidgets, QtCore, QtGui +from openpype import AYON_SERVER_ENABLED from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, PRE_CREATE_THUMBNAIL_KEY, + DEFAULT_VARIANT_VALUE, TaskNotSetError, ) @@ -203,7 +205,9 @@ class CreateWidget(QtWidgets.QWidget): variant_subset_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) variant_subset_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) variant_subset_layout.addRow("Variant", variant_widget) - variant_subset_layout.addRow("Subset", subset_name_input) + variant_subset_layout.addRow( + "Product" if AYON_SERVER_ENABLED else "Subset", + subset_name_input) creator_basics_layout = QtWidgets.QVBoxLayout(creator_basics_widget) creator_basics_layout.setContentsMargins(0, 0, 0, 0) @@ -623,7 +627,7 @@ class CreateWidget(QtWidgets.QWidget): default_variants = creator_item.default_variants if not default_variants: - default_variants = ["Main"] + default_variants = [DEFAULT_VARIANT_VALUE] default_variant = creator_item.default_variant if not default_variant: @@ -639,7 +643,7 @@ class CreateWidget(QtWidgets.QWidget): elif variant: self.variant_hints_menu.addAction(variant) - variant_text = default_variant or "Main" + variant_text = default_variant or DEFAULT_VARIANT_VALUE # Make sure subset name is updated to new plugin if variant_text == self.variant_input.text(): self._on_variant_change() diff --git a/openpype/tools/publisher/widgets/images/browse.png b/openpype/tools/publisher/widgets/images/browse.png new file mode 100644 index 0000000000..b115bb6766 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/browse.png differ diff --git a/openpype/tools/publisher/widgets/images/options.png b/openpype/tools/publisher/widgets/images/options.png new file mode 100644 index 0000000000..b394dbd4ce Binary files /dev/null and b/openpype/tools/publisher/widgets/images/options.png differ diff --git a/openpype/tools/publisher/widgets/images/paste.png b/openpype/tools/publisher/widgets/images/paste.png new file mode 100644 index 0000000000..14a6050da1 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/paste.png differ diff --git a/openpype/tools/publisher/widgets/images/take_screenshot.png b/openpype/tools/publisher/widgets/images/take_screenshot.png new file mode 100644 index 0000000000..242a36a026 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/take_screenshot.png differ diff --git a/openpype/tools/publisher/widgets/overview_widget.py b/openpype/tools/publisher/widgets/overview_widget.py index 25fff73134..778aa1139f 100644 --- a/openpype/tools/publisher/widgets/overview_widget.py +++ b/openpype/tools/publisher/widgets/overview_widget.py @@ -28,12 +28,14 @@ class OverviewWidget(QtWidgets.QFrame): self._refreshing_instances = False self._controller = controller - create_widget = CreateWidget(controller, self) + subset_content_widget = QtWidgets.QWidget(self) + + create_widget = CreateWidget(controller, subset_content_widget) # --- Created Subsets/Instances --- # Common widget for creation and overview subset_views_widget = BorderedLabelWidget( - "Subsets to publish", self + "Subsets to publish", subset_content_widget ) subset_view_cards = 
InstanceCardView(controller, subset_views_widget) @@ -45,14 +47,14 @@ class OverviewWidget(QtWidgets.QFrame): subset_views_layout.setCurrentWidget(subset_view_cards) # Buttons at the bottom of subset view - create_btn = CreateInstanceBtn(self) - delete_btn = RemoveInstanceBtn(self) - change_view_btn = ChangeViewBtn(self) + create_btn = CreateInstanceBtn(subset_views_widget) + delete_btn = RemoveInstanceBtn(subset_views_widget) + change_view_btn = ChangeViewBtn(subset_views_widget) # --- Overview --- # Subset details widget subset_attributes_wrap = BorderedLabelWidget( - "Publish options", self + "Publish options", subset_content_widget ) subset_attributes_widget = SubsetAttributesWidget( controller, subset_attributes_wrap @@ -81,7 +83,6 @@ class OverviewWidget(QtWidgets.QFrame): subset_views_widget.set_center_widget(subset_view_widget) # Whole subset layout with attributes and details - subset_content_widget = QtWidgets.QWidget(self) subset_content_layout = QtWidgets.QHBoxLayout(subset_content_widget) subset_content_layout.setContentsMargins(0, 0, 0, 0) subset_content_layout.addWidget(create_widget, 7) @@ -161,44 +162,62 @@ class OverviewWidget(QtWidgets.QFrame): self._change_anim = change_anim # Start in create mode - self._create_widget_policy = create_widget.sizePolicy() - self._subset_views_widget_policy = subset_views_widget.sizePolicy() - self._subset_attributes_wrap_policy = ( - subset_attributes_wrap.sizePolicy() - ) - self._max_widget_width = None self._current_state = "create" subset_attributes_wrap.setVisible(False) + def make_sure_animation_is_finished(self): + if self._change_anim.state() == QtCore.QAbstractAnimation.Running: + self._change_anim.stop() + self._on_change_anim_finished() + def set_state(self, new_state, animate): if new_state == self._current_state: return self._current_state = new_state - anim_is_running = ( - self._change_anim.state() == QtCore.QAbstractAnimation.Running - ) if not animate: - self._change_visibility_for_state() - if anim_is_running: - self._change_anim.stop() + self.make_sure_animation_is_finished() return - if self._max_widget_width is None: - self._max_widget_width = self._subset_views_widget.maximumWidth() - if new_state == "create": direction = QtCore.QAbstractAnimation.Backward else: direction = QtCore.QAbstractAnimation.Forward self._change_anim.setDirection(direction) - if not anim_is_running: - view_width = self._subset_views_widget.width() - self._subset_views_widget.setMinimumWidth(view_width) - self._subset_views_widget.setMaximumWidth(view_width) + if ( + self._change_anim.state() != QtCore.QAbstractAnimation.Running + ): + self._start_animation() + + def _start_animation(self): + views_geo = self._subset_views_widget.geometry() + layout_spacing = self._subset_content_layout.spacing() + if self._create_widget.isVisible(): + create_geo = self._create_widget.geometry() + subset_geo = QtCore.QRect(create_geo) + subset_geo.moveTop(views_geo.top()) + subset_geo.moveLeft(views_geo.right() + layout_spacing) + self._subset_attributes_wrap.setVisible(True) + + elif self._subset_attributes_wrap.isVisible(): + subset_geo = self._subset_attributes_wrap.geometry() + create_geo = QtCore.QRect(subset_geo) + create_geo.moveTop(views_geo.top()) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + self._create_widget.setVisible(True) + else: self._change_anim.start() + return + + while self._subset_content_layout.count(): + self._subset_content_layout.takeAt(0) + self._subset_views_widget.setGeometry(views_geo) + 
self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_geo) + + self._change_anim.start() def get_subset_views_geo(self): parent = self._subset_views_widget.parent() @@ -281,41 +300,39 @@ class OverviewWidget(QtWidgets.QFrame): def _on_change_anim(self, value): self._create_widget.setVisible(True) self._subset_attributes_wrap.setVisible(True) - width = ( - self._subset_content_widget.width() - - ( - self._subset_views_widget.width() - + (self._subset_content_layout.spacing() * 2) - ) - ) - subset_attrs_width = int((float(width) / self.anim_end_value) * value) - if subset_attrs_width > width: - subset_attrs_width = width + layout_spacing = self._subset_content_layout.spacing() + content_width = ( + self._subset_content_widget.width() - (layout_spacing * 2) + ) + content_height = self._subset_content_widget.height() + views_width = max( + int(content_width * 0.3), + self._subset_views_widget.minimumWidth() + ) + width = content_width - views_width + # Visible widths of other widgets + subset_attrs_width = int((float(width) / self.anim_end_value) * value) create_width = width - subset_attrs_width - self._create_widget.setMinimumWidth(create_width) - self._create_widget.setMaximumWidth(create_width) - self._subset_attributes_wrap.setMinimumWidth(subset_attrs_width) - self._subset_attributes_wrap.setMaximumWidth(subset_attrs_width) + views_geo = QtCore.QRect( + create_width + layout_spacing, 0, + views_width, content_height + ) + create_geo = QtCore.QRect(0, 0, width, content_height) + subset_attrs_geo = QtCore.QRect(create_geo) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + subset_attrs_geo.moveLeft(views_geo.right() + layout_spacing) + + self._subset_views_widget.setGeometry(views_geo) + self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_attrs_geo) def _on_change_anim_finished(self): self._change_visibility_for_state() - self._create_widget.setMinimumWidth(0) - self._create_widget.setMaximumWidth(self._max_widget_width) - self._subset_attributes_wrap.setMinimumWidth(0) - self._subset_attributes_wrap.setMaximumWidth(self._max_widget_width) - self._subset_views_widget.setMinimumWidth(0) - self._subset_views_widget.setMaximumWidth(self._max_widget_width) - self._create_widget.setSizePolicy( - self._create_widget_policy - ) - self._subset_attributes_wrap.setSizePolicy( - self._subset_attributes_wrap_policy - ) - self._subset_views_widget.setSizePolicy( - self._subset_views_widget_policy - ) + self._subset_content_layout.addWidget(self._create_widget, 7) + self._subset_content_layout.addWidget(self._subset_views_widget, 3) + self._subset_content_layout.addWidget(self._subset_attributes_wrap, 7) def _change_visibility_for_state(self): self._create_widget.setVisible( diff --git a/openpype/tools/publisher/widgets/screenshot_widget.py b/openpype/tools/publisher/widgets/screenshot_widget.py new file mode 100644 index 0000000000..3504b419b4 --- /dev/null +++ b/openpype/tools/publisher/widgets/screenshot_widget.py @@ -0,0 +1,303 @@ +import os +import tempfile + +from qtpy import QtCore, QtGui, QtWidgets + + +class ScreenMarquee(QtWidgets.QDialog): + """Dialog to interactively define screen area. + + This allows to select a screen area through a marquee selection. + + You can use any of its classmethods for easily saving an image, + capturing to QClipboard or returning a QPixmap, respectively + `capture_to_file`, `capture_to_clipboard` and `capture_to_pixmap`. 
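+
+    Example:
+        Illustrative calls; the output path is hypothetical::
+
+            pixmap = ScreenMarquee.capture_to_pixmap()
+            if not pixmap.isNull():
+                pixmap.save("/tmp/capture.png")
+
+            filepath = ScreenMarquee.capture_to_file()
+            copied = ScreenMarquee.capture_to_clipboard()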
+ """ + + def __init__(self, parent=None): + super(ScreenMarquee, self).__init__(parent=parent) + + self.setWindowFlags( + QtCore.Qt.FramelessWindowHint + | QtCore.Qt.WindowStaysOnTopHint + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.Tool) + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + self.setCursor(QtCore.Qt.CrossCursor) + self.setMouseTracking(True) + + app = QtWidgets.QApplication.instance() + if hasattr(app, "screenAdded"): + app.screenAdded.connect(self._on_screen_added) + app.screenRemoved.connect(self._fit_screen_geometry) + elif hasattr(app, "desktop"): + desktop = app.desktop() + desktop.screenCountChanged.connect(self._fit_screen_geometry) + + for screen in QtWidgets.QApplication.screens(): + screen.geometryChanged.connect(self._fit_screen_geometry) + + self._opacity = 50 + self._click_pos = None + self._capture_rect = None + + def get_captured_pixmap(self): + if self._capture_rect is None: + return QtGui.QPixmap() + + return self.get_desktop_pixmap(self._capture_rect) + + def paintEvent(self, event): + """Paint event""" + + # Convert click and current mouse positions to local space. + mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos()) + click_pos = None + if self._click_pos is not None: + click_pos = self.mapFromGlobal(self._click_pos) + + painter = QtGui.QPainter(self) + painter.setRenderHints( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + + # Draw background. Aside from aesthetics, this makes the full + # tool region accept mouse events. + painter.setBrush(QtGui.QColor(0, 0, 0, self._opacity)) + painter.setPen(QtCore.Qt.NoPen) + rect = event.rect() + fill_path = QtGui.QPainterPath() + fill_path.addRect(rect) + + # Clear the capture area + if click_pos is not None: + sub_path = QtGui.QPainterPath() + capture_rect = QtCore.QRect(click_pos, mouse_pos) + sub_path.addRect(capture_rect) + fill_path = fill_path.subtracted(sub_path) + + painter.drawPath(fill_path) + + pen_color = QtGui.QColor(255, 255, 255, self._opacity) + pen = QtGui.QPen(pen_color, 1, QtCore.Qt.DotLine) + painter.setPen(pen) + + # Draw cropping markers at click position + if click_pos is not None: + painter.drawLine( + rect.left(), click_pos.y(), + rect.right(), click_pos.y() + ) + painter.drawLine( + click_pos.x(), rect.top(), + click_pos.x(), rect.bottom() + ) + + # Draw cropping markers at current mouse position + painter.drawLine( + rect.left(), mouse_pos.y(), + rect.right(), mouse_pos.y() + ) + painter.drawLine( + mouse_pos.x(), rect.top(), + mouse_pos.x(), rect.bottom() + ) + painter.end() + + def mousePressEvent(self, event): + """Mouse click event""" + + if event.button() == QtCore.Qt.LeftButton: + # Begin click drag operation + self._click_pos = event.globalPos() + + def mouseReleaseEvent(self, event): + """Mouse release event""" + if ( + self._click_pos is not None + and event.button() == QtCore.Qt.LeftButton + ): + # End click drag operation and commit the current capture rect + self._capture_rect = QtCore.QRect( + self._click_pos, event.globalPos() + ).normalized() + self._click_pos = None + self.close() + + def mouseMoveEvent(self, event): + """Mouse move event""" + self.repaint() + + def keyPressEvent(self, event): + """Mouse press event""" + if event.key() == QtCore.Qt.Key_Escape: + self._click_pos = None + self._capture_rect = None + event.accept() + self.close() + return + return super(ScreenMarquee, self).keyPressEvent(event) + + def showEvent(self, event): + self._fit_screen_geometry() + + def _fit_screen_geometry(self): + # Compute the union of all 
screen geometries, and resize to fit. + workspace_rect = QtCore.QRect() + for screen in QtWidgets.QApplication.screens(): + workspace_rect = workspace_rect.united(screen.geometry()) + self.setGeometry(workspace_rect) + + def _on_screen_added(self): + for screen in QtGui.QGuiApplication.screens(): + screen.geometryChanged.connect(self._fit_screen_geometry) + + @classmethod + def get_desktop_pixmap(cls, rect): + """Performs a screen capture on the specified rectangle. + + Args: + rect (QtCore.QRect): The rectangle to capture. + + Returns: + QtGui.QPixmap: Captured pixmap image + """ + + if rect.width() < 1 or rect.height() < 1: + return QtGui.QPixmap() + + screen_pixes = [] + for screen in QtWidgets.QApplication.screens(): + screen_geo = screen.geometry() + if not screen_geo.intersects(rect): + continue + + screen_pix_rect = screen_geo.intersected(rect) + screen_pix = screen.grabWindow( + 0, + screen_pix_rect.x() - screen_geo.x(), + screen_pix_rect.y() - screen_geo.y(), + screen_pix_rect.width(), screen_pix_rect.height() + ) + paste_point = QtCore.QPoint( + screen_pix_rect.x() - rect.x(), + screen_pix_rect.y() - rect.y() + ) + screen_pixes.append((screen_pix, paste_point)) + + output_pix = QtGui.QPixmap(rect.width(), rect.height()) + output_pix.fill(QtCore.Qt.transparent) + pix_painter = QtGui.QPainter() + pix_painter.begin(output_pix) + for item in screen_pixes: + (screen_pix, offset) = item + pix_painter.drawPixmap(offset, screen_pix) + + pix_painter.end() + + return output_pix + + @classmethod + def capture_to_pixmap(cls): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + tool = cls() + tool.exec_() + return tool.get_captured_pixmap() + + @classmethod + def capture_to_file(cls, filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. + """ + + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return None + + if filepath is None: + with tempfile.NamedTemporaryFile( + prefix="screenshot_", suffix=".png", delete=False + ) as tmpfile: + filepath = tmpfile.name + + else: + output_dir = os.path.dirname(filepath) + if not os.path.exists(output_dir): + os.makedirs(output_dir) + + pixmap.save(filepath) + return filepath + + @classmethod + def capture_to_clipboard(cls): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. + """ + + clipboard = QtWidgets.QApplication.clipboard() + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return False + image = pixmap.toImage() + clipboard.setImage(image, QtGui.QClipboard.Clipboard) + return True + + +def capture_to_pixmap(): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + return ScreenMarquee.capture_to_pixmap() + + +def capture_to_file(filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. 
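+
+    The parent directory of an explicit filepath is created when it
+    does not exist yet.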
+ """ + + return ScreenMarquee.capture_to_file(filepath) + + +def capture_to_clipboard(): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. + """ + + return ScreenMarquee.capture_to_clipboard() diff --git a/openpype/tools/publisher/widgets/thumbnail_widget.py b/openpype/tools/publisher/widgets/thumbnail_widget.py index b17ca0adc8..60970710d8 100644 --- a/openpype/tools/publisher/widgets/thumbnail_widget.py +++ b/openpype/tools/publisher/widgets/thumbnail_widget.py @@ -7,8 +7,8 @@ from openpype.style import get_objected_colors from openpype.lib import ( run_subprocess, is_oiio_supported, - get_oiio_tools_path, - get_ffmpeg_tool_path, + get_oiio_tool_args, + get_ffmpeg_tool_args, ) from openpype.lib.transcoding import ( IMAGE_EXTENSIONS, @@ -22,6 +22,7 @@ from openpype.tools.utils import ( from openpype.tools.publisher.control import CardMessageTypes from .icons import get_image +from .screenshot_widget import capture_to_file class ThumbnailPainterWidget(QtWidgets.QWidget): @@ -306,20 +307,43 @@ class ThumbnailWidget(QtWidgets.QWidget): thumbnail_painter = ThumbnailPainterWidget(self) + icon_color = get_objected_colors("bg-view-selection").get_qcolor() + icon_color.setAlpha(255) + buttons_widget = QtWidgets.QWidget(self) buttons_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) - icon_color = get_objected_colors("bg-view-selection").get_qcolor() - icon_color.setAlpha(255) clear_image = get_image("clear_thumbnail") clear_pix = paint_image_with_color(clear_image, icon_color) - clear_button = PixmapButton(clear_pix, buttons_widget) clear_button.setObjectName("ThumbnailPixmapHoverButton") + clear_button.setToolTip("Clear thumbnail") + + take_screenshot_image = get_image("take_screenshot") + take_screenshot_pix = paint_image_with_color( + take_screenshot_image, icon_color) + take_screenshot_btn = PixmapButton( + take_screenshot_pix, buttons_widget) + take_screenshot_btn.setObjectName("ThumbnailPixmapHoverButton") + take_screenshot_btn.setToolTip("Take screenshot") + + paste_image = get_image("paste") + paste_pix = paint_image_with_color(paste_image, icon_color) + paste_btn = PixmapButton(paste_pix, buttons_widget) + paste_btn.setObjectName("ThumbnailPixmapHoverButton") + paste_btn.setToolTip("Paste from clipboard") + + browse_image = get_image("browse") + browse_pix = paint_image_with_color(browse_image, icon_color) + browse_btn = PixmapButton(browse_pix, buttons_widget) + browse_btn.setObjectName("ThumbnailPixmapHoverButton") + browse_btn.setToolTip("Browse...") buttons_layout = QtWidgets.QHBoxLayout(buttons_widget) - buttons_layout.setContentsMargins(3, 3, 3, 3) - buttons_layout.addStretch(1) + buttons_layout.setContentsMargins(0, 0, 0, 0) + buttons_layout.addWidget(take_screenshot_btn, 0) + buttons_layout.addWidget(paste_btn, 0) + buttons_layout.addWidget(browse_btn, 0) buttons_layout.addWidget(clear_button, 0) layout = QtWidgets.QHBoxLayout(self) @@ -327,6 +351,9 @@ class ThumbnailWidget(QtWidgets.QWidget): layout.addWidget(thumbnail_painter) clear_button.clicked.connect(self._on_clear_clicked) + take_screenshot_btn.clicked.connect(self._on_take_screenshot) + paste_btn.clicked.connect(self._on_paste_from_clipboard) + browse_btn.clicked.connect(self._on_browse_clicked) self._controller = controller self._output_dir = controller.get_thumbnail_temp_dir_path() @@ -338,9 +365,16 @@ class ThumbnailWidget(QtWidgets.QWidget): self._adapted_to_size = True 
         self._last_width = None
         self._last_height = None
+        self._hide_on_finish = False

         self._buttons_widget = buttons_widget
         self._thumbnail_painter = thumbnail_painter
+        self._clear_button = clear_button
+        self._take_screenshot_btn = take_screenshot_btn
+        self._paste_btn = paste_btn
+        self._browse_btn = browse_btn
+
+        clear_button.setEnabled(False)
@@ -430,13 +464,75 @@ class ThumbnailWidget(QtWidgets.QWidget):

         self._thumbnail_painter.clear_cache()

+    def _set_current_thumbails(self, thumbnail_paths):
+        self._thumbnail_painter.set_current_thumbnails(thumbnail_paths)
+        self._update_buttons_position()
+
     def set_current_thumbnails(self, thumbnail_paths=None):
         self._thumbnail_painter.set_current_thumbnails(thumbnail_paths)
         self._update_buttons_position()
+        self._clear_button.setEnabled(self._thumbnail_painter.has_pixes)

     def _on_clear_clicked(self):
         self.set_current_thumbnails()
         self.thumbnail_cleared.emit()
+        self._clear_button.setEnabled(False)
+
+    def _on_take_screenshot(self):
+        window = self.window()
+        state = window.windowState()
+        window.setWindowState(QtCore.Qt.WindowMinimized)
+        output_path = os.path.join(
+            self._output_dir, uuid.uuid4().hex + ".png")
+        if capture_to_file(output_path):
+            self.thumbnail_created.emit(output_path)
+        # restore original window state
+        window.setWindowState(state)
+
+    def _on_paste_from_clipboard(self):
+        """Set thumbnail from a pixmap image in the system clipboard"""
+
+        clipboard = QtWidgets.QApplication.clipboard()
+        pixmap = clipboard.pixmap()
+        if pixmap.isNull():
+            return
+
+        # Save as temporary file
+        output_path = os.path.join(
+            self._output_dir, uuid.uuid4().hex + ".png")
+
+        output_dir = os.path.dirname(output_path)
+        if not os.path.exists(output_dir):
+            os.makedirs(output_dir)
+
+        if pixmap.save(output_path):
+            self.thumbnail_created.emit(output_path)
+
+    def _on_browse_clicked(self):
+        ext_filter = "Source (*{0})".format(
+            " *".join(self._review_extensions)
+        )
+        filepath, _ = QtWidgets.QFileDialog.getOpenFileName(
+            self, "Choose thumbnail", os.path.expanduser("~"), ext_filter
+        )
+        if not filepath:
+            return
+
+        valid_path = False
+        ext = os.path.splitext(filepath)[-1].lower()
+        if ext in self._review_extensions:
+            valid_path = True
+
+        output = None
+        if valid_path:
+            output = export_thumbnail(filepath, self._output_dir)
+
+        if output:
+            self.thumbnail_created.emit(output)
+        else:
+            self._controller.emit_card_message(
+                "Couldn't convert the source for thumbnail",
+                CardMessageTypes.error
+            )

     def _adapt_to_size(self):
         if not self._adapted_to_size:
@@ -452,13 +548,25 @@ class ThumbnailWidget(QtWidgets.QWidget):

         self._thumbnail_painter.clear_cache()

     def _update_buttons_position(self):
-        self._buttons_widget.setVisible(self._thumbnail_painter.has_pixes)
         size = self.size()
+        my_width = size.width()
         my_height = size.height()
-        height = self._buttons_widget.sizeHint().height()
+        buttons_sh = self._buttons_widget.sizeHint()
+        buttons_height = buttons_sh.height()
+        buttons_width = buttons_sh.width()
+        pos_x = my_width - (buttons_width + 3)
+        pos_y = my_height - (buttons_height + 3)
+        if pos_x < 0:
+            pos_x = 0
+            buttons_width = my_width
+        if pos_y < 0:
+            pos_y = 0
+            buttons_height = my_height
         self._buttons_widget.setGeometry(
-            0, my_height - height,
-            size.width(), height
+            pos_x,
+            pos_y,
+            buttons_width,
+            buttons_height
         )

     def resizeEvent(self, event):
@@ -481,12 +589,12 @@ def _convert_thumbnail_oiio(src_path, dst_path):
     if not is_oiio_supported():
         return None

-    oiio_cmd = [
-        get_oiio_tools_path(),
+    oiio_cmd = get_oiio_tool_args(
+        "oiiotool",
         "-i", src_path,
         "--subimage", "0",
         "-o", dst_path
-    ]
+    )
     try:
         _run_silent_subprocess(oiio_cmd)
     except Exception:
@@ -495,12 +603,12 @@ def _convert_thumbnail_oiio(src_path, dst_path):


 def _convert_thumbnail_ffmpeg(src_path, dst_path):
-    ffmpeg_cmd = [
-        get_ffmpeg_tool_path(),
+    ffmpeg_cmd = get_ffmpeg_tool_args(
+        "ffmpeg",
         "-y",
         "-i", src_path,
         dst_path
-    ]
+    )
     try:
         _run_silent_subprocess(ffmpeg_cmd)
     except Exception:
diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py
index 0b13f26d57..1bbe73381f 100644
--- a/openpype/tools/publisher/widgets/widgets.py
+++ b/openpype/tools/publisher/widgets/widgets.py
@@ -9,6 +9,7 @@ import collections
 from qtpy import QtWidgets, QtCore, QtGui
 import qtawesome

+from openpype import AYON_SERVER_ENABLED
 from openpype.lib.attribute_definitions import UnknownDef
 from openpype.tools.attribute_defs import create_widget_for_attr_def
 from openpype.tools import resources
@@ -1116,10 +1117,16 @@ class GlobalAttrsWidget(QtWidgets.QWidget):
         main_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
         main_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
         main_layout.addRow("Variant", variant_input)
-        main_layout.addRow("Asset", asset_value_widget)
+        main_layout.addRow(
+            "Folder" if AYON_SERVER_ENABLED else "Asset",
+            asset_value_widget)
         main_layout.addRow("Task", task_value_widget)
-        main_layout.addRow("Family", family_value_widget)
-        main_layout.addRow("Subset", subset_value_widget)
+        main_layout.addRow(
+            "Product type" if AYON_SERVER_ENABLED else "Family",
+            family_value_widget)
+        main_layout.addRow(
+            "Product name" if AYON_SERVER_ENABLED else "Subset",
+            subset_value_widget)
         main_layout.addRow(btns_layout)

         variant_input.value_changed.connect(self._on_variant_change)
diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py
index 2bda0c1cfe..39e78c01bb 100644
--- a/openpype/tools/publisher/window.py
+++ b/openpype/tools/publisher/window.py
@@ -634,16 +634,7 @@ class PublisherWindow(QtWidgets.QDialog):
         if old_tab == "details":
             self._publish_details_widget.close_details_popup()

-        if new_tab in ("create", "publish"):
-            animate = True
-            if old_tab not in ("create", "publish"):
-                animate = False
-            self._content_stacked_layout.setCurrentWidget(
-                self._overview_widget
-            )
-            self._overview_widget.set_state(new_tab, animate)
-
-        elif new_tab == "details":
+        if new_tab == "details":
             self._content_stacked_layout.setCurrentWidget(
                 self._publish_details_widget
             )
@@ -654,6 +645,21 @@ class PublisherWindow(QtWidgets.QDialog):
                 self._report_widget
             )

+        old_on_overview = old_tab in ("create", "publish")
+        if new_tab in ("create", "publish"):
+            self._content_stacked_layout.setCurrentWidget(
+                self._overview_widget
+            )
+            # Overview state is animated only when switching between
+            # 'create' and 'publish' tab
+            self._overview_widget.set_state(new_tab, old_on_overview)
+
+        elif old_on_overview:
+            # Make sure animation finished if previous tab was 'create'
+            # or 'publish'. That is just for safety to avoid stuck animation
+            # when user clicks too fast.
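+            # ('make_sure_animation_is_finished' stops any running
+            # animation and applies its final state right away.)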
+
+            self._overview_widget.make_sure_animation_is_finished()
+
         is_create = new_tab == "create"
         if is_create:
             self._install_app_event_listener()
diff --git a/openpype/tools/push_to_project/app.py b/openpype/tools/push_to_project/app.py
index 9ca5fd83e9..b3ec33f353 100644
--- a/openpype/tools/push_to_project/app.py
+++ b/openpype/tools/push_to_project/app.py
@@ -1,6 +1,6 @@
 import click

-from qtpy import QtWidgets, QtCore
+from openpype.tools.utils import get_openpype_qt_app
 from openpype.tools.push_to_project.window import PushToContextSelectWindow

@@ -15,20 +15,7 @@ def main(project, version):
         version (str): Version id.
     """

-    app = QtWidgets.QApplication.instance()
-    if not app:
-        # 'AA_EnableHighDpiScaling' must be set before app instance creation
-        high_dpi_scale_attr = getattr(
-            QtCore.Qt, "AA_EnableHighDpiScaling", None
-        )
-        if high_dpi_scale_attr is not None:
-            QtWidgets.QApplication.setAttribute(high_dpi_scale_attr)
-
-        app = QtWidgets.QApplication([])
-
-    attr = getattr(QtCore.Qt, "AA_UseHighDpiPixmaps", None)
-    if attr is not None:
-        app.setAttribute(attr)
+    app = get_openpype_qt_app()

     window = PushToContextSelectWindow()
     window.show()
diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py
index 37a0512d59..a822339ccf 100644
--- a/openpype/tools/push_to_project/control_integrate.py
+++ b/openpype/tools/push_to_project/control_integrate.py
@@ -40,6 +40,7 @@ from openpype.lib import (
 from openpype.lib.file_transaction import FileTransaction
 from openpype.settings import get_project_settings
 from openpype.pipeline import Anatomy
+from openpype.pipeline.version_start import get_versioning_start
 from openpype.pipeline.template_data import get_template_data
 from openpype.pipeline.publish import get_publish_template_name
 from openpype.pipeline.create import get_subset_name
@@ -940,9 +941,17 @@ class ProjectPushItemProcess:
         last_version_doc = get_last_version_by_subset_id(
             project_name, subset_id
         )
-        version = 1
         if last_version_doc:
-            version += int(last_version_doc["name"])
+            version = int(last_version_doc["name"]) + 1
+        else:
+            version = get_versioning_start(
+                project_name,
+                self.host_name,
+                task_name=self.task_info["name"],
+                task_type=self.task_info["type"],
+                family=families[0],
+                subset=subset_doc["name"]
+            )

         existing_version_doc = get_version_by_name(
             project_name, version, subset_id
@@ -966,14 +975,6 @@ class ProjectPushItemProcess:

             return

-        if version is None:
-            last_version_doc = get_last_version_by_subset_id(
-                project_name, subset_id
-            )
-            version = 1
-            if last_version_doc:
-                version += int(last_version_doc["name"])
-
         version_doc = new_version_doc(
             version, subset_id, version_data
         )
diff --git a/openpype/tools/sceneinventory/lib.py b/openpype/tools/sceneinventory/lib.py
index 5db3c479c5..0ac7622d65 100644
--- a/openpype/tools/sceneinventory/lib.py
+++ b/openpype/tools/sceneinventory/lib.py
@@ -1,9 +1,3 @@
-import os
-from openpype_modules import sync_server
-
-from qtpy import QtGui
-
-
 def walk_hierarchy(node):
     """Recursively yield group node."""
     for child in node.children():
@@ -12,71 +6,3 @@ def walk_hierarchy(node):

         for _child in walk_hierarchy(child):
             yield _child
-
-
-def get_site_icons():
-    resource_path = os.path.join(
-        os.path.dirname(sync_server.sync_server_module.__file__),
-        "providers",
-        "resources"
-    )
-    icons = {}
-    # TODO get from sync module
-    for provider in ["studio", "local_drive", "gdrive"]:
-        pix_url = "{}/{}.png".format(resource_path, provider)
-        icons[provider] = QtGui.QIcon(pix_url)
-
-    return icons
-
-
-def get_progress_for_repre(repre_doc, active_site, remote_site):
-    """
-    Calculates average progress for representation.
-
-    If site has created_dt >> fully available >> progress == 1
-
-    Could be calculated in aggregate if it would be too slow
-    Args:
-        repre_doc(dict): representation dict
-    Returns:
-        (dict) with active and remote sites progress
-        {'studio': 1.0, 'gdrive': -1} - gdrive site is not present
-            -1 is used to highlight the site should be added
-        {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not
-            uploaded yet
-    """
-    progress = {active_site: -1, remote_site: -1}
-    if not repre_doc:
-        return progress
-
-    files = {active_site: 0, remote_site: 0}
-    doc_files = repre_doc.get("files") or []
-    for doc_file in doc_files:
-        if not isinstance(doc_file, dict):
-            continue
-
-        sites = doc_file.get("sites") or []
-        for site in sites:
-            if (
-                # Pype 2 compatibility
-                not isinstance(site, dict)
-                # Check if site name is one of progress sites
-                or site["name"] not in progress
-            ):
-                continue
-
-            files[site["name"]] += 1
-            norm_progress = max(progress[site["name"]], 0)
-            if site.get("created_dt"):
-                progress[site["name"]] = norm_progress + 1
-            elif site.get("progress"):
-                progress[site["name"]] = norm_progress + site["progress"]
-            else:  # site exists, might be failed, do not add again
-                progress[site["name"]] = 0
-
-    # for example 13 fully avail. files out of 26 >> 13/26 = 0.5
-    avg_progress = {
-        active_site: progress[active_site] / max(files[active_site], 1),
-        remote_site: progress[remote_site] / max(files[remote_site], 1)
-    }
-    return avg_progress
diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py
index 5cc849bb9e..4fd82f04a4 100644
--- a/openpype/tools/sceneinventory/model.py
+++ b/openpype/tools/sceneinventory/model.py
@@ -15,7 +15,7 @@ from openpype.client import (
     get_representation_by_id,
 )
 from openpype.pipeline import (
-    legacy_io,
+    get_current_project_name,
     schema,
     HeroVersionType,
     registered_host,
@@ -24,11 +24,7 @@ from openpype.style import get_default_entity_icon_color
 from openpype.tools.utils.models import TreeModel, Item
 from openpype.modules import ModulesManager

-from .lib import (
-    get_site_icons,
-    walk_hierarchy,
-    get_progress_for_repre
-)
+from .lib import walk_hierarchy


 class InventoryModel(TreeModel):
@@ -54,8 +50,10 @@ class InventoryModel(TreeModel):
         self._default_icon_color = get_default_entity_icon_color()

         manager = ModulesManager()
-        sync_server = manager.modules_by_name["sync_server"]
-        self.sync_enabled = sync_server.enabled
+        sync_server = manager.modules_by_name.get("sync_server")
+        self.sync_enabled = (
+            sync_server is not None and sync_server.enabled
+        )
         self._site_icons = {}
         self.active_site = self.remote_site = None
         self.active_provider = self.remote_provider = None
@@ -63,7 +61,7 @@ class InventoryModel(TreeModel):
         if not self.sync_enabled:
             return

-        project_name = legacy_io.current_project()
+        project_name = get_current_project_name()
         active_site = sync_server.get_active_site(project_name)
         remote_site = sync_server.get_remote_site(project_name)

@@ -80,12 +78,15 @@ class InventoryModel(TreeModel):
             project_name, remote_site
         )

-        # self.sync_server = sync_server
+        self.sync_server = sync_server
         self.active_site = active_site
         self.active_provider = active_provider
         self.remote_site = remote_site
         self.remote_provider = remote_provider
-        self._site_icons = get_site_icons()
+        self._site_icons = {
+            provider: QtGui.QIcon(icon_path)
+            for provider, icon_path in sync_server.get_site_icons().items()
+        }
         if "active_site" not in self.Columns:
             self.Columns.append("active_site")
         if "remote_site" not in self.Columns:
@@ -321,7 +322,7 @@ class InventoryModel(TreeModel):
         """
         # NOTE: @iLLiCiTiT this need refactor
-        project_name = legacy_io.active_project()
+        project_name = get_current_project_name()

         self.beginResetModel()

@@ -445,7 +446,7 @@ class InventoryModel(TreeModel):
                 group_node["group"] = subset["data"].get("subsetGroup")

             if self.sync_enabled:
-                progress = get_progress_for_repre(
+                progress = self.sync_server.get_progress_for_repre(
                     representation, self.active_site, self.remote_site
                 )
                 group_node["active_site"] = self.active_site
diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py
index 73d33392b9..af463e4867 100644
--- a/openpype/tools/sceneinventory/view.py
+++ b/openpype/tools/sceneinventory/view.py
@@ -1,5 +1,6 @@
 import collections
 import logging
+import itertools
 from functools import partial

 from qtpy import QtWidgets, QtCore
@@ -22,7 +23,6 @@ from openpype.pipeline import (
 )
 from openpype.modules import ModulesManager
 from openpype.tools.utils.lib import (
-    get_progress_for_repre,
     iter_model_rows,
     format_version
 )
@@ -54,8 +54,11 @@ class SceneInventoryView(QtWidgets.QTreeView):
         self._selected = None

         manager = ModulesManager()
-        self.sync_server = manager.modules_by_name["sync_server"]
-        self.sync_enabled = self.sync_server.enabled
+        sync_server = manager.modules_by_name.get("sync_server")
+        sync_enabled = sync_server is not None and sync_server.enabled
+
+        self.sync_server = sync_server
+        self.sync_enabled = sync_enabled

     def _set_hierarchy_view(self, enabled):
         if enabled == self._hierarchy_view:
@@ -195,20 +198,17 @@ class SceneInventoryView(QtWidgets.QTreeView):
                     version_name_by_id[version_doc["_id"]] = \
                         version_doc["name"]

+            # Specify version per item to update to
+            update_items = []
+            update_versions = []
             for item in items:
                 repre_id = item["representation"]
                 version_id = version_id_by_repre_id.get(repre_id)
                 version_name = version_name_by_id.get(version_id)
                 if version_name is not None:
-                    try:
-                        update_container(item, version_name)
-                    except AssertionError:
-                        self._show_version_error_dialog(
-                            version_name, [item]
-                        )
-                        log.warning("Update failed", exc_info=True)
-
-            self.data_changed.emit()
+                    update_items.append(item)
+                    update_versions.append(version_name)
+            self._update_containers(update_items, update_versions)

         update_icon = qtawesome.icon(
             "fa.asterisk",
@@ -225,16 +225,6 @@ class SceneInventoryView(QtWidgets.QTreeView):

         update_to_latest_action = None
         if has_outdated or has_loaded_hero_versions:
-            # update to latest version
-            def _on_update_to_latest(items):
-                for item in items:
-                    try:
-                        update_container(item, -1)
-                    except AssertionError:
-                        self._show_version_error_dialog(None, [item])
-                        log.warning("Update failed", exc_info=True)
-                self.data_changed.emit()
-
             update_icon = qtawesome.icon(
                 "fa.angle-double-up",
                 color=DEFAULT_COLOR
@@ -245,21 +235,11 @@ class SceneInventoryView(QtWidgets.QTreeView):
                 menu
             )
             update_to_latest_action.triggered.connect(
-                lambda: _on_update_to_latest(items)
+                lambda: self._update_containers(items, version=-1)
            )

         change_to_hero = None
         if has_available_hero_version:
-            # change to hero version
-            def _on_update_to_hero(items):
-                for item in items:
-                    try:
-                        update_container(item, HeroVersionType(-1))
-                    except AssertionError:
-                        self._show_version_error_dialog('hero', [item])
-                        log.warning("Update failed", exc_info=True)
-                self.data_changed.emit()
-
             # TODO change icon
             change_icon = qtawesome.icon(
                 "fa.asterisk",
@@ -271,7 +251,8 @@ class SceneInventoryView(QtWidgets.QTreeView):
                 menu
             )
             change_to_hero.triggered.connect(
-                lambda: _on_update_to_hero(items)
+                lambda: self._update_containers(items,
+                                                version=HeroVersionType(-1))
             )

         # set version
@@ -382,7 +363,7 @@ class SceneInventoryView(QtWidgets.QTreeView):
             if not repre_doc:
                 continue

-            progress = get_progress_for_repre(
+            progress = self.sync_server.get_progress_for_repre(
                 repre_doc,
                 active_site,
                 remote_site
@@ -740,14 +721,7 @@ class SceneInventoryView(QtWidgets.QTreeView):

         if label:
             version = versions_by_label[label]
-            for item in items:
-                try:
-                    update_container(item, version)
-                except AssertionError:
-                    self._show_version_error_dialog(version, [item])
-                    log.warning("Update failed", exc_info=True)
-            # refresh model when done
-            self.data_changed.emit()
+            self._update_containers(items, version)

     def _show_switch_dialog(self, items):
         """Display Switch dialog"""
@@ -782,9 +756,9 @@ class SceneInventoryView(QtWidgets.QTreeView):
         Args:
             version: str or int or None
         """
-        if not version:
+        if version == -1:
             version_str = "latest"
-        elif version == "hero":
+        elif isinstance(version, HeroVersionType):
             version_str = "hero"
         elif isinstance(version, int):
             version_str = "v{:03d}".format(version)
@@ -841,10 +815,43 @@ class SceneInventoryView(QtWidgets.QTreeView):
             return

         # Trigger update to latest
-        for item in outdated_items:
-            try:
-                update_container(item, -1)
-            except AssertionError:
-                self._show_version_error_dialog(None, [item])
-                log.warning("Update failed", exc_info=True)
-        self.data_changed.emit()
+        self._update_containers(outdated_items, version=-1)
+
+    def _update_containers(self, items, version):
+        """Helper to update items to given version (or version per item)
+
+        If at least one item is specified this will always try to refresh
+        the inventory even if errors occurred on any of the items.
+
+        Arguments:
+            items (list): Items to update
+            version (int or list): Version to set to.
+                This can be a list specifying a version for each item.
+                Like `update_container` version -1 sets the latest version
+                and HeroVersionType instances set the hero version.
+
+        """
+
+        if isinstance(version, (list, tuple)):
+            # We allow a unique version to be specified per item. In that case
+            # the length must match with the items
+            assert len(items) == len(version), (
+                "Number of items mismatches number of versions: "
+                "{} items - {} versions".format(len(items), len(version))
+            )
+            versions = version
+        else:
+            # Repeat the same version infinitely
+            versions = itertools.repeat(version)

+        # Trigger update to latest
+        try:
+            for item, item_version in zip(items, versions):
+                try:
+                    update_container(item, item_version)
+                except AssertionError:
+                    self._show_version_error_dialog(item_version, [item])
+                    log.warning("Update failed", exc_info=True)
+        finally:
+            # Always update the scene inventory view, even if errors occurred
+            self.data_changed.emit()
diff --git a/openpype/tools/settings/__init__.py b/openpype/tools/settings/__init__.py
index a5b1ea51a5..04f64e13f1 100644
--- a/openpype/tools/settings/__init__.py
+++ b/openpype/tools/settings/__init__.py
@@ -1,7 +1,8 @@
 import sys

-from qtpy import QtWidgets, QtGui
+from qtpy import QtGui

 from openpype import style
+from openpype.tools.utils import get_openpype_qt_app
 from .lib import (
     BTN_FIXED_SIZE,
     CHILD_OFFSET
@@ -24,9 +25,7 @@ def main(user_role=None):
             user_role, ", ".join(allowed_roles)
         ))

-    app = QtWidgets.QApplication.instance()
-    if not app:
-        app = QtWidgets.QApplication(sys.argv)
+    app = get_openpype_qt_app()
     app.setWindowIcon(QtGui.QIcon(style.app_icon_path()))

     widget = MainWidget(user_role)
diff --git a/openpype/tools/settings/local_settings/projects_widget.py b/openpype/tools/settings/local_settings/projects_widget.py
index 4a4148d7cd..f2b6535115 100644
--- a/openpype/tools/settings/local_settings/projects_widget.py
+++ b/openpype/tools/settings/local_settings/projects_widget.py
@@ -267,25 +267,26 @@ class SitesWidget(QtWidgets.QWidget):
         self.input_objects = {}

     def _get_sites_inputs(self):
-        sync_server_module = (
-            self.modules_manager.modules_by_name["sync_server"]
-        )
+        output = []
+        if self._project_name is None:
+            return output
+
+        sync_server_module = self.modules_manager.modules_by_name.get(
+            "sync_server")
+        if sync_server_module is None or not sync_server_module.enabled:
+            return output

         site_configs = sync_server_module.get_all_site_configs(
             self._project_name, local_editable_only=True)

-        roots_entity = (
-            self.project_settings[PROJECT_ANATOMY_KEY][LOCAL_ROOTS_KEY]
-        )
         site_names = [self.active_site_widget.current_text(),
                       self.remote_site_widget.current_text()]
-        output = []
         for site_name in site_names:
             if not site_name:
                 continue

             site_inputs = []
-            site_config = site_configs[site_name]
+            site_config = site_configs.get(site_name, {})
             for root_name, path_entity in site_config.get("root", {}).items():
                 if not path_entity:
                     continue
@@ -350,9 +351,6 @@ class SitesWidget(QtWidgets.QWidget):
     def refresh(self):
         self._clear_widgets()

-        if self._project_name is None:
-            return
-
         # Site label
         for site_name, site_inputs in self._get_sites_inputs():
             site_widget = QtWidgets.QWidget(self.content_widget)
diff --git a/openpype/tools/settings/settings/item_widgets.py b/openpype/tools/settings/settings/item_widgets.py
index 117eca7d6b..2fd13cbbd8 100644
--- a/openpype/tools/settings/settings/item_widgets.py
+++ b/openpype/tools/settings/settings/item_widgets.py
@@ -4,6 +4,7 @@ from qtpy import QtWidgets, QtCore, QtGui

 from openpype.widgets.sliders import NiceSlider
 from openpype.tools.settings import CHILD_OFFSET
+from openpype.tools.utils import MultiSelectionComboBox
 from openpype.settings.entities.exceptions import BaseInvalidValue

 from .widgets import (
@@ -15,7 +16,6 @@ from .widgets import (
     SettingsNiceCheckbox,
     SettingsLineEdit
SettingsNiceCheckbox, SettingsLineEdit ) -from .multiselection_combobox import MultiSelectionComboBox from .wrapper_widgets import ( WrapperWidget, CollapsibleWrapper, diff --git a/openpype/tools/standalonepublish/widgets/widget_asset.py b/openpype/tools/standalonepublish/widgets/widget_asset.py index 5da25a0c3e..669366dd1d 100644 --- a/openpype/tools/standalonepublish/widgets/widget_asset.py +++ b/openpype/tools/standalonepublish/widgets/widget_asset.py @@ -2,6 +2,7 @@ import contextlib from qtpy import QtWidgets, QtCore import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_projects, get_project, @@ -181,7 +182,8 @@ class AssetWidget(QtWidgets.QWidget): filter = PlaceholderLineEdit() filter.textChanged.connect(proxy.setFilterFixedString) - filter.setPlaceholderText("Filter assets..") + filter.setPlaceholderText("Filter {}..".format( + "folders" if AYON_SERVER_ENABLED else "assets")) header.addWidget(filter) header.addWidget(refresh) diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py index f46e31786c..306c43e85d 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py @@ -5,6 +5,8 @@ import clique import subprocess import openpype.lib from qtpy import QtWidgets, QtCore + +from openpype.lib import get_ffprobe_data from . import DropEmpty, ComponentsList, ComponentItem @@ -269,26 +271,8 @@ class DropDataFrame(QtWidgets.QFrame): self._process_data(data) def load_data_with_probe(self, filepath): - ffprobe_path = openpype.lib.get_ffmpeg_tool_path("ffprobe") - args = [ - "\"{}\"".format(ffprobe_path), - '-v', 'quiet', - '-print_format json', - '-show_format', - '-show_streams', - '"{}"'.format(filepath) - ] - ffprobe_p = subprocess.Popen( - ' '.join(args), - stdout=subprocess.PIPE, - shell=True - ) - ffprobe_output = ffprobe_p.communicate()[0] - if ffprobe_p.returncode != 0: - raise RuntimeError( - 'Failed on ffprobe: check if ffprobe path is set in PATH env' - ) - return json.loads(ffprobe_output)['streams'][0] + ffprobe_data = get_ffprobe_data(filepath) + return ffprobe_data["streams"][0] def get_file_data(self, data): filepath = data['files'][0] diff --git a/openpype/tools/standalonepublish/widgets/widget_family.py b/openpype/tools/standalonepublish/widgets/widget_family.py index 8c18a93a00..73dc2122db 100644 --- a/openpype/tools/standalonepublish/widgets/widget_family.py +++ b/openpype/tools/standalonepublish/widgets/widget_family.py @@ -10,6 +10,7 @@ from openpype.client import ( ) from openpype.settings import get_project_settings from openpype.pipeline import LegacyCreator +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, TaskNotSetError, @@ -299,7 +300,15 @@ class FamilyWidget(QtWidgets.QWidget): project_name = self.dbcon.active_project() asset_name = self.asset_name subset_name = self.input_result.text() - version = 1 + plugin = self.list_families.currentItem().data(PluginRole) + family = plugin.family.rsplit(".", 1)[-1] + version = get_versioning_start( + project_name, + "standalonepublisher", + task_name=self.dbcon.Session["AVALON_TASK"], + family=family, + subset=subset_name + ) asset_doc = None subset_doc = None diff --git a/openpype/tools/tray/pype_info_widget.py b/openpype/tools/tray/pype_info_widget.py index c616ad4dba..dc222b79b5 100644 --- a/openpype/tools/tray/pype_info_widget.py +++ 
b/openpype/tools/tray/pype_info_widget.py @@ -2,11 +2,14 @@ import os import json import collections +import ayon_api from qtpy import QtCore, QtGui, QtWidgets from openpype import style from openpype import resources +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import get_local_settings +from openpype.lib import get_openpype_execute_args from openpype.lib.pype_info import ( get_all_current_info, get_openpype_info, @@ -327,8 +330,9 @@ class PypeInfoSubWidget(QtWidgets.QWidget): main_layout.addWidget(self._create_openpype_info_widget(), 0) main_layout.addWidget(self._create_separator(), 0) main_layout.addWidget(self._create_workstation_widget(), 0) - main_layout.addWidget(self._create_separator(), 0) - main_layout.addWidget(self._create_local_settings_widget(), 0) + if not AYON_SERVER_ENABLED: + main_layout.addWidget(self._create_separator(), 0) + main_layout.addWidget(self._create_local_settings_widget(), 0) main_layout.addWidget(self._create_separator(), 0) main_layout.addWidget(self._create_environ_widget(), 1) @@ -425,31 +429,59 @@ class PypeInfoSubWidget(QtWidgets.QWidget): def _create_openpype_info_widget(self): """Create widget with information about OpenPype application.""" - # Get pype info data - pype_info = get_openpype_info() - # Modify version key/values - version_value = "{} ({})".format( - pype_info.pop("version", self.not_applicable), - pype_info.pop("version_type", self.not_applicable) - ) - pype_info["version_value"] = version_value - # Prepare label mapping - key_label_mapping = { - "version_value": "Running version:", - "build_verison": "Build version:", - "executable": "OpenPype executable:", - "pype_root": "OpenPype location:", - "mongo_url": "OpenPype Mongo URL:" - } - # Prepare keys order - keys_order = [ - "version_value", - "build_verison", - "executable", - "pype_root", - "mongo_url" - ] - for key in pype_info.keys(): + if AYON_SERVER_ENABLED: + executable_args = get_openpype_execute_args() + username = "N/A" + user_info = ayon_api.get_user() + if user_info: + username = user_info.get("name") or username + full_name = user_info.get("attrib", {}).get("fullName") + if full_name: + username = "{} ({})".format(full_name, username) + info_values = { + "executable": executable_args[-1], + "server_url": os.environ["AYON_SERVER_URL"], + "username": username + } + key_label_mapping = { + "executable": "AYON Executable:", + "server_url": "AYON Server:", + "username": "AYON Username:" + } + # Prepare keys order + keys_order = [ + "server_url", + "username", + "executable", + ] + + else: + # Get pype info data + info_values = get_openpype_info() + # Modify version key/values + version_value = "{} ({})".format( + info_values.pop("version", self.not_applicable), + info_values.pop("version_type", self.not_applicable) + ) + info_values["version_value"] = version_value + # Prepare label mapping + key_label_mapping = { + "version_value": "Running version:", + "build_verison": "Build version:", + "executable": "OpenPype executable:", + "pype_root": "OpenPype location:", + "mongo_url": "OpenPype Mongo URL:" + } + # Prepare keys order + keys_order = [ + "version_value", + "build_verison", + "executable", + "pype_root", + "mongo_url" + ] + + for key in info_values.keys(): if key not in keys_order: keys_order.append(key) @@ -466,9 +498,9 @@ class PypeInfoSubWidget(QtWidgets.QWidget): info_layout.addWidget(title_label, 0, 0, 1, 2) for key in keys_order: - if key not in pype_info: + if key not in info_values: continue - value = pype_info[key] + value = 
info_values[key] label = key_label_mapping.get(key, key) row = info_layout.rowCount() info_layout.addWidget( diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py index fdc0a8094d..a5876ca721 100644 --- a/openpype/tools/tray/pype_tray.py +++ b/openpype/tools/tray/pype_tray.py @@ -8,6 +8,7 @@ import platform from qtpy import QtCore, QtGui, QtWidgets import openpype.version +from openpype import AYON_SERVER_ENABLED from openpype import resources, style from openpype.lib import ( Logger, @@ -35,7 +36,8 @@ from openpype.settings import ( from openpype.tools.utils import ( WrappedCallbackItem, paint_image_with_color, - get_warning_pixmap + get_warning_pixmap, + get_openpype_qt_app, ) from .pype_info_widget import PypeInfoWidget @@ -589,6 +591,11 @@ class TrayManager: self.tray_widget.showMessage(*args, **kwargs) def _add_version_item(self): + if AYON_SERVER_ENABLED: + login_action = QtWidgets.QAction("Login", self.tray_widget) + login_action.triggered.connect(self._on_ayon_login) + self.tray_widget.menu.addAction(login_action) + subversion = os.environ.get("OPENPYPE_SUBVERSION") client_name = os.environ.get("OPENPYPE_CLIENT") @@ -614,6 +621,19 @@ class TrayManager: self._restart_action = restart_action + def _on_ayon_login(self): + self.execute_in_main_thread(self._show_ayon_login) + + def _show_ayon_login(self): + from ayon_common.connection.credentials import change_user_ui + + result = change_user_ui() + if result.shutdown: + self.exit() + + elif result.restart or result.token_changed: + self.restart() + def _on_restart_action(self): self.restart(use_expected_version=True) @@ -839,37 +859,7 @@ class PypeTrayStarter(QtCore.QObject): def main(): - log = Logger.get_logger(__name__) - app = QtWidgets.QApplication.instance() - - high_dpi_scale_attr = None - if not app: - # 'AA_EnableHighDpiScaling' must be set before app instance creation - high_dpi_scale_attr = getattr( - QtCore.Qt, "AA_EnableHighDpiScaling", None - ) - if high_dpi_scale_attr is not None: - QtWidgets.QApplication.setAttribute(high_dpi_scale_attr) - - app = QtWidgets.QApplication([]) - - if high_dpi_scale_attr is None: - log.debug(( - "Attribute 'AA_EnableHighDpiScaling' was not set." - " UI quality may be affected." - )) - - for attr_name in ( - "AA_UseHighDpiPixmaps", - ): - attr = getattr(QtCore.Qt, attr_name, None) - if attr is None: - log.debug(( - "Missing QtCore.Qt attribute \"{}\"." - " UI quality may be affected." 
-            ).format(attr_name))
-        else:
-            app.setAttribute(attr)
+    app = get_openpype_qt_app()

     starter = PypeTrayStarter(app)
diff --git a/openpype/tools/traypublisher/window.py b/openpype/tools/traypublisher/window.py
index 3ac1b4c4ad..a1ed38dcc0 100644
--- a/openpype/tools/traypublisher/window.py
+++ b/openpype/tools/traypublisher/window.py
@@ -17,7 +17,7 @@ from openpype.pipeline import install_host
 from openpype.hosts.traypublisher.api import TrayPublisherHost
 from openpype.tools.publisher.control_qt import QtPublisherController
 from openpype.tools.publisher.window import PublisherWindow
-from openpype.tools.utils import PlaceholderLineEdit
+from openpype.tools.utils import PlaceholderLineEdit, get_openpype_qt_app
 from openpype.tools.utils.constants import PROJECT_NAME_ROLE
 from openpype.tools.utils.models import (
     ProjectModel,
@@ -263,9 +263,7 @@ def main():
     host = TrayPublisherHost()
     install_host(host)

-    app_instance = QtWidgets.QApplication.instance()
-    if app_instance is None:
-        app_instance = QtWidgets.QApplication([])
+    app_instance = get_openpype_qt_app()

     if platform.system().lower() == "windows":
         import ctypes
diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py
index 10bd527692..50d50f467a 100644
--- a/openpype/tools/utils/__init__.py
+++ b/openpype/tools/utils/__init__.py
@@ -15,8 +15,15 @@ from .widgets import (
     IconButton,
     PixmapButton,
     SeparatorWidget,
+    VerticalExpandButton,
+    SquareButton,
+    RefreshButton,
+    GoToCurrentButton,
+)
+from .views import (
+    DeselectableTreeView,
+    TreeView,
 )
-from .views import DeselectableTreeView
 from .error_dialog import ErrorMessageBox
 from .lib import (
     WrappedCallbackItem,
@@ -25,6 +32,7 @@ from .lib import (
     set_style_property,
     DynamicQThread,
     qt_app_context,
+    get_openpype_qt_app,
     get_asset_icon,
     get_asset_icon_by_name,
     get_asset_icon_name_from_doc,
@@ -37,6 +45,8 @@ from .models import (
 from .overlay_messages import (
     MessageOverlayObject,
 )
+from .multiselection_combobox import MultiSelectionComboBox
+from .thumbnail_paint_widget import ThumbnailPainterWidget


 __all__ = (
@@ -58,7 +68,13 @@ __all__ = (
     "PixmapButton",
     "SeparatorWidget",

+    "VerticalExpandButton",
+    "SquareButton",
+    "RefreshButton",
+    "GoToCurrentButton",
+
     "DeselectableTreeView",
+    "TreeView",

     "ErrorMessageBox",

@@ -68,6 +84,7 @@ __all__ = (
     "set_style_property",
     "DynamicQThread",
     "qt_app_context",
+    "get_openpype_qt_app",
     "get_asset_icon",
     "get_asset_icon_by_name",
     "get_asset_icon_name_from_doc",
@@ -76,4 +93,8 @@ __all__ = (
     "RecursiveSortFilterProxyModel",

     "MessageOverlayObject",
+
+    "MultiSelectionComboBox",
+
+    "ThumbnailPainterWidget",
 )
diff --git a/openpype/tools/utils/assets_widget.py b/openpype/tools/utils/assets_widget.py
index ffbdd995d6..a45d762c73 100644
--- a/openpype/tools/utils/assets_widget.py
+++ b/openpype/tools/utils/assets_widget.py
@@ -5,6 +5,7 @@ import qtpy
 from qtpy import QtWidgets, QtCore, QtGui
 import qtawesome

+from openpype import AYON_SERVER_ENABLED
 from openpype.client import (
     get_project,
     get_assets,
@@ -607,7 +608,8 @@ class AssetsWidget(QtWidgets.QWidget):
         refresh_btn.setToolTip("Refresh items")

         filter_input = PlaceholderLineEdit(header_widget)
-        filter_input.setPlaceholderText("Filter assets..")
+        filter_input.setPlaceholderText("Filter {}..".format(
+            "folders" if AYON_SERVER_ENABLED else "assets"))

         # Header
         header_layout = QtWidgets.QHBoxLayout(header_widget)
diff --git a/openpype/tools/utils/constants.py b/openpype/tools/utils/constants.py
index 99f2602ee3..77324762b3 100644
--- a/openpype/tools/utils/constants.py
+++ b/openpype/tools/utils/constants.py
@@ -5,6 +5,12 @@ UNCHECKED_INT = getattr(QtCore.Qt.Unchecked, "value", 0)
 PARTIALLY_CHECKED_INT = getattr(QtCore.Qt.PartiallyChecked, "value", 1)
 CHECKED_INT = getattr(QtCore.Qt.Checked, "value", 2)

+# Checkbox state
+try:
+    ITEM_IS_USER_TRISTATE = QtCore.Qt.ItemIsUserTristate
+except AttributeError:
+    ITEM_IS_USER_TRISTATE = QtCore.Qt.ItemIsTristate
+
 DEFAULT_PROJECT_LABEL = "< Default >"

 PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 101
 PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 102
diff --git a/openpype/tools/utils/delegates.py b/openpype/tools/utils/delegates.py
index c71c87f9b0..c51323e556 100644
--- a/openpype/tools/utils/delegates.py
+++ b/openpype/tools/utils/delegates.py
@@ -24,9 +24,12 @@ class VersionDelegate(QtWidgets.QStyledItemDelegate):
     lock = False

     def __init__(self, dbcon, *args, **kwargs):
-        self.dbcon = dbcon
+        self._dbcon = dbcon
         super(VersionDelegate, self).__init__(*args, **kwargs)

+    def get_project_name(self):
+        return self._dbcon.active_project()
+
     def displayText(self, value, locale):
         if isinstance(value, HeroVersionType):
             return lib.format_version(value, True)
@@ -120,7 +123,7 @@ class VersionDelegate(QtWidgets.QStyledItemDelegate):
                 "Version is not integer"
             )

-        project_name = self.dbcon.active_project()
+        project_name = self.get_project_name()
         # Add all available versions to the editor
         parent_id = item["version_document"]["parent"]
         version_docs = [
diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py
index ac242d24d2..29c8c0ba8e 100644
--- a/openpype/tools/utils/host_tools.py
+++ b/openpype/tools/utils/host_tools.py
@@ -6,11 +6,13 @@ use singleton approach with global functions (using helper anyway).
 import os

 import pyblish.api
+
+from openpype import AYON_SERVER_ENABLED
 from openpype.host import IWorkfileHost, ILoadHost
 from openpype.lib import Logger
 from openpype.pipeline import (
     registered_host,
-    legacy_io,
+    get_current_asset_name,
 )

 from .lib import qt_app_context
@@ -46,17 +48,29 @@ class HostToolsHelper:
             self._log = Logger.get_logger(self.__class__.__name__)
         return self._log

+    def _init_ayon_workfiles_tool(self, parent):
+        from openpype.tools.ayon_workfiles.widgets import WorkfilesToolWindow
+
+        workfiles_window = WorkfilesToolWindow(parent=parent)
+        self._workfiles_tool = workfiles_window
+
+    def _init_openpype_workfiles_tool(self, parent):
+        from openpype.tools.workfiles.app import Window
+
+        # Host validation
+        host = registered_host()
+        IWorkfileHost.validate_workfile_methods(host)
+
+        workfiles_window = Window(parent=parent)
+        self._workfiles_tool = workfiles_window
+
     def get_workfiles_tool(self, parent):
         """Create, cache and return workfiles tool window."""
         if self._workfiles_tool is None:
-            from openpype.tools.workfiles.app import Window
-
-            # Host validation
-            host = registered_host()
-            IWorkfileHost.validate_workfile_methods(host)
-
-            workfiles_window = Window(parent=parent)
-            self._workfiles_tool = workfiles_window
+            if AYON_SERVER_ENABLED:
+                self._init_ayon_workfiles_tool(parent)
+            else:
+                self._init_openpype_workfiles_tool(parent)

         return self._workfiles_tool

@@ -72,12 +86,22 @@ class HostToolsHelper:
     def get_loader_tool(self, parent):
         """Create, cache and return loader tool window."""
         if self._loader_tool is None:
-            from openpype.tools.loader import LoaderWindow
-
             host = registered_host()
             ILoadHost.validate_load_methods(host)

+            if AYON_SERVER_ENABLED:
+                from openpype.tools.ayon_loader.ui import LoaderWindow
+                from openpype.tools.ayon_loader import LoaderController

-            loader_window = LoaderWindow(parent=parent or self._parent)
+                controller = LoaderController(host=host)
+                loader_window = LoaderWindow(
+                    controller=controller,
+                    parent=parent or self._parent
+                )
+
+            else:
+                from openpype.tools.loader import LoaderWindow
+
+                loader_window = LoaderWindow(parent=parent or self._parent)
             self._loader_tool = loader_window

         return self._loader_tool
@@ -95,8 +119,8 @@ class HostToolsHelper:
         if use_context is None:
             use_context = False

-        if use_context:
-            context = {"asset": legacy_io.Session["AVALON_ASSET"]}
+        if not AYON_SERVER_ENABLED and use_context:
+            context = {"asset": get_current_asset_name()}
             loader_tool.set_context(context, refresh=True)
         else:
             loader_tool.refresh()
@@ -147,14 +171,23 @@ class HostToolsHelper:
     def get_scene_inventory_tool(self, parent):
         """Create, cache and return scene inventory tool window."""
         if self._scene_inventory_tool is None:
-            from openpype.tools.sceneinventory import SceneInventoryWindow
-
             host = registered_host()
             ILoadHost.validate_load_methods(host)

-            scene_inventory_window = SceneInventoryWindow(
-                parent=parent or self._parent
-            )
+            if AYON_SERVER_ENABLED:
+                from openpype.tools.ayon_sceneinventory.window import (
+                    SceneInventoryWindow)
+
+                scene_inventory_window = SceneInventoryWindow(
+                    parent=parent or self._parent
+                )
+
+            else:
+                from openpype.tools.sceneinventory import SceneInventoryWindow
+
+                scene_inventory_window = SceneInventoryWindow(
+                    parent=parent or self._parent
+                )
             self._scene_inventory_tool = scene_inventory_window

         return self._scene_inventory_tool
@@ -173,6 +206,9 @@ class HostToolsHelper:

     def get_library_loader_tool(self, parent):
         """Create, cache and return library loader tool window."""
+        if AYON_SERVER_ENABLED:
+            return self.get_loader_tool(parent)
+
         if self._library_loader_tool is None:
             from openpype.tools.libraryloader import LibraryLoaderWindow

@@ -185,6 +221,9 @@ class HostToolsHelper:

     def show_library_loader(self, parent=None):
         """Loader tool for loading representations from library project."""
+        if AYON_SERVER_ENABLED:
+            return self.show_loader(parent)
+
         with qt_app_context():
             library_loader_tool = self.get_library_loader_tool(parent)
             library_loader_tool.show()
diff --git a/openpype/tools/utils/images/__init__.py b/openpype/tools/utils/images/__init__.py
new file mode 100644
index 0000000000..3f437fcc8c
--- /dev/null
+++ b/openpype/tools/utils/images/__init__.py
@@ -0,0 +1,56 @@
+import os
+
+from qtpy import QtGui
+
+IMAGES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)))
+
+
+def get_image_path(filename):
+    """Get image path from './images'.
+
+    Returns:
+        Union[str, None]: Path to image file or None if not found.
+    """
+
+    path = os.path.join(IMAGES_DIR, filename)
+    if os.path.exists(path):
+        return path
+    return None
+
+
+def get_image(filename):
+    """Load image from './images' as QImage.
+
+    Returns:
+        Union[QtGui.QImage, None]: QImage or None if not found.
+    """
+
+    path = get_image_path(filename)
+    if path:
+        return QtGui.QImage(path)
+    return None
+
+
+def get_pixmap(filename):
+    """Load image from './images' as QPixmap.
+
+    Returns:
+        Union[QtGui.QPixmap, None]: QPixmap or None if not found.
+    """
+
+    path = get_image_path(filename)
+    if path:
+        return QtGui.QPixmap(path)
+    return None
+
+
+def get_icon(filename):
+    """Load image from './images' as QIcon.
+
+    Returns:
+        Union[QtGui.QIcon, None]: QIcon or None if not found.
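+
+    The icon is built from the 'get_pixmap' output, so the same file
+    lookup applies.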
+ """ + + pix = get_pixmap(filename) + if pix: + return QtGui.QIcon(pix) + return None diff --git a/openpype/tools/utils/images/thumbnail.png b/openpype/tools/utils/images/thumbnail.png new file mode 100644 index 0000000000..adea862e5b Binary files /dev/null and b/openpype/tools/utils/images/thumbnail.png differ diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py index 58ece7c68f..723e71e7aa 100644 --- a/openpype/tools/utils/lib.py +++ b/openpype/tools/utils/lib.py @@ -14,11 +14,16 @@ from openpype.client import ( from openpype.style import ( get_default_entity_icon_color, get_objected_colors, + get_app_icon_path, ) from openpype.resources import get_image_path from openpype.lib import filter_profiles, Logger from openpype.settings import get_project_settings -from openpype.pipeline import registered_host +from openpype.pipeline import ( + registered_host, + get_current_context, + get_current_host_name, +) from .constants import CHECKED_INT, UNCHECKED_INT @@ -46,7 +51,6 @@ def checkstate_enum_to_int(state): return 2 - def center_window(window): """Move window to center of it's screen.""" @@ -149,6 +153,40 @@ def qt_app_context(): yield app +def get_openpype_qt_app(): + """Main Qt application initialized for OpenPype processed. + + This function should be used only inside OpenPype process and never inside + other processes. + """ + + app = QtWidgets.QApplication.instance() + if app is None: + for attr_name in ( + "AA_EnableHighDpiScaling", + "AA_UseHighDpiPixmaps", + ): + attr = getattr(QtCore.Qt, attr_name, None) + if attr is not None: + QtWidgets.QApplication.setAttribute(attr) + + policy = os.getenv("QT_SCALE_FACTOR_ROUNDING_POLICY") + if ( + hasattr( + QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy" + ) + and not policy + ): + QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( + QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough + ) + + app = QtWidgets.QApplication(sys.argv) + + app.setWindowIcon(QtGui.QIcon(get_app_icon_path())) + return app + + class SharedObjects: jobs = {} icons = {} @@ -496,10 +534,11 @@ class FamilyConfigCache: return # Update the icons from the project configuration - project_name = os.environ.get("AVALON_PROJECT") - asset_name = os.environ.get("AVALON_ASSET") - task_name = os.environ.get("AVALON_TASK") - host_name = os.environ.get("AVALON_APP") + context = get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] + host_name = get_current_host_name() if not all((project_name, asset_name, task_name)): return @@ -725,20 +764,23 @@ def create_qthread(func, *args, **kwargs): def get_repre_icons(): """Returns a dict {'provider_name': QIcon}""" + icons = {} try: from openpype_modules import sync_server except Exception: # Backwards compatibility - from openpype.modules import sync_server + try: + from openpype.modules import sync_server + except Exception: + return icons resource_path = os.path.join( os.path.dirname(sync_server.sync_server_module.__file__), "providers", "resources" ) - icons = {} if not os.path.exists(resource_path): print("No icons for Site Sync found") - return {} + return icons for file_name in os.listdir(resource_path): if file_name and not file_name.endswith("png"): @@ -752,61 +794,6 @@ def get_repre_icons(): return icons -def get_progress_for_repre(doc, active_site, remote_site): - """ - Calculates average progress for representation. 
- - If site has created_dt >> fully available >> progress == 1 - - Could be calculated in aggregate if it would be too slow - Args: - doc(dict): representation dict - Returns: - (dict) with active and remote sites progress - {'studio': 1.0, 'gdrive': -1} - gdrive site is not present - -1 is used to highlight the site should be added - {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not - uploaded yet - """ - progress = {active_site: -1, - remote_site: -1} - if not doc: - return progress - - files = {active_site: 0, remote_site: 0} - doc_files = doc.get("files") or [] - for doc_file in doc_files: - if not isinstance(doc_file, dict): - continue - - sites = doc_file.get("sites") or [] - for site in sites: - if ( - # Pype 2 compatibility - not isinstance(site, dict) - # Check if site name is one of progress sites - or site["name"] not in progress - ): - continue - - files[site["name"]] += 1 - norm_progress = max(progress[site["name"]], 0) - if site.get("created_dt"): - progress[site["name"]] = norm_progress + 1 - elif site.get("progress"): - progress[site["name"]] = norm_progress + site["progress"] - else: # site exists, might be failed, do not add again - progress[site["name"]] = 0 - - # for example 13 fully avail. files out of 26 >> 13/26 = 0.5 - avg_progress = {} - avg_progress[active_site] = \ - progress[active_site] / max(files[active_site], 1) - avg_progress[remote_site] = \ - progress[remote_site] / max(files[remote_site], 1) - return avg_progress - - def is_sync_loader(loader): return is_remove_site_loader(loader) or is_add_site_loader(loader) diff --git a/openpype/tools/settings/settings/multiselection_combobox.py b/openpype/tools/utils/multiselection_combobox.py similarity index 80% rename from openpype/tools/settings/settings/multiselection_combobox.py rename to openpype/tools/utils/multiselection_combobox.py index 896be3c06c..34361fca17 100644 --- a/openpype/tools/settings/settings/multiselection_combobox.py +++ b/openpype/tools/utils/multiselection_combobox.py @@ -1,5 +1,14 @@ from qtpy import QtCore, QtGui, QtWidgets -from openpype.tools.utils.lib import checkstate_int_to_enum + +from .lib import ( + checkstate_int_to_enum, + checkstate_enum_to_int, +) +from .constants import ( + CHECKED_INT, + UNCHECKED_INT, + ITEM_IS_USER_TRISTATE, +) class ComboItemDelegate(QtWidgets.QStyledItemDelegate): @@ -30,7 +39,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): QtCore.Qt.Key_PageDown, QtCore.Qt.Key_PageUp, QtCore.Qt.Key_Home, - QtCore.Qt.Key_End + QtCore.Qt.Key_End, } top_bottom_padding = 2 @@ -52,12 +61,25 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): self._block_mouse_release_timer = QtCore.QTimer(self, singleShot=True) self._initial_mouse_pos = None self._separator = separator - self.placeholder_text = placeholder - self.delegate = ComboItemDelegate(self) - self.setItemDelegate(self.delegate) + self._placeholder_text = placeholder + delegate = ComboItemDelegate(self) + self.setItemDelegate(delegate) - self.lines = {} - self.item_height = None + self._lines = {} + self._item_height = None + self._custom_text = None + self._delegate = delegate + + def get_placeholder_text(self): + return self._placeholder_text + + def set_placeholder_text(self, text): + self._placeholder_text = text + self._update_size_hint() + + def set_custom_text(self, text): + self._custom_text = text + self._update_size_hint() def focusInEvent(self, event): self.focused_in.emit() @@ -127,30 +149,30 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): return if state == QtCore.Qt.Unchecked: - 
new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT elif event.type() == QtCore.QEvent.KeyPress: # TODO: handle QtCore.Qt.Key_Enter, Key_Return? if event.key() == QtCore.Qt.Key_Space: - # toggle the current items check state if ( index_flags & QtCore.Qt.ItemIsUserCheckable - and index_flags & QtCore.Qt.ItemIsTristate + and index_flags & ITEM_IS_USER_TRISTATE ): - new_state = QtCore.Qt.CheckState((int(state) + 1) % 3) + new_state = (checkstate_enum_to_int(state) + 1) % 3 elif index_flags & QtCore.Qt.ItemIsUserCheckable: + # toggle the current items check state if state != QtCore.Qt.Checked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT if new_state is not None: model.setData(current_index, new_state, QtCore.Qt.CheckStateRole) self.view().update(current_index) - self.update_size_hint() + self._update_size_hint() self.value_changed.emit() return True @@ -174,25 +196,33 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): self.initStyleOption(option) painter.drawComplexControl(QtWidgets.QStyle.CC_ComboBox, option) - # draw the icon and text items = self.checked_items_text() - if not items: - option.currentText = self.placeholder_text + # draw the icon and text + draw_text = True + combotext = None + if self._custom_text is not None: + combotext = self._custom_text + elif not items: + combotext = self._placeholder_text + else: + draw_text = False + if draw_text: + option.currentText = combotext option.palette.setCurrentColorGroup(QtGui.QPalette.Disabled) painter.drawControl(QtWidgets.QStyle.CE_ComboBoxLabel, option) return font_metricts = self.fontMetrics() - if self.item_height is None: + if self._item_height is None: self.updateGeometry() self.update() return - for line, items in self.lines.items(): + for line, items in self._lines.items(): top_y = ( option.rect.top() - + (line * self.item_height) + + (line * self._item_height) + self.top_bottom_margins ) left_x = option.rect.left() + self.left_offset @@ -202,7 +232,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): label_rect.moveTop(top_y) label_rect.moveLeft(left_x) - label_rect.setHeight(self.item_height) + label_rect.setHeight(self._item_height) label_rect.setWidth( label_rect.width() + self.left_right_padding ) @@ -231,14 +261,18 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): def resizeEvent(self, *args, **kwargs): super(MultiSelectionComboBox, self).resizeEvent(*args, **kwargs) - self.update_size_hint() + self._update_size_hint() - def update_size_hint(self): - self.lines = {} + def _update_size_hint(self): + if self._custom_text is not None: + self.update() + return + self._lines = {} items = self.checked_items_text() if not items: self.update() + self.repaint() return option = QtWidgets.QStyleOptionComboBox() @@ -249,10 +283,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): QtWidgets.QStyle.SC_ComboBoxArrow ) total_width = option.rect.width() - btn_rect.width() - font_metricts = self.fontMetrics() line = 0 - self.lines = {line: []} + self._lines = {line: []} font_metricts = self.fontMetrics() default_left_x = 0 + self.left_offset @@ -263,18 +296,18 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): right_x = left_x + width if right_x > total_width: left_x = int(default_left_x) - if self.lines.get(line): + if self._lines.get(line): line += 1 - self.lines[line] = [item] + self._lines[line] = [item] left_x += width else: - self.lines[line] = [item] + 
self._lines[line] = [item] line += 1 else: - if line in self.lines: - self.lines[line].append(item) + if line in self._lines: + self._lines[line].append(item) else: - self.lines[line] = [item] + self._lines[line] = [item] left_x = left_x + width + self.item_spacing self.update() @@ -282,18 +315,20 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): def sizeHint(self): value = super(MultiSelectionComboBox, self).sizeHint() - lines = len(self.lines) - if lines == 0: - lines = 1 + lines = 1 + if self._custom_text is None: + lines = len(self._lines) + if lines == 0: + lines = 1 - if self.item_height is None: - self.item_height = ( + if self._item_height is None: + self._item_height = ( self.fontMetrics().height() + (2 * self.top_bottom_padding) + (2 * self.top_bottom_margins) ) value.setHeight( - (lines * self.item_height) + (lines * self._item_height) + (2 * self.top_bottom_margins) ) return value @@ -305,11 +340,11 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): for idx in range(self.count()): value = self.itemData(idx, role=QtCore.Qt.UserRole) if value in values: - check_state = QtCore.Qt.Checked + check_state = CHECKED_INT else: - check_state = QtCore.Qt.Unchecked + check_state = UNCHECKED_INT self.setItemData(idx, check_state, QtCore.Qt.CheckStateRole) - self.update_size_hint() + self._update_size_hint() def value(self): items = list() diff --git a/openpype/tools/utils/tasks_widget.py b/openpype/tools/utils/tasks_widget.py index 8c0505223e..b554ed50d3 100644 --- a/openpype/tools/utils/tasks_widget.py +++ b/openpype/tools/utils/tasks_widget.py @@ -75,7 +75,7 @@ class TasksModel(QtGui.QStandardItemModel): def set_asset_id(self, asset_id): asset_doc = None - if self._context_is_valid(): + if asset_id and self._context_is_valid(): project_name = self._get_current_project() asset_doc = get_asset_by_id( project_name, asset_id, fields=["data.tasks"] diff --git a/openpype/tools/utils/thumbnail_paint_widget.py b/openpype/tools/utils/thumbnail_paint_widget.py new file mode 100644 index 0000000000..130942aaf0 --- /dev/null +++ b/openpype/tools/utils/thumbnail_paint_widget.py @@ -0,0 +1,366 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.style import get_objected_colors + +from .lib import paint_image_with_color +from .images import get_image + + +class ThumbnailPainterWidget(QtWidgets.QWidget): + """Widget for painting of thumbnails. + + The widget use is to paint thumbnail or multiple thumbnails in a defined + area. Is not meant to show them in a grid but in overlay. + + It is expected that there is a logic that will provide thumbnails to + paint and set them using 'set_current_thumbnails' or + 'set_current_thumbnail_paths'. 
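+
+    Example (a minimal sketch; 'parent_widget' is a placeholder):
+        >>> widget = ThumbnailPainterWidget(parent_widget)
+        >>> widget.set_current_thumbnail_paths(["/path/to/thumbnail.png"])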
+ """ + + width_ratio = 3.0 + height_ratio = 2.0 + border_width = 1 + max_thumbnails = 3 + offset_sep = 4 + checker_boxes_count = 20 + + def __init__(self, parent): + super(ThumbnailPainterWidget, self).__init__(parent) + + border_color = get_objected_colors("bg-buttons").get_qcolor() + thumbnail_bg_color = get_objected_colors("bg-view").get_qcolor() + + default_image = get_image("thumbnail.png") + default_pix = paint_image_with_color(default_image, border_color) + + self._border_color = border_color + self._thumbnail_bg_color = thumbnail_bg_color + self._default_pix = default_pix + + self._cached_pix = None + self._current_pixes = None + self._has_pixes = False + + self._bg_color = QtCore.Qt.transparent + self._use_checker = True + self._checker_color_1 = QtGui.QColor(89, 89, 89) + self._checker_color_2 = QtGui.QColor(188, 187, 187) + + def set_background_color(self, color): + self._bg_color = color + self.repaint() + + def set_use_checkboard(self, use_checker): + if self._use_checker is use_checker: + return + self._use_checker = use_checker + self.repaint() + + def set_checker_colors(self, color_1, color_2): + self._checker_color_1 = color_1 + self._checker_color_2 = color_2 + self.repaint() + + def set_border_color(self, color): + """Change border color. + + Args: + color (QtGui.QColor): Color to set. + """ + + self._border_color = color + self._default_pix = None + self.clear_cache() + + def set_thumbnail_bg_color(self, color): + """Change background color. + + Args: + color (QtGui.QColor): Color to set. + """ + + self._thumbnail_bg_color = color + self.clear_cache() + + @property + def has_pixes(self): + """Has set thumbnails. + + Returns: + bool: True if widget has thumbnails to paint. + """ + + return self._has_pixes + + def clear_cache(self): + """Clear cache of resized thumbnails and repaint widget.""" + + self._cached_pix = None + self.repaint() + + def set_current_thumbnails(self, pixmaps=None): + """Set current thumbnails. + + Args: + pixmaps (Optional[List[QtGui.QPixmap]]): List of pixmaps. + """ + + self._current_pixes = pixmaps or None + self._has_pixes = self._current_pixes is not None + self.clear_cache() + + def set_current_thumbnail_paths(self, thumbnail_paths=None): + """Set current thumbnails. + + Set current thumbnails using paths to a files. + + Args: + thumbnail_paths (Optional[List[str]]): List of paths to thumbnail + sources. 
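+
+        Example (illustrative; passing None clears back to the default image):
+            >>> widget.set_current_thumbnail_paths(None)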
+        """
+
+        pixes = []
+        if thumbnail_paths:
+            for thumbnail_path in thumbnail_paths:
+                pixes.append(QtGui.QPixmap(thumbnail_path))
+
+        self.set_current_thumbnails(pixes)
+
+    def paintEvent(self, event):
+        if self._cached_pix is None:
+            self._cache_pix()
+
+        painter = QtGui.QPainter()
+        painter.begin(self)
+        painter.setRenderHint(QtGui.QPainter.Antialiasing)
+        painter.drawPixmap(0, 0, self._cached_pix)
+        painter.end()
+
+    def resizeEvent(self, event):
+        self._cached_pix = None
+        super(ThumbnailPainterWidget, self).resizeEvent(event)
+
+    def _get_default_pix(self):
+        if self._default_pix is None:
+            # Use the same bundled image as in the constructor
+            default_image = get_image("thumbnail.png")
+            default_pix = paint_image_with_color(
+                default_image, self._border_color)
+            self._default_pix = default_pix
+        return self._default_pix
+
+    def _paint_tile(self, width, height):
+        if not self._use_checker:
+            tile_pix = QtGui.QPixmap(width, height)
+            tile_pix.fill(self._bg_color)
+            return tile_pix
+
+        checker_size = int(float(width) / self.checker_boxes_count)
+        if checker_size < 1:
+            checker_size = 1
+
+        checker_pix = QtGui.QPixmap(checker_size * 2, checker_size * 2)
+        checker_pix.fill(QtCore.Qt.transparent)
+        checker_painter = QtGui.QPainter()
+        checker_painter.begin(checker_pix)
+        checker_painter.setPen(QtCore.Qt.NoPen)
+        checker_painter.setBrush(self._checker_color_1)
+        checker_painter.drawRect(
+            0, 0, checker_pix.width(), checker_pix.height()
+        )
+        checker_painter.setBrush(self._checker_color_2)
+        checker_painter.drawRect(
+            0, 0, checker_size, checker_size
+        )
+        checker_painter.drawRect(
+            checker_size, checker_size, checker_size, checker_size
+        )
+        checker_painter.end()
+        return checker_pix
+
+    def _paint_default_pix(self, pix_width, pix_height):
+        full_border_width = 2 * self.border_width
+        width = pix_width - full_border_width
+        height = pix_height - full_border_width
+        if width > 100:
+            width = int(width * 0.6)
+            height = int(height * 0.6)
+
+        scaled_pix = self._get_default_pix().scaled(
+            width,
+            height,
+            QtCore.Qt.KeepAspectRatio,
+            QtCore.Qt.SmoothTransformation
+        )
+        pos_x = int(
+            (pix_width - scaled_pix.width()) / 2
+        )
+        pos_y = int(
+            (pix_height - scaled_pix.height()) / 2
+        )
+        new_pix = QtGui.QPixmap(pix_width, pix_height)
+        new_pix.fill(QtCore.Qt.transparent)
+        pix_painter = QtGui.QPainter()
+        pix_painter.begin(new_pix)
+        render_hints = (
+            QtGui.QPainter.Antialiasing
+            | QtGui.QPainter.SmoothPixmapTransform
+        )
+        # 'HighQualityAntialiasing' was removed in Qt6, hence the guard
+        if hasattr(QtGui.QPainter, "HighQualityAntialiasing"):
+            render_hints |= QtGui.QPainter.HighQualityAntialiasing
+
+        pix_painter.setRenderHints(render_hints)
+        pix_painter.drawPixmap(pos_x, pos_y, scaled_pix)
+        pix_painter.end()
+        return new_pix
+
+    def _draw_thumbnails(self, thumbnails, pix_width, pix_height):
+        full_border_width = 2 * self.border_width
+
+        checker_pix = self._paint_tile(pix_width, pix_height)
+
+        backgrounded_images = []
+        for src_pix in thumbnails:
+            scaled_pix = src_pix.scaled(
+                pix_width - full_border_width,
+                pix_height - full_border_width,
+                QtCore.Qt.KeepAspectRatio,
+                QtCore.Qt.SmoothTransformation
+            )
+            pos_x = int(
+                (pix_width - scaled_pix.width()) / 2
+            )
+            pos_y = int(
+                (pix_height - scaled_pix.height()) / 2
+            )
+
+            new_pix = QtGui.QPixmap(pix_width, pix_height)
+            new_pix.fill(QtCore.Qt.transparent)
+            pix_painter = QtGui.QPainter()
+            pix_painter.begin(new_pix)
+            render_hints = (
+                QtGui.QPainter.Antialiasing
+                | QtGui.QPainter.SmoothPixmapTransform
+            )
+            if hasattr(QtGui.QPainter, "HighQualityAntialiasing"):
+                render_hints |= QtGui.QPainter.HighQualityAntialiasing
+
+            pix_painter.setRenderHints(render_hints)
+
+            tiled_rect = QtCore.QRectF(
+                pos_x, pos_y, scaled_pix.width(), scaled_pix.height()
+            )
+            pix_painter.drawTiledPixmap(
+                tiled_rect,
+                checker_pix,
+                QtCore.QPointF(0.0, 0.0)
+            )
+            pix_painter.drawPixmap(pos_x, pos_y, scaled_pix)
+            pix_painter.end()
+            backgrounded_images.append(new_pix)
+        return backgrounded_images
+
+    def _paint_dash_line(self, painter, rect):
+        pen = QtGui.QPen()
+        pen.setWidth(1)
+        pen.setBrush(QtCore.Qt.darkGray)
+        pen.setStyle(QtCore.Qt.DashLine)
+
+        new_rect = rect.adjusted(1, 1, -1, -1)
+        painter.setPen(pen)
+        painter.setBrush(QtCore.Qt.transparent)
+        painter.drawRect(new_rect)
+
+    def _cache_pix(self):
+        rect = self.rect()
+        rect_width = rect.width()
+        rect_height = rect.height()
+
+        pix_x_offset = 0
+        pix_y_offset = 0
+        expected_height = int(
+            (rect_width / self.width_ratio) * self.height_ratio
+        )
+        if expected_height > rect_height:
+            expected_height = rect_height
+            expected_width = int(
+                (rect_height / self.height_ratio) * self.width_ratio
+            )
+            pix_x_offset = (rect_width - expected_width) / 2
+        else:
+            expected_width = rect_width
+            pix_y_offset = (rect_height - expected_height) / 2
+
+        if self._current_pixes is None:
+            used_default_pix = True
+            pixes_to_draw = None
+            pixes_len = 1
+        else:
+            used_default_pix = False
+            pixes_to_draw = self._current_pixes
+            # Paint at most 'max_thumbnails' of the set thumbnails
+            if len(pixes_to_draw) > self.max_thumbnails:
+                pixes_to_draw = pixes_to_draw[:self.max_thumbnails]
+            pixes_len = len(pixes_to_draw)
+
+        width_offset, height_offset = self._get_pix_offset_size(
+            expected_width, expected_height, pixes_len
+        )
+        pix_width = expected_width - width_offset
+        pix_height = expected_height - height_offset
+
+        if used_default_pix:
+            thumbnail_images = [self._paint_default_pix(pix_width, pix_height)]
+        else:
+            thumbnail_images = self._draw_thumbnails(
+                pixes_to_draw, pix_width, pix_height
+            )
+
+        if pixes_len == 1:
+            width_offset_part = 0
+            height_offset_part = 0
+        else:
+            width_offset_part = int(float(width_offset) / (pixes_len - 1))
+            height_offset_part = int(float(height_offset) / (pixes_len - 1))
+        full_width_offset = width_offset + pix_x_offset
+
+        final_pix = QtGui.QPixmap(rect_width, rect_height)
+        final_pix.fill(QtCore.Qt.transparent)
+
+        bg_pen = QtGui.QPen()
+        bg_pen.setWidth(self.border_width)
+        bg_pen.setColor(self._border_color)
+
+        final_painter = QtGui.QPainter()
+        final_painter.begin(final_pix)
+        render_hints = (
+            QtGui.QPainter.Antialiasing
+            | QtGui.QPainter.SmoothPixmapTransform
+        )
+        if hasattr(QtGui.QPainter, "HighQualityAntialiasing"):
+            render_hints |= QtGui.QPainter.HighQualityAntialiasing
+
+        final_painter.setRenderHints(render_hints)
+
+        final_painter.setBrush(QtGui.QBrush(self._thumbnail_bg_color))
+        final_painter.setPen(bg_pen)
+        final_painter.drawRect(rect)
+
+        for idx, pix in enumerate(thumbnail_images):
+            x_offset = full_width_offset - (width_offset_part * idx)
+            y_offset = (height_offset_part * idx) + pix_y_offset
+            final_painter.drawPixmap(int(x_offset), int(y_offset), pix)
+
+        # Draw drop enabled dashes
+        if used_default_pix:
+            self._paint_dash_line(final_painter, rect)
+
+        final_painter.end()
+
+        self._cached_pix = final_pix
+
+    def _get_pix_offset_size(self, width, height, image_count):
+        if image_count == 1:
+            return 0, 0
+
+        # Return integers so dependent pixmap sizes stay valid int values
+        part_width = int(width / self.offset_sep)
+        part_height = int(height / self.offset_sep)
+        return part_width, part_height
diff --git a/openpype/tools/utils/views.py b/openpype/tools/utils/views.py
index 01919d6745..596a47ede9 100644
--- a/openpype/tools/utils/views.py
+++ b/openpype/tools/utils/views.py
@@ -1,4 +1,6 @@
 from openpype.resources import get_image_path
+from openpype.tools.flickcharm import FlickCharm
+
 from qtpy import QtWidgets, QtCore, QtGui, QtSvg
 
 
@@ -57,3 +59,63 @@ class TreeViewSpinner(QtWidgets.QTreeView):
             self.paint_empty(event)
         else:
             super(TreeViewSpinner, self).paintEvent(event)
+
+
+class TreeView(QtWidgets.QTreeView):
+    """Ultimate TreeView with flick charm and double click signals.
+
+    The view has a deselectable mode, which allows deselecting items by
+    clicking into an empty area of the view.
+
+    Todos:
+        Add refresh animation.
+    """
+
+    double_clicked = QtCore.Signal(QtGui.QMouseEvent)
+
+    def __init__(self, *args, **kwargs):
+        super(TreeView, self).__init__(*args, **kwargs)
+        self._deselectable = False
+
+        self._flick_charm_activated = False
+        self._flick_charm = FlickCharm(parent=self)
+        self._before_flick_scroll_mode = None
+
+    def is_deselectable(self):
+        return self._deselectable
+
+    def set_deselectable(self, deselectable):
+        self._deselectable = deselectable
+
+    deselectable = property(is_deselectable, set_deselectable)
+
+    def mousePressEvent(self, event):
+        if self._deselectable:
+            index = self.indexAt(event.pos())
+            if not index.isValid():
+                # clear the selection
+                self.clearSelection()
+                # clear the current index
+                self.setCurrentIndex(QtCore.QModelIndex())
+        super(TreeView, self).mousePressEvent(event)
+
+    def mouseDoubleClickEvent(self, event):
+        self.double_clicked.emit(event)
+
+        return super(TreeView, self).mouseDoubleClickEvent(event)
+
+    def activate_flick_charm(self):
+        if self._flick_charm_activated:
+            return
+        self._flick_charm_activated = True
+        self._before_flick_scroll_mode = self.verticalScrollMode()
+        self._flick_charm.activateOn(self)
+        self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
+
+    def deactivate_flick_charm(self):
+        if not self._flick_charm_activated:
+            return
+        self._flick_charm_activated = False
+        self._flick_charm.deactivateFrom(self)
+        if self._before_flick_scroll_mode is not None:
+            self.setVerticalScrollMode(self._before_flick_scroll_mode)
diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py
index 5a8104611b..9223afecaa 100644
--- a/openpype/tools/utils/widgets.py
+++ b/openpype/tools/utils/widgets.py
@@ -6,10 +6,13 @@ import qtawesome
 
 from openpype.style import (
     get_objected_colors,
-    get_style_image_path
+    get_style_image_path,
+    get_default_tools_icon_color,
 )
 from openpype.lib.attribute_definitions import AbstractAttrDef
 
+from .lib import get_qta_icon_by_name_and_color
+
 log = logging.getLogger(__name__)
 
 
@@ -410,6 +413,20 @@ class PixmapButtonPainter(QtWidgets.QWidget):
 
         self._pixmap = pixmap
         self._cached_pixmap = None
+        self._disabled = False
+
+    def resizeEvent(self, event):
+        super(PixmapButtonPainter, self).resizeEvent(event)
+        self._cached_pixmap = None
+        self.repaint()
+
+    def set_enabled(self, enabled):
+        # Repaint only when the disabled state actually changes
+        disabled = not enabled
+        if self._disabled == disabled:
+            return
+        self._disabled = disabled
+        self.repaint()
 
     def set_pixmap(self, pixmap):
         self._pixmap = pixmap
@@ -444,6 +461,8 @@
         if self._cached_pixmap is None:
             self._cache_pixmap()
 
+        if self._disabled:
+            painter.setOpacity(0.5)
         painter.drawPixmap(0, 0, self._cached_pixmap)
 
         painter.end()
@@ -464,6 +483,10 @@
         layout.setContentsMargins(*args)
         self._update_painter_geo()
 
+    def setEnabled(self, enabled):
+        self._button_painter.set_enabled(enabled)
+        super(PixmapButton, self).setEnabled(enabled)
+
     def set_pixmap(self, pixmap):
        
self._button_painter.set_pixmap(pixmap) @@ -759,3 +780,77 @@ class SeparatorWidget(QtWidgets.QFrame): self._orientation = orientation self._set_size(self._size) + + +def get_refresh_icon(): + return get_qta_icon_by_name_and_color( + "fa.refresh", get_default_tools_icon_color() + ) + + +def get_go_to_current_icon(): + return get_qta_icon_by_name_and_color( + "fa.arrow-down", get_default_tools_icon_color() + ) + + +class VerticalExpandButton(QtWidgets.QPushButton): + """Button which is expanding vertically. + + By default, button is a little bit smaller than other widgets like + QLineEdit. This button is expanding vertically to match size of + other widgets, next to it. + """ + + def __init__(self, parent=None): + super(VerticalExpandButton, self).__init__(parent) + + sp = self.sizePolicy() + sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + self.setSizePolicy(sp) + + +class SquareButton(QtWidgets.QPushButton): + """Make button square shape. + + Change width to match height on resize. + """ + + def __init__(self, *args, **kwargs): + super(SquareButton, self).__init__(*args, **kwargs) + + sp = self.sizePolicy() + sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + sp.setHorizontalPolicy(QtWidgets.QSizePolicy.Minimum) + self.setSizePolicy(sp) + self._ideal_width = None + + def showEvent(self, event): + super(SquareButton, self).showEvent(event) + self._ideal_width = self.height() + self.updateGeometry() + + def resizeEvent(self, event): + super(SquareButton, self).resizeEvent(event) + self._ideal_width = self.height() + self.updateGeometry() + + def sizeHint(self): + sh = super(SquareButton, self).sizeHint() + ideal_width = self._ideal_width + if ideal_width is None: + ideal_width = sh.height() + sh.setWidth(ideal_width) + return sh + + +class RefreshButton(VerticalExpandButton): + def __init__(self, parent=None): + super(RefreshButton, self).__init__(parent) + self.setIcon(get_refresh_icon()) + + +class GoToCurrentButton(VerticalExpandButton): + def __init__(self, parent=None): + super(GoToCurrentButton, self).__init__(parent) + self.setIcon(get_go_to_current_icon()) diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index 2f338cf516..e4715a0340 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -21,6 +21,7 @@ from openpype.pipeline import ( registered_host, legacy_io, Anatomy, + get_current_project_name, ) from openpype.pipeline.context_tools import ( compute_session_changes, @@ -99,7 +100,7 @@ class FilesWidget(QtWidgets.QWidget): self._task_type = None # Pype's anatomy object for current project - project_name = legacy_io.Session["AVALON_PROJECT"] + project_name = get_current_project_name() self.anatomy = Anatomy(project_name) self.project_name = project_name # Template key used to get work template from anatomy templates diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py index 9f1d1060da..7052eaed06 100644 --- a/openpype/tools/workfiles/save_as_dialog.py +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -12,6 +12,7 @@ from openpype.pipeline import ( from openpype.pipeline.workfile import get_last_workfile_with_version from openpype.pipeline.template_data import get_template_data_with_names from openpype.tools.utils import PlaceholderLineEdit +from openpype.pipeline import version_start, get_current_host_name log = logging.getLogger(__name__) @@ -218,7 +219,15 @@ class SaveAsDialog(QtWidgets.QDialog): # Version number input 
version_input = QtWidgets.QSpinBox(version_widget) - version_input.setMinimum(1) + version_input.setMinimum( + version_start.get_versioning_start( + self.data["project"]["name"], + get_current_host_name(), + task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) + ) version_input.setMaximum(9999) # Last version checkbox @@ -420,7 +429,13 @@ class SaveAsDialog(QtWidgets.QDialog): )[1] if version is None: - version = 1 + version = version_start.get_versioning_start( + data["project"]["name"], + get_current_host_name(), + task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/tools/workfiles/window.py b/openpype/tools/workfiles/window.py index 53f8894665..50c39d4a40 100644 --- a/openpype/tools/workfiles/window.py +++ b/openpype/tools/workfiles/window.py @@ -15,7 +15,12 @@ from openpype.client.operations import ( ) from openpype import style from openpype import resources -from openpype.pipeline import Anatomy +from openpype.pipeline import ( + Anatomy, + get_current_project_name, + get_current_asset_name, + get_current_task_name, +) from openpype.pipeline import legacy_io from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget from openpype.tools.utils.tasks_widget import TasksWidget @@ -285,8 +290,8 @@ class Window(QtWidgets.QWidget): if use_context is None or use_context is True: context = { - "asset": legacy_io.Session["AVALON_ASSET"], - "task": legacy_io.Session["AVALON_TASK"] + "asset": get_current_asset_name(), + "task": get_current_task_name() } self.set_context(context) @@ -296,7 +301,7 @@ class Window(QtWidgets.QWidget): @property def project_name(self): - return legacy_io.Session["AVALON_PROJECT"] + return get_current_project_name() def showEvent(self, event): super(Window, self).showEvent(event) @@ -325,7 +330,7 @@ class Window(QtWidgets.QWidget): workfile_doc = None if asset_id and task_name and filepath: filename = os.path.split(filepath)[1] - project_name = legacy_io.active_project() + project_name = self.project_name workfile_doc = get_workfile_info( project_name, asset_id, task_name, filename ) @@ -356,7 +361,7 @@ class Window(QtWidgets.QWidget): if not update_data: return - project_name = legacy_io.active_project() + project_name = self.project_name session = OperationsSession() session.update_entity( @@ -373,7 +378,7 @@ class Window(QtWidgets.QWidget): return filename = os.path.split(filepath)[1] - project_name = legacy_io.active_project() + project_name = self.project_name return get_workfile_info( project_name, asset_id, task_name, filename ) @@ -385,7 +390,7 @@ class Window(QtWidgets.QWidget): workdir, filename = os.path.split(filepath) - project_name = legacy_io.active_project() + project_name = self.project_name asset_id = self.assets_widget.get_selected_asset_id() task_name = self.tasks_widget.get_selected_task_name() diff --git a/openpype/vendor/python/common/ayon_api/__init__.py b/openpype/vendor/python/common/ayon_api/__init__.py new file mode 100644 index 0000000000..dc3d361f46 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/__init__.py @@ -0,0 +1,402 @@ +from .version import __version__ +from .utils import ( + TransferProgress, + slugify_string, + create_dependency_package_basename, +) +from .server_api import ( + ServerAPI, +) + +from ._api import ( + GlobalServerAPI, + ServiceContext, + + init_service, + get_service_name, + get_service_addon_name, + get_service_addon_version, + get_service_addon_settings, + 
+ is_connection_created, + create_connection, + close_connection, + change_token, + set_environments, + get_server_api_connection, + get_site_id, + set_site_id, + get_client_version, + set_client_version, + get_default_settings_variant, + set_default_settings_variant, + get_sender, + set_sender, + + get_base_url, + get_rest_url, + + raw_get, + raw_post, + raw_put, + raw_patch, + raw_delete, + + get, + post, + put, + patch, + delete, + + get_timeout, + set_timeout, + get_max_retries, + set_max_retries, + + get_event, + get_events, + dispatch_event, + update_event, + enroll_event_job, + + download_file, + upload_file, + + query_graphql, + + get_addons_info, + get_addon_url, + download_addon_private_file, + + get_installers, + create_installer, + update_installer, + delete_installer, + download_installer, + upload_installer, + + get_dependencies_info, + update_dependency_info, + get_dependency_packages, + create_dependency_package, + update_dependency_package, + delete_dependency_package, + + download_dependency_package, + upload_dependency_package, + + upload_addon_zip, + + get_bundles, + create_bundle, + update_bundle, + delete_bundle, + + get_info, + get_server_version, + get_server_version_tuple, + get_user, + get_users, + + get_attributes_for_type, + get_attributes_fields_for_type, + get_default_fields_for_type, + + get_project_anatomy_preset, + get_project_anatomy_presets, + get_project_roots_by_site, + get_project_roots_for_site, + + get_addon_site_settings_schema, + get_addon_settings_schema, + + get_addon_studio_settings, + get_addon_project_settings, + get_addon_settings, + get_bundle_settings, + get_addons_studio_settings, + get_addons_project_settings, + get_addons_settings, + + get_secrets, + get_secret, + save_secret, + delete_secret, + + get_project_names, + get_projects, + get_project, + create_project, + update_project, + delete_project, + + get_folder_by_id, + get_folder_by_name, + get_folder_by_path, + get_folders, + get_folders_hierarchy, + + get_tasks, + get_task_by_id, + get_task_by_name, + + get_folder_ids_with_products, + get_product_by_id, + get_product_by_name, + get_products, + get_product_types, + get_project_product_types, + get_product_type_names, + + get_version_by_id, + get_version_by_name, + version_is_latest, + get_versions, + get_hero_version_by_product_id, + get_hero_version_by_id, + get_hero_versions, + get_last_versions, + get_last_version_by_product_id, + get_last_version_by_product_name, + get_representation_by_id, + get_representation_by_name, + get_representations, + get_representations_parents, + get_representation_parents, + get_repre_ids_by_context_filters, + + get_workfiles_info, + get_workfile_info, + get_workfile_info_by_id, + + get_thumbnail_by_id, + get_thumbnail, + get_folder_thumbnail, + get_version_thumbnail, + get_workfile_thumbnail, + create_thumbnail, + update_thumbnail, + + get_full_link_type_name, + get_link_types, + get_link_type, + create_link_type, + delete_link_type, + make_sure_link_type_exists, + + create_link, + delete_link, + get_entities_links, + get_folder_links, + get_folders_links, + get_task_links, + get_tasks_links, + get_product_links, + get_products_links, + get_version_links, + get_versions_links, + get_representations_links, + get_representation_links, + + send_batch_operations, +) + + +__all__ = ( + "__version__", + + "TransferProgress", + "slugify_string", + "create_dependency_package_basename", + + "ServerAPI", + + "GlobalServerAPI", + "ServiceContext", + + "init_service", + "get_service_name", + 
"get_service_addon_name", + "get_service_addon_version", + "get_service_addon_settings", + + "is_connection_created", + "create_connection", + "close_connection", + "change_token", + "set_environments", + "get_server_api_connection", + "get_site_id", + "set_site_id", + "get_client_version", + "set_client_version", + "get_default_settings_variant", + "set_default_settings_variant", + "get_sender", + "set_sender", + + "get_base_url", + "get_rest_url", + + "raw_get", + "raw_post", + "raw_put", + "raw_patch", + "raw_delete", + + "get", + "post", + "put", + "patch", + "delete", + + "get_timeout", + "set_timeout", + "get_max_retries", + "set_max_retries", + + "get_event", + "get_events", + "dispatch_event", + "update_event", + "enroll_event_job", + + "download_file", + "upload_file", + + "query_graphql", + + "get_addons_info", + "get_addon_url", + "download_addon_private_file", + + "get_installers", + "create_installer", + "update_installer", + "delete_installer", + "download_installer", + "upload_installer", + + "get_dependencies_info", + "update_dependency_info", + "get_dependency_packages", + "create_dependency_package", + "update_dependency_package", + "delete_dependency_package", + + "download_dependency_package", + "upload_dependency_package", + + "upload_addon_zip", + + "get_bundles", + "create_bundle", + "update_bundle", + "delete_bundle", + + "get_info", + "get_server_version", + "get_server_version_tuple", + "get_user", + "get_users", + + "get_attributes_for_type", + "get_attributes_fields_for_type", + "get_default_fields_for_type", + + "get_project_anatomy_preset", + "get_project_anatomy_presets", + "get_project_roots_by_site", + "get_project_roots_for_site", + + "get_addon_site_settings_schema", + "get_addon_settings_schema", + "get_addon_studio_settings", + "get_addon_project_settings", + "get_addon_settings", + "get_bundle_settings", + "get_addons_studio_settings", + "get_addons_project_settings", + "get_addons_settings", + + "get_secrets", + "get_secret", + "save_secret", + "delete_secret", + + "get_project_names", + "get_projects", + "get_project", + "create_project", + "update_project", + "delete_project", + + "get_folder_by_id", + "get_folder_by_name", + "get_folder_by_path", + "get_folders", + + "get_tasks", + "get_task_by_id", + "get_task_by_name", + + "get_folder_ids_with_products", + "get_product_by_id", + "get_product_by_name", + "get_products", + "get_product_types", + "get_project_product_types", + "get_product_type_names", + + "get_version_by_id", + "get_version_by_name", + "version_is_latest", + "get_versions", + "get_hero_version_by_product_id", + "get_hero_version_by_id", + "get_hero_versions", + "get_last_versions", + "get_last_version_by_product_id", + "get_last_version_by_product_name", + "get_representation_by_id", + "get_representation_by_name", + "get_representations", + "get_representations_parents", + "get_representation_parents", + "get_repre_ids_by_context_filters", + + "get_workfiles_info", + "get_workfile_info", + "get_workfile_info_by_id", + + "get_thumbnail_by_id", + "get_thumbnail", + "get_folder_thumbnail", + "get_version_thumbnail", + "get_workfile_thumbnail", + "create_thumbnail", + "update_thumbnail", + + "get_full_link_type_name", + "get_link_types", + "get_link_type", + "create_link_type", + "delete_link_type", + "make_sure_link_type_exists", + + "create_link", + "delete_link", + "get_entities_links", + "get_folder_links", + "get_folders_links", + "get_task_links", + "get_tasks_links", + "get_product_links", + "get_products_links", + 
"get_version_links", + "get_versions_links", + "get_representations_links", + "get_representation_links", + + "send_batch_operations", +) diff --git a/openpype/vendor/python/common/ayon_api/_api.py b/openpype/vendor/python/common/ayon_api/_api.py new file mode 100644 index 0000000000..22e137d6e5 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/_api.py @@ -0,0 +1,1264 @@ +"""Singleton based server api for direct access. + +This implementation will be probably the most used part of package. Gives +option to have singleton connection to Server URL based on environment variable +values. All public functions and classes are imported in '__init__.py' so +they're available directly in top module import. +""" + +import os +import socket + +from .constants import ( + SERVER_URL_ENV_KEY, + SERVER_API_ENV_KEY, +) +from .server_api import ServerAPI +from .exceptions import FailedServiceInit + + +class GlobalServerAPI(ServerAPI): + """Extended server api which also handles storing tokens and url. + + Created object expect to have set environment variables + 'AYON_SERVER_URL'. Also is expecting filled 'AYON_API_KEY' + but that can be filled afterwards with calling 'login' method. + """ + + def __init__( + self, + site_id=None, + client_version=None, + default_settings_variant=None, + ssl_verify=None, + cert=None, + ): + url = self.get_url() + token = self.get_token() + + super(GlobalServerAPI, self).__init__( + url, + token, + site_id, + client_version, + default_settings_variant, + ssl_verify, + cert, + # We want to make sure that server and api key validation + # happens all the time in 'GlobalServerAPI'. + create_session=False, + ) + self.validate_server_availability() + self.create_session() + + def login(self, username, password): + """Login to the server or change user. + + If user is the same as current user and token is available the + login is skipped. + """ + + previous_token = self._access_token + super(GlobalServerAPI, self).login(username, password) + if self.has_valid_token and previous_token != self._access_token: + os.environ[SERVER_API_ENV_KEY] = self._access_token + + @staticmethod + def get_url(): + return os.environ.get(SERVER_URL_ENV_KEY) + + @staticmethod + def get_token(): + return os.environ.get(SERVER_API_ENV_KEY) + + @staticmethod + def set_environments(url, token): + """Change url and token environemnts in currently running process. + + Args: + url (str): New server url. + token (str): User's token. + """ + + os.environ[SERVER_URL_ENV_KEY] = url or "" + os.environ[SERVER_API_ENV_KEY] = token or "" + + +class GlobalContext: + """Singleton connection holder. + + Goal is to avoid create connection on import which can be dangerous in + some cases. 
+    """
+
+    _connection = None
+
+    @classmethod
+    def is_connection_created(cls):
+        return cls._connection is not None
+
+    @classmethod
+    def change_token(cls, url, token):
+        GlobalServerAPI.set_environments(url, token)
+        if cls._connection is None:
+            return
+
+        if cls._connection.get_base_url() == url:
+            cls._connection.set_token(token)
+        else:
+            cls.close_connection()
+
+    @classmethod
+    def close_connection(cls):
+        if cls._connection is not None:
+            cls._connection.close_session()
+            cls._connection = None
+
+    @classmethod
+    def create_connection(cls, *args, **kwargs):
+        if cls._connection is not None:
+            cls.close_connection()
+        cls._connection = GlobalServerAPI(*args, **kwargs)
+        return cls._connection
+
+    @classmethod
+    def get_server_api_connection(cls):
+        if cls._connection is None:
+            cls.create_connection()
+        return cls._connection
+
+
+class ServiceContext:
+    """Helper for services running under server.
+
+    When a service is running under the server, the process receives
+    information about the connection from environment variables. This class
+    helps to initialize the values without knowing the environment variables
+    (that may change over time).
+
+    All that must be done is to call the 'init_service' function/method. The
+    arguments are for cases when the service is running in a specific
+    environment and their values are e.g. loaded from a private file, or for
+    testing purposes.
+    """
+
+    token = None
+    server_url = None
+    addon_name = None
+    addon_version = None
+    service_name = None
+
+    @classmethod
+    def init_service(
+        cls,
+        token=None,
+        server_url=None,
+        addon_name=None,
+        addon_version=None,
+        service_name=None,
+        connect=True
+    ):
+        token = token or os.environ.get("AYON_API_KEY")
+        server_url = server_url or os.environ.get("AYON_SERVER_URL")
+        if not server_url:
+            raise FailedServiceInit("URL to server is not set")
+
+        if not token:
+            raise FailedServiceInit(
+                "Token to server {} is not set".format(server_url)
+            )
+
+        addon_name = addon_name or os.environ.get("AYON_ADDON_NAME")
+        addon_version = addon_version or os.environ.get("AYON_ADDON_VERSION")
+        service_name = service_name or os.environ.get("AYON_SERVICE_NAME")
+
+        cls.token = token
+        cls.server_url = server_url
+        cls.addon_name = addon_name
+        cls.addon_version = addon_version
+        cls.service_name = service_name or socket.gethostname()
+
+        # Make sure required environments for GlobalServerAPI are set
+        GlobalServerAPI.set_environments(cls.server_url, cls.token)
+
+        if connect:
+            print("Connecting to server \"{}\"".format(server_url))
+            con = GlobalContext.get_server_api_connection()
+            user = con.get_user()
+            print("Logged in as user \"{}\"".format(user["name"]))
+
+
+def init_service(*args, **kwargs):
+    """Initialize current connection from service.
+
+    The service expects specific environment variables. The variables must
+    all be set to make the connection work as a service.
+    """
+
+    ServiceContext.init_service(*args, **kwargs)
+
+
+def get_service_addon_name():
+    """Name of addon which initialized service connection.
+
+    Service context must be initialized to be able to use this function. Call
+    'init_service' on your service start to do so.
+
+    Returns:
+        Union[str, None]: Name of addon or None.
+    """
+
+    return ServiceContext.addon_name
+
+
+def get_service_addon_version():
+    """Version of addon which initialized service connection.
+
+    Service context must be initialized to be able to use this function. Call
+    'init_service' on your service start to do so.
+
+    Returns:
+        Union[str, None]: Version of addon or None.
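+
+    Example (illustrative; the returned version string is hypothetical):
+        >>> get_service_addon_version()
+        '1.0.0'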
+    """
+
+    return ServiceContext.addon_version
+
+
+def get_service_name():
+    """Name of service.
+
+    Service context must be initialized to be able to use this function. Call
+    'init_service' on your service start to do so.
+
+    Returns:
+        Union[str, None]: Name of service if service was registered.
+    """
+
+    return ServiceContext.service_name
+
+
+def get_service_addon_settings():
+    """Addon settings of service which initialized service.
+
+    Service context must be initialized to be able to use this function. Call
+    'init_service' on your service start to do so.
+
+    Returns:
+        Dict[str, Any]: Addon settings.
+
+    Raises:
+        ValueError: When service was not initialized.
+    """
+
+    addon_name = get_service_addon_name()
+    addon_version = get_service_addon_version()
+    if addon_name is None or addon_version is None:
+        raise ValueError("Service is not initialized")
+    return get_addon_settings(addon_name, addon_version)
+
+
+def is_connection_created():
+    """Is global connection created.
+
+    Returns:
+        bool: True if the global connection was created.
+    """
+
+    return GlobalContext.is_connection_created()
+
+
+def create_connection(site_id=None, client_version=None):
+    """Create global connection.
+
+    Args:
+        site_id (str): Machine site id/name.
+        client_version (str): Desktop app version.
+
+    Returns:
+        GlobalServerAPI: Created connection.
+    """
+
+    return GlobalContext.create_connection(site_id, client_version)
+
+
+def close_connection():
+    """Close the global connection if it is connected."""
+
+    GlobalContext.close_connection()
+
+
+def change_token(url, token):
+    """Change connection token for url.
+
+    This function can also be used to change the url.
+
+    Args:
+        url (str): Server url.
+        token (str): API key token.
+    """
+
+    GlobalContext.change_token(url, token)
+
+
+def set_environments(url, token):
+    """Set global environments for global connection.
+
+    Args:
+        url (Union[str, None]): Url to server or None to unset environments.
+        token (Union[str, None]): API key token to be used for connection.
+    """
+
+    GlobalServerAPI.set_environments(url, token)
+
+
+def get_server_api_connection():
+    """Access to global scope object of GlobalServerAPI.
+
+    This access expects the environment variables 'AYON_SERVER_URL' and
+    'AYON_API_KEY' to be set.
+
+    Returns:
+        GlobalServerAPI: Object of connection to server.
+    """
+
+    return GlobalContext.get_server_api_connection()
+
+
+def get_site_id():
+    con = get_server_api_connection()
+    return con.get_site_id()
+
+
+def set_site_id(site_id):
+    """Set site id of already connected client connection.
+
+    Site id is human-readable machine id used in AYON desktop application.
+
+    Args:
+        site_id (Union[str, None]): Site id used in connection.
+    """
+
+    con = get_server_api_connection()
+    con.set_site_id(site_id)
+
+
+def get_client_version():
+    """Version of client used to connect to server.
+
+    Client version is the version of the AYON desktop application build.
+
+    Returns:
+        str: Client version string used in connection.
+    """
+
+    con = get_server_api_connection()
+    return con.get_client_version()
+
+
+def set_client_version(client_version):
+    """Set version of already connected client connection.
+
+    Client version is version of AYON desktop application.
+
+    Args:
+        client_version (Union[str, None]): Client version string.
+    """
+
+    con = get_server_api_connection()
+    con.set_client_version(client_version)
+
+
+def get_default_settings_variant():
+    """Default variant used for settings.
+
+    Returns:
+        Union[str, None]: name of variant or None.
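+
+    Example (illustrative; 'production' and 'staging' are typical variants):
+        >>> get_default_settings_variant()
+        'production'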
+    """
+
+    con = get_server_api_connection()
+    return con.get_default_settings_variant()
+
+
+def set_default_settings_variant(variant):
+    """Change default variant for addon settings.
+
+    Note:
+        It is recommended to set only 'production' or 'staging' variants
+        as default variant.
+
+    Args:
+        variant (Union[str, None]): Settings variant name.
+    """
+
+    con = get_server_api_connection()
+    return con.set_default_settings_variant(variant)
+
+
+def get_sender():
+    """Sender used to send requests.
+
+    Returns:
+        Union[str, None]: Sender name or None.
+    """
+
+    con = get_server_api_connection()
+    return con.get_sender()
+
+
+def set_sender(sender):
+    """Change sender used for requests.
+
+    Args:
+        sender (Union[str, None]): Sender name or None.
+    """
+
+    con = get_server_api_connection()
+    return con.set_sender(sender)
+
+
+def get_base_url():
+    con = get_server_api_connection()
+    return con.get_base_url()
+
+
+def get_rest_url():
+    con = get_server_api_connection()
+    return con.get_rest_url()
+
+
+def raw_get(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.raw_get(*args, **kwargs)
+
+
+def raw_post(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.raw_post(*args, **kwargs)
+
+
+def raw_put(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.raw_put(*args, **kwargs)
+
+
+def raw_patch(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.raw_patch(*args, **kwargs)
+
+
+def raw_delete(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.raw_delete(*args, **kwargs)
+
+
+def get(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get(*args, **kwargs)
+
+
+def post(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.post(*args, **kwargs)
+
+
+def put(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.put(*args, **kwargs)
+
+
+def patch(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.patch(*args, **kwargs)
+
+
+def delete(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.delete(*args, **kwargs)
+
+
+def get_timeout(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_timeout(*args, **kwargs)
+
+
+def set_timeout(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.set_timeout(*args, **kwargs)
+
+
+def get_max_retries(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_max_retries(*args, **kwargs)
+
+
+def set_max_retries(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.set_max_retries(*args, **kwargs)
+
+
+def get_event(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_event(*args, **kwargs)
+
+
+def get_events(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_events(*args, **kwargs)
+
+
+def dispatch_event(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.dispatch_event(*args, **kwargs)
+
+
+def update_event(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.update_event(*args, **kwargs)
+
+
+def enroll_event_job(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.enroll_event_job(*args, **kwargs)
+
+
+def download_file(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.download_file(*args, **kwargs)
+
+
+def upload_file(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.upload_file(*args, **kwargs)
+
+
+def query_graphql(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.query_graphql(*args, **kwargs)
+
+
+def get_users(*args, **kwargs):
+    con =
get_server_api_connection() + return con.get_users(*args, **kwargs) + + +def get_user(*args, **kwargs): + con = get_server_api_connection() + return con.get_user(*args, **kwargs) + + +def get_attributes_for_type(*args, **kwargs): + con = get_server_api_connection() + return con.get_attributes_for_type(*args, **kwargs) + + +def get_addons_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_addons_info(*args, **kwargs) + + +def get_addon_url(addon_name, addon_version, *subpaths): + con = get_server_api_connection() + return con.get_addon_url(addon_name, addon_version, *subpaths) + + +def download_addon_private_file(*args, **kwargs): + con = get_server_api_connection() + return con.download_addon_private_file(*args, **kwargs) + + +def get_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_info(*args, **kwargs) + + +def get_server_version(*args, **kwargs): + con = get_server_api_connection() + return con.get_server_version(*args, **kwargs) + + +def get_server_version_tuple(*args, **kwargs): + con = get_server_api_connection() + return con.get_server_version_tuple(*args, **kwargs) + + +# Installers +def get_installers(*args, **kwargs): + con = get_server_api_connection() + return con.get_installers(*args, **kwargs) + + +def create_installer(*args, **kwargs): + con = get_server_api_connection() + return con.create_installer(*args, **kwargs) + + +def update_installer(*args, **kwargs): + con = get_server_api_connection() + return con.update_installer(*args, **kwargs) + + +def delete_installer(*args, **kwargs): + con = get_server_api_connection() + return con.delete_installer(*args, **kwargs) + + +def download_installer(*args, **kwargs): + con = get_server_api_connection() + con.download_installer(*args, **kwargs) + + +def upload_installer(*args, **kwargs): + con = get_server_api_connection() + con.upload_installer(*args, **kwargs) + + +# Dependency packages +def get_dependencies_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_dependencies_info(*args, **kwargs) + + +def update_dependency_info(*args, **kwargs): + con = get_server_api_connection() + return con.update_dependency_info(*args, **kwargs) + + +def download_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.download_dependency_package(*args, **kwargs) + + +def upload_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.upload_dependency_package(*args, **kwargs) + + +def get_dependency_packages(*args, **kwargs): + con = get_server_api_connection() + return con.get_dependency_packages(*args, **kwargs) + + +def create_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.create_dependency_package(*args, **kwargs) + + +def update_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.update_dependency_package(*args, **kwargs) + + +def delete_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.delete_dependency_package(*args, **kwargs) + + +def upload_addon_zip(*args, **kwargs): + con = get_server_api_connection() + return con.upload_addon_zip(*args, **kwargs) + + +def get_project_anatomy_presets(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_anatomy_presets(*args, **kwargs) + + +def get_bundles(*args, **kwargs): + con = get_server_api_connection() + return con.get_bundles(*args, **kwargs) + + +def create_bundle(*args, **kwargs): + con = get_server_api_connection() + return 
con.create_bundle(*args, **kwargs)
+
+
+def update_bundle(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.update_bundle(*args, **kwargs)
+
+
+def delete_bundle(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.delete_bundle(*args, **kwargs)
+
+
+def get_project_anatomy_preset(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_project_anatomy_preset(*args, **kwargs)
+
+
+def get_project_roots_by_site(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_project_roots_by_site(*args, **kwargs)
+
+
+def get_project_roots_for_site(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_project_roots_for_site(*args, **kwargs)
+
+
+def get_addon_settings_schema(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_settings_schema(*args, **kwargs)
+
+
+def get_addon_site_settings_schema(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_site_settings_schema(*args, **kwargs)
+
+
+def get_addon_studio_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_studio_settings(*args, **kwargs)
+
+
+def get_addon_project_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_project_settings(*args, **kwargs)
+
+
+def get_addon_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_settings(*args, **kwargs)
+
+
+def get_addon_site_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addon_site_settings(*args, **kwargs)
+
+
+def get_bundle_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_bundle_settings(*args, **kwargs)
+
+
+def get_addons_studio_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addons_studio_settings(*args, **kwargs)
+
+
+def get_addons_project_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addons_project_settings(*args, **kwargs)
+
+
+def get_addons_settings(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_addons_settings(*args, **kwargs)
+
+
+def get_secrets(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_secrets(*args, **kwargs)
+
+
+def get_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_secret(*args, **kwargs)
+
+
+def save_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.save_secret(*args, **kwargs)
+
+
+def delete_secret(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.delete_secret(*args, **kwargs)
+
+
+def get_project_names(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_project_names(*args, **kwargs)
+
+
+def get_project(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_project(*args, **kwargs)
+
+
+def get_projects(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_projects(*args, **kwargs)
+
+
+def get_folders(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_folders(*args, **kwargs)
+
+
+def get_folders_hierarchy(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_folders_hierarchy(*args, **kwargs)
+
+
+def get_tasks(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_tasks(*args, **kwargs)
+
+
+def get_task_by_id(*args, **kwargs):
+    con = get_server_api_connection()
+    return con.get_task_by_id(*args, **kwargs)
+
+
+def get_task_by_name(*args, **kwargs):
+    con = get_server_api_connection()
+    
return con.get_task_by_name(*args, **kwargs) + + +def get_folder_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_id(*args, **kwargs) + + +def get_folder_by_path(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_path(*args, **kwargs) + + +def get_folder_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_name(*args, **kwargs) + + +def get_folder_ids_with_products(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_ids_with_products(*args, **kwargs) + + +def get_product_types(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_types(*args, **kwargs) + + +def get_project_product_types(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_product_types(*args, **kwargs) + + +def get_product_type_names(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_type_names(*args, **kwargs) + + +def get_products(*args, **kwargs): + con = get_server_api_connection() + return con.get_products(*args, **kwargs) + + +def get_product_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_by_id(*args, **kwargs) + + +def get_product_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_by_name(*args, **kwargs) + + +def get_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_versions(*args, **kwargs) + + +def get_version_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_version_by_id(*args, **kwargs) + + +def get_version_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_version_by_name(*args, **kwargs) + + +def get_hero_version_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_version_by_id(*args, **kwargs) + + +def get_hero_version_by_product_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_version_by_product_id(*args, **kwargs) + + +def get_hero_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_versions(*args, **kwargs) + + +def get_last_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_versions(*args, **kwargs) + + +def get_last_version_by_product_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_version_by_product_id(*args, **kwargs) + + +def get_last_version_by_product_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_version_by_product_name(*args, **kwargs) + + +def version_is_latest(*args, **kwargs): + con = get_server_api_connection() + return con.version_is_latest(*args, **kwargs) + + +def get_representations(*args, **kwargs): + con = get_server_api_connection() + return con.get_representations(*args, **kwargs) + + +def get_representation_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_by_id(*args, **kwargs) + + +def get_representation_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_by_name(*args, **kwargs) + + +def get_representation_parents(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_parents(*args, **kwargs) + + +def get_representations_parents(*args, **kwargs): + con = get_server_api_connection() + return con.get_representations_parents(*args, **kwargs) + + +def get_repre_ids_by_context_filters(*args, **kwargs): + con = 
get_server_api_connection() + return con.get_repre_ids_by_context_filters(*args, **kwargs) + + +def get_workfiles_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfiles_info(*args, **kwargs) + + +def get_workfile_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfile_info(*args, **kwargs) + + +def get_workfile_info_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfile_info_by_id(*args, **kwargs) + + +def create_project( + project_name, + project_code, + library_project=False, + preset_name=None +): + con = get_server_api_connection() + return con.create_project( + project_name, + project_code, + library_project, + preset_name + ) + + +def update_project(project_name, *args, **kwargs): + con = get_server_api_connection() + return con.update_project(project_name, *args, **kwargs) + + +def delete_project(project_name): + con = get_server_api_connection() + return con.delete_project(project_name) + + +def get_thumbnail_by_id(project_name, thumbnail_id): + con = get_server_api_connection() + return con.get_thumbnail_by_id(project_name, thumbnail_id) + + +def get_thumbnail(project_name, entity_type, entity_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_thumbnail(project_name, entity_type, entity_id, thumbnail_id) + + +def get_folder_thumbnail(project_name, folder_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_folder_thumbnail(project_name, folder_id, thumbnail_id) + + +def get_version_thumbnail(project_name, version_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_version_thumbnail(project_name, version_id, thumbnail_id) + + +def get_workfile_thumbnail(project_name, workfile_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_workfile_thumbnail(project_name, workfile_id, thumbnail_id) + + +def create_thumbnail(project_name, src_filepath, thumbnail_id=None): + con = get_server_api_connection() + return con.create_thumbnail(project_name, src_filepath, thumbnail_id) + + +def update_thumbnail(project_name, thumbnail_id, src_filepath): + con = get_server_api_connection() + return con.update_thumbnail(project_name, thumbnail_id, src_filepath) + + +def get_attributes_fields_for_type(entity_type): + con = get_server_api_connection() + return con.get_attributes_fields_for_type(entity_type) + + +def get_default_fields_for_type(entity_type): + con = get_server_api_connection() + return con.get_default_fields_for_type(entity_type) + + +def get_full_link_type_name(link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.get_full_link_type_name( + link_type_name, input_type, output_type) + + +def get_link_types(project_name): + con = get_server_api_connection() + return con.get_link_types(project_name) + + +def get_link_type(project_name, link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.get_link_type( + project_name, link_type_name, input_type, output_type) + + +def create_link_type( + project_name, link_type_name, input_type, output_type, data=None): + con = get_server_api_connection() + return con.create_link_type( + project_name, link_type_name, input_type, output_type, data=data) + + +def delete_link_type(project_name, link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.delete_link_type( + project_name, link_type_name, input_type, output_type) + + +def make_sure_link_type_exists( + project_name, 
link_type_name, input_type, output_type, data=None +): + con = get_server_api_connection() + return con.make_sure_link_type_exists( + project_name, link_type_name, input_type, output_type, data=data + ) + + +def create_link( + project_name, + link_type_name, + input_id, + input_type, + output_id, + output_type +): + con = get_server_api_connection() + return con.create_link( + project_name, + link_type_name, + input_id, input_type, + output_id, output_type + ) + + +def delete_link(project_name, link_id): + con = get_server_api_connection() + return con.delete_link(project_name, link_id) + + +def get_entities_links( + project_name, + entity_type, + entity_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_entities_links( + project_name, + entity_type, + entity_ids, + link_types, + link_direction + ) + + +def get_folders_links( + project_name, + folder_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_folders_links( + project_name, + folder_ids, + link_types, + link_direction + ) + + +def get_folder_links( + project_name, + folder_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_folder_links( + project_name, + folder_id, + link_types, + link_direction + ) + + +def get_tasks_links( + project_name, + task_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_tasks_links( + project_name, + task_ids, + link_types, + link_direction + ) + + +def get_task_links( + project_name, + task_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_task_links( + project_name, + task_id, + link_types, + link_direction + ) + + +def get_products_links( + project_name, + product_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_products_links( + project_name, + product_ids, + link_types, + link_direction + ) + + +def get_product_links( + project_name, + product_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_product_links( + project_name, + product_id, + link_types, + link_direction + ) + + +def get_versions_links( + project_name, + version_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_versions_links( + project_name, + version_ids, + link_types, + link_direction + ) + + +def get_version_links( + project_name, + version_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_version_links( + project_name, + version_id, + link_types, + link_direction + ) + + +def get_representations_links( + project_name, + representation_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_representations_links( + project_name, + representation_ids, + link_types, + link_direction + ) + + +def get_representation_links( + project_name, + representation_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_representation_links( + project_name, + representation_id, + link_types, + link_direction + ) + + +def send_batch_operations( + project_name, + operations, + can_fail=False, + raise_on_fail=True +): + con = get_server_api_connection() + return con.send_batch_operations( + project_name, + operations, + can_fail=can_fail, + raise_on_fail=raise_on_fail 
+ ) diff --git a/openpype/vendor/python/common/ayon_api/constants.py b/openpype/vendor/python/common/ayon_api/constants.py new file mode 100644 index 0000000000..eaeb77b607 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/constants.py @@ -0,0 +1,137 @@ +# Environments where server url and api key are stored for global connection +SERVER_URL_ENV_KEY = "AYON_SERVER_URL" +SERVER_API_ENV_KEY = "AYON_API_KEY" +SERVER_TIMEOUT_ENV_KEY = "AYON_SERVER_TIMEOUT" +SERVER_RETRIES_ENV_KEY = "AYON_SERVER_RETRIES" + +# Backwards compatibility +SERVER_TOKEN_ENV_KEY = SERVER_API_ENV_KEY + +# --- User --- +DEFAULT_USER_FIELDS = { + "accessGroups", + "defaultAccessGroups", + "name", + "isService", + "isManager", + "isGuest", + "isAdmin", + "createdAt", + "active", + "hasPassword", + "updatedAt", + "apiKeyPreview", + "attrib.avatarUrl", + "attrib.email", + "attrib.fullName", +} + +# --- Product types --- +DEFAULT_PRODUCT_TYPE_FIELDS = { + "name", + "icon", + "color", +} + +# --- Project --- +DEFAULT_PROJECT_FIELDS = { + "active", + "name", + "code", + "config", + "createdAt", +} + +# --- Folders --- +DEFAULT_FOLDER_FIELDS = { + "id", + "name", + "label", + "folderType", + "path", + "parentId", + "active", + "thumbnailId", +} + +# --- Tasks --- +DEFAULT_TASK_FIELDS = { + "id", + "name", + "label", + "taskType", + "folderId", + "active", + "assignees", +} + +# --- Products --- +DEFAULT_PRODUCT_FIELDS = { + "id", + "name", + "folderId", + "active", + "productType", +} + +# --- Versions --- +DEFAULT_VERSION_FIELDS = { + "id", + "name", + "version", + "productId", + "taskId", + "active", + "author", + "thumbnailId", + "createdAt", + "updatedAt", +} + +# --- Representations --- +DEFAULT_REPRESENTATION_FIELDS = { + "id", + "name", + "context", + "createdAt", + "active", + "versionId", +} + +REPRESENTATION_FILES_FIELDS = { + "files.name", + "files.hash", + "files.id", + "files.path", + "files.size", +} + +# --- Workfile info --- +DEFAULT_WORKFILE_INFO_FIELDS = { + "active", + "createdAt", + "createdBy", + "id", + "name", + "path", + "projectName", + "taskId", + "thumbnailId", + "updatedAt", + "updatedBy", +} + +DEFAULT_EVENT_FIELDS = { + "id", + "hash", + "createdAt", + "dependsOn", + "description", + "project", + "retries", + "sender", + "status", + "topic", + "updatedAt", + "user", +} diff --git a/openpype/vendor/python/common/ayon_api/entity_hub.py b/openpype/vendor/python/common/ayon_api/entity_hub.py new file mode 100644 index 0000000000..b9b017bac5 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/entity_hub.py @@ -0,0 +1,2647 @@ +import re +import copy +import collections +from abc import ABCMeta, abstractmethod + +import six +from ._api import get_server_api_connection +from .utils import create_entity_id, convert_entity_id, slugify_string + +UNKNOWN_VALUE = object() +PROJECT_PARENT_ID = object() +_NOT_SET = object() + + +class EntityHub(object): + """Helper to create, update or remove entities in project. + + The hub is a guide to operation with folder entities and update of project. + Project entity must already exist on server (can be only updated). + + Object is caching entities queried from server. They won't be required once + they were queried, so it is recommended to create new hub or clear cache + frequently. + + Todos: + Listen to server events about entity changes to be able update already + queried entities. + + Args: + project_name (str): Name of project where changes will happen. + connection (ServerAPI): Connection to server with logged user. 
+ allow_data_changes (bool): This option gives ability to change 'data' + key on entities. This is not recommended as 'data' may be used for + secure information and would also slow down server queries. Content + of 'data' key can't be retrieved using GraphQl. + """ + + def __init__( + self, project_name, connection=None, allow_data_changes=False + ): + if not connection: + connection = get_server_api_connection() + self._connection = connection + + self._project_name = project_name + self._entities_by_id = {} + self._entities_by_parent_id = collections.defaultdict(list) + self._project_entity = UNKNOWN_VALUE + + self._allow_data_changes = allow_data_changes + + self._path_reset_queue = None + + @property + def allow_data_changes(self): + """Entity hub allows changes of 'data' key on entities. + + Data are private and not all users may have access to them. Getting + 'data' of an entity also requires REST api calls, which means + querying each entity one-by-one from server. + + Returns: + bool: Data changes are allowed. + """ + + return self._allow_data_changes + + @property + def project_name(self): + """Project name which is maintained by hub. + + Returns: + str: Name of project. + """ + + return self._project_name + + @property + def project_entity(self): + """Project entity. + + Returns: + ProjectEntity: Project entity. + """ + + if self._project_entity is UNKNOWN_VALUE: + self.fill_project_from_server() + return self._project_entity + + def get_attributes_for_type(self, entity_type): + """Get attributes available for a type. + + Attributes are based on entity types. + + Todos: + Use attribute schema to validate values on entities. + + Args: + entity_type (Literal["project", "folder", "task"]): Entity type + for which should be attributes received. + + Returns: + Dict[str, Dict[str, Any]]: Attribute schemas that are available + for entered entity type. + """ + + return self._connection.get_attributes_for_type(entity_type) + + def get_entity_by_id(self, entity_id): + """Receive entity by its id without entity type. + + The entity must already exist in cached objects. + + Args: + entity_id (str): Id of entity. + + Returns: + Union[BaseEntity, None]: Entity object or None. + """ + + return self._entities_by_id.get(entity_id) + + def get_folder_by_id(self, entity_id, allow_query=True): + """Get folder entity by id. + + Args: + entity_id (str): Id of folder entity. + allow_query (bool): Try to query entity from server if is not + available in cache. + + Returns: + Union[FolderEntity, None]: Object of folder or 'None'. + """ + + if allow_query: + return self.get_or_query_entity_by_id(entity_id, ["folder"]) + return self._entities_by_id.get(entity_id) + + def get_task_by_id(self, entity_id, allow_query=True): + """Get task entity by id. + + Args: + entity_id (str): Id of task entity. + allow_query (bool): Try to query entity from server if is not + available in cache. + + Returns: + Union[TaskEntity, None]: Object of task or 'None'. + """ + + if allow_query: + return self.get_or_query_entity_by_id(entity_id, ["task"]) + return self._entities_by_id.get(entity_id) + + def get_or_query_entity_by_id(self, entity_id, entity_types): + """Get or query entity based on its id and possible entity types. + + This is a helper function when entity id is known but entity type may + have multiple possible options. + + Args: + entity_id (str): Entity id. + entity_types (Iterable[str]): Possible entity types that the id can + represent, e.g. 
'["folder", "project"]' + """ + + existing_entity = self._entities_by_id.get(entity_id) + if existing_entity is not None: + return existing_entity + + if not entity_types: + return None + + entity_data = None + for entity_type in entity_types: + if entity_type == "folder": + entity_data = self._connection.get_folder_by_id( + self.project_name, + entity_id, + fields=self._get_folder_fields(), + own_attributes=True + ) + elif entity_type == "task": + entity_data = self._connection.get_task_by_id( + self.project_name, + entity_id, + own_attributes=True + ) + else: + raise ValueError( + "Unknonwn entity type \"{}\"".format(entity_type) + ) + + if entity_data: + break + + if not entity_data: + return None + + if entity_type == "folder": + return self.add_folder(entity_data) + elif entity_type == "task": + return self.add_task(entity_data) + + return None + + @property + def entities(self): + """Iterator over available entities. + + Returns: + Iterator[BaseEntity]: All queried/created entities cached in hub. + """ + + for entity in self._entities_by_id.values(): + yield entity + + def add_new_folder(self, *args, created=True, **kwargs): + """Create folder object and add it to entity hub. + + Args: + folder_type (str): Type of folder. Folder type must be available in + config of project folder types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Folder label. + path (Optional[str]): Folder path. Path consist of all parent names + with slash('/') used as separator. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + + Returns: + FolderEntity: Added folder entity. + """ + + folder_entity = FolderEntity( + *args, **kwargs, created=created, entity_hub=self + ) + self.add_entity(folder_entity) + return folder_entity + + def add_new_task(self, *args, created=True, **kwargs): + """Create folder object and add it to entity hub. + + Args: + task_type (str): Type of task. Task type must be available in + config of project folder types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Folder label. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + + Returns: + TaskEntity: Added task entity. + """ + + task_entity = TaskEntity( + *args, **kwargs, created=created, entity_hub=self + ) + self.add_entity(task_entity) + return task_entity + + def add_folder(self, folder): + """Create folder object and add it to entity hub. + + Args: + folder (Dict[str, Any]): Folder entity data. + + Returns: + FolderEntity: Added folder entity. + """ + + folder_entity = FolderEntity.from_entity_data(folder, entity_hub=self) + self.add_entity(folder_entity) + return folder_entity + + def add_task(self, task): + """Create task object and add it to entity hub. + + Args: + task (Dict[str, Any]): Task entity data. 
+ + Returns: + TaskEntity: Added task entity. + """ + + task_entity = TaskEntity.from_entity_data(task, entity_hub=self) + self.add_entity(task_entity) + return task_entity + + def add_entity(self, entity): + """Add entity to hub cache. + + Args: + entity (BaseEntity): Entity that should be added to hub's cache. + """ + + self._entities_by_id[entity.id] = entity + parent_children = self._entities_by_parent_id[entity.parent_id] + if entity not in parent_children: + parent_children.append(entity) + + if entity.parent_id is PROJECT_PARENT_ID: + return + + parent = self._entities_by_id.get(entity.parent_id) + if parent is not None: + parent.add_child(entity.id) + + def folder_path_reseted(self, folder_id): + """Method called from 'FolderEntity' on path reset. + + This should reset cache of folder paths on all children entities. + + The path cache is always propagated from top to bottom, so if an entity + does not have a cached path, none of its children can have it cached. + """ + + if self._path_reset_queue is not None: + self._path_reset_queue.append(folder_id) + return + + self._path_reset_queue = collections.deque() + self._path_reset_queue.append(folder_id) + while self._path_reset_queue: + folder_id = self._path_reset_queue.popleft() + children = self._entities_by_parent_id[folder_id] + for child in children: + # Get child path but don't trigger cache + path = child.get_path(False) + if path is not None: + # Reset its path cache if it is set + child.reset_path() + else: + self._path_reset_queue.append(child.id) + + self._path_reset_queue = None + + def unset_entity_parent(self, entity_id, parent_id): + entity = self._entities_by_id.get(entity_id) + parent = self._entities_by_id.get(parent_id) + children_ids = UNKNOWN_VALUE + if parent is not None: + children_ids = parent.get_children_ids(False) + + has_set_parent = False + if entity is not None: + has_set_parent = entity.parent_id == parent_id + + new_parent_id = None + if has_set_parent: + entity.parent_id = new_parent_id + + if children_ids is not UNKNOWN_VALUE and entity_id in children_ids: + parent.remove_child(entity_id) + + if entity is None or not has_set_parent: + self.reset_immutable_for_hierarchy_cache(parent_id) + return + + orig_parent_children = self._entities_by_parent_id[parent_id] + if entity in orig_parent_children: + orig_parent_children.remove(entity) + + new_parent_children = self._entities_by_parent_id[new_parent_id] + if entity not in new_parent_children: + new_parent_children.append(entity) + self.reset_immutable_for_hierarchy_cache(parent_id) + + def set_entity_parent(self, entity_id, parent_id, orig_parent_id=_NOT_SET): + parent = self._entities_by_id.get(parent_id) + entity = self._entities_by_id.get(entity_id) + if entity is None: + if parent is not None: + children_ids = parent.get_children_ids(False) + if ( + children_ids is not UNKNOWN_VALUE + and entity_id in children_ids + ): + parent.remove_child(entity_id) + self.reset_immutable_for_hierarchy_cache(parent.id) + return + + if orig_parent_id is _NOT_SET: + orig_parent_id = entity.parent_id + if orig_parent_id == parent_id: + return + + orig_parent_children = self._entities_by_parent_id[orig_parent_id] + if entity in orig_parent_children: + orig_parent_children.remove(entity) + self.reset_immutable_for_hierarchy_cache(orig_parent_id) + + orig_parent = self._entities_by_id.get(orig_parent_id) + if orig_parent is not None: + orig_parent.remove_child(entity_id) + + parent_children = self._entities_by_parent_id[parent_id] + if entity not in parent_children: + parent_children.append(entity) + + entity.parent_id 
= parent_id + if parent is None or parent.get_children_ids(False) is UNKNOWN_VALUE: + return + + parent.add_child(entity_id) + self.reset_immutable_for_hierarchy_cache(parent_id) + + def _query_entity_children(self, entity): + folder_fields = self._get_folder_fields() + tasks = [] + folders = [] + if entity.entity_type == "project": + folders = list(self._connection.get_folders( + entity["name"], + parent_ids=[entity.id], + fields=folder_fields, + own_attributes=True + )) + + elif entity.entity_type == "folder": + folders = list(self._connection.get_folders( + self.project_entity["name"], + parent_ids=[entity.id], + fields=folder_fields, + own_attributes=True + )) + + tasks = list(self._connection.get_tasks( + self.project_entity["name"], + folder_ids=[entity.id], + own_attributes=True + )) + + children_ids = { + child.id + for child in self._entities_by_parent_id[entity.id] + } + for folder in folders: + folder_entity = self._entities_by_id.get(folder["id"]) + if folder_entity is not None: + if folder_entity.parent_id == entity.id: + children_ids.add(folder_entity.id) + continue + + folder_entity = self.add_folder(folder) + children_ids.add(folder_entity.id) + + for task in tasks: + task_entity = self._entities_by_id.get(task["id"]) + if task_entity is not None: + if task_entity.parent_id == entity.id: + children_ids.add(task_entity.id) + continue + + task_entity = self.add_task(task) + children_ids.add(task_entity.id) + + entity.fill_children_ids(children_ids) + + def get_entity_children(self, entity, allow_query=True): + children_ids = entity.get_children_ids(allow_query=False) + if children_ids is not UNKNOWN_VALUE: + return entity.get_children() + + if children_ids is UNKNOWN_VALUE and not allow_query: + return UNKNOWN_VALUE + + self._query_entity_children(entity) + + return entity.get_children() + + def delete_entity(self, entity): + parent_id = entity.parent_id + if parent_id is None: + return + + parent = self._entities_by_id.get(parent_id) + if parent is not None: + parent.remove_child(entity.id) + + def reset_immutable_for_hierarchy_cache( + self, entity_id, bottom_to_top=True + ): + if bottom_to_top is None or entity_id is None: + return + + reset_queue = collections.deque() + reset_queue.append(entity_id) + if bottom_to_top: + while reset_queue: + entity_id = reset_queue.popleft() + entity = self.get_entity_by_id(entity_id) + if entity is None: + continue + entity.reset_immutable_for_hierarchy_cache(None) + reset_queue.append(entity.parent_id) + else: + while reset_queue: + entity_id = reset_queue.popleft() + entity = self.get_entity_by_id(entity_id) + if entity is None: + continue + entity.reset_immutable_for_hierarchy_cache(None) + for child in self._entities_by_parent_id[entity.id]: + reset_queue.append(child.id) + + def fill_project_from_server(self): + """Query project data from server and create project entity. + + This method will invalidate previous object of Project entity. + + Returns: + ProjectEntity: Entity that was updated with server data. + + Raises: + ValueError: When project was not found on server. 
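+ + Example (illustrative; the project name is hypothetical): + >>> hub = EntityHub("demo_project") + >>> project = hub.fill_project_from_server() + >>> project.name + 'demo_project'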
+ """ + + project_name = self.project_name + project = self._connection.get_project( + project_name, + own_attributes=True + ) + if not project: + raise ValueError( + "Project \"{}\" was not found.".format(project_name) + ) + + self._project_entity = ProjectEntity( + project["code"], + parent_id=PROJECT_PARENT_ID, + entity_id=project["name"], + library=project["library"], + folder_types=project["folderTypes"], + task_types=project["taskTypes"], + statuses=project["statuses"], + name=project["name"], + attribs=project["ownAttrib"], + data=project["data"], + active=project["active"], + entity_hub=self + ) + self.add_entity(self._project_entity) + return self._project_entity + + def _get_folder_fields(self): + folder_fields = set( + self._connection.get_default_fields_for_type("folder") + ) + folder_fields.add("hasProducts") + if self._allow_data_changes: + folder_fields.add("data") + return folder_fields + + def query_entities_from_server(self): + """Query whole project at once.""" + + project_entity = self.fill_project_from_server() + + folder_fields = self._get_folder_fields() + + folders = self._connection.get_folders( + project_entity.name, + fields=folder_fields, + own_attributes=True + ) + tasks = self._connection.get_tasks( + project_entity.name, + own_attributes=True + ) + folders_by_parent_id = collections.defaultdict(list) + for folder in folders: + parent_id = folder["parentId"] + folders_by_parent_id[parent_id].append(folder) + + tasks_by_parent_id = collections.defaultdict(list) + for task in tasks: + parent_id = task["folderId"] + tasks_by_parent_id[parent_id].append(task) + + lock_queue = collections.deque() + hierarchy_queue = collections.deque() + hierarchy_queue.append((None, project_entity)) + while hierarchy_queue: + item = hierarchy_queue.popleft() + parent_id, parent_entity = item + + lock_queue.append(parent_entity) + + children_ids = set() + for folder in folders_by_parent_id[parent_id]: + folder_entity = self.add_folder(folder) + children_ids.add(folder_entity.id) + folder_entity.has_published_content = folder["hasProducts"] + hierarchy_queue.append((folder_entity.id, folder_entity)) + + for task in tasks_by_parent_id[parent_id]: + task_entity = self.add_task(task) + lock_queue.append(task_entity) + children_ids.add(task_entity.id) + + parent_entity.fill_children_ids(children_ids) + + # Lock entities when all are added to hub + # - lock only entities added in this method + while lock_queue: + entity = lock_queue.popleft() + entity.lock() + + def lock(self): + if self._project_entity is None: + return + + for entity in self._entities_by_id.values(): + entity.lock() + + def _get_top_entities(self): + all_ids = set(self._entities_by_id.keys()) + return [ + entity + for entity in self._entities_by_id.values() + if entity.parent_id not in all_ids + ] + + def _split_entities(self): + top_entities = self._get_top_entities() + entities_queue = collections.deque(top_entities) + removed_entity_ids = [] + created_entity_ids = [] + other_entity_ids = [] + while entities_queue: + entity = entities_queue.popleft() + removed = entity.removed + if removed: + removed_entity_ids.append(entity.id) + elif entity.created: + created_entity_ids.append(entity.id) + else: + other_entity_ids.append(entity.id) + + for child in tuple(self._entities_by_parent_id[entity.id]): + if removed: + self.unset_entity_parent(child.id, entity.id) + entities_queue.append(child) + return created_entity_ids, other_entity_ids, removed_entity_ids + + def _get_update_body(self, entity, changes=None): + if 
changes is None: + changes = entity.changes + + if not changes: + return None + return { + "type": "update", + "entityType": entity.entity_type, + "entityId": entity.id, + "data": changes + } + + def _get_create_body(self, entity): + return { + "type": "create", + "entityType": entity.entity_type, + "entityId": entity.id, + "data": entity.to_create_body_data() + } + + def _get_delete_body(self, entity): + return { + "type": "delete", + "entityType": entity.entity_type, + "entityId": entity.id + } + + def _pre_commit_types_changes( + self, project_changes, orig_types, changes_key, post_changes + ): + """Compare changes of types on a project. + + Compare old and new types, and change the content of project changes + if some old types were removed. In that case the final change of types + happens after all other entities have changed. + + Args: + project_changes (dict[str, Any]): Project changes. + orig_types (list[dict[str, Any]]): Original types. + changes_key (Literal[folderTypes, taskTypes]): Key of type changes + in project changes. + post_changes (dict[str, Any]): An object where post changes will + be stored. + """ + + if changes_key not in project_changes: + return + + new_types = project_changes[changes_key] + + orig_types_by_name = { + type_info["name"]: type_info + for type_info in orig_types + } + new_names = { + type_info["name"] + for type_info in new_types + } + diff_names = set(orig_types_by_name) - new_names + if not diff_names: + return + + # Create copy of folder type changes to post changes + # - post changes will be committed at the end + post_changes[changes_key] = copy.deepcopy(new_types) + + for type_name in diff_names: + new_types.append(orig_types_by_name[type_name]) + + def _pre_commit_project(self): + """Some project changes cannot be committed before hierarchy changes. + + It is not possible to change folder types or task types if there are + existing hierarchy items using the removed types. For that purpose, + a union of all old and new types is committed first, and post changes + are prepared to be applied once all existing entities are changed. + + Returns: + dict[str, Any]: Changes that will be committed after hierarchy + changes. + """ + + project_changes = self.project_entity.changes + + post_changes = {} + if not project_changes: + return post_changes + + self._pre_commit_types_changes( + project_changes, + self.project_entity.get_orig_folder_types(), + "folderType", + post_changes + ) + self._pre_commit_types_changes( + project_changes, + self.project_entity.get_orig_task_types(), + "taskType", + post_changes + ) + self._connection.update_project(self.project_name, **project_changes) + return post_changes + + def commit_changes(self): + """Commit any changes that happened on entities. + + Todos: + Use Operations Session instead of known operations body. 
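+ + Example (illustrative workflow; project and folder names are + hypothetical): + >>> hub = EntityHub("demo_project") + >>> hub.query_entities_from_server() + >>> folder = hub.add_new_folder("Asset", name="hero_char") + >>> hub.commit_changes() # create/update/delete ops are sent in one batch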
+ """ + + post_project_changes = self._pre_commit_project() + self.project_entity.lock() + + project_changes = self.project_entity.changes + if project_changes: + response = self._connection.patch( + "projects/{}".format(self.project_name), + **project_changes + ) + response.raise_for_status() + + self.project_entity.lock() + + operations_body = [] + + created_entity_ids, other_entity_ids, removed_entity_ids = ( + self._split_entities() + ) + processed_ids = set() + for entity_id in other_entity_ids: + if entity_id in processed_ids: + continue + + entity = self._entities_by_id[entity_id] + changes = entity.changes + processed_ids.add(entity_id) + if not changes: + continue + + bodies = [self._get_update_body(entity, changes)] + # Parent was created and was not yet added to operations body + parent_queue = collections.deque() + parent_queue.append(entity.parent_id) + while parent_queue: + # Make sure entity's parents are created + parent_id = parent_queue.popleft() + if ( + parent_id is UNKNOWN_VALUE + or parent_id in processed_ids + or parent_id not in created_entity_ids + ): + continue + + parent = self._entities_by_id.get(parent_id) + processed_ids.add(parent.id) + bodies.append(self._get_create_body(parent)) + parent_queue.append(parent.id) + + operations_body.extend(reversed(bodies)) + + for entity_id in created_entity_ids: + if entity_id in processed_ids: + continue + entity = self._entities_by_id[entity_id] + processed_ids.add(entity_id) + operations_body.append(self._get_create_body(entity)) + + for entity_id in reversed(removed_entity_ids): + if entity_id in processed_ids: + continue + + entity = self._entities_by_id.pop(entity_id) + parent_children = self._entities_by_parent_id[entity.parent_id] + if entity in parent_children: + parent_children.remove(entity) + + if not entity.created: + operations_body.append(self._get_delete_body(entity)) + + self._connection.send_batch_operations( + self.project_name, operations_body + ) + if post_project_changes: + self._connection.update_project( + self.project_name, **post_project_changes) + + self.lock() + + +class AttributeValue(object): + def __init__(self, value): + self._value = value + self._origin_value = copy.deepcopy(value) + + def get_value(self): + return self._value + + def set_value(self, value): + self._value = value + + value = property(get_value, set_value) + + @property + def changed(self): + return self._value != self._origin_value + + def lock(self): + self._origin_value = copy.deepcopy(self._value) + + +class Attributes(object): + """Object representing attribs of entity. + + Todos: + This could be enhanced to know attribute schema and validate values + based on the schema. + + Args: + attrib_keys (Iterable[str]): Keys that are available in attribs of the + entity. + values (Union[None, Dict[str, Any]]): Values of attributes. 
+ """ + + def __init__(self, attrib_keys, values=UNKNOWN_VALUE): + if values in (UNKNOWN_VALUE, None): + values = {} + self._attributes = { + key: AttributeValue(values.get(key)) + for key in attrib_keys + } + + def __contains__(self, key): + return key in self._attributes + + def __getitem__(self, key): + return self._attributes[key].value + + def __setitem__(self, key, value): + self._attributes[key].set_value(value) + + def __iter__(self): + for key in self._attributes: + yield key + + def keys(self): + return self._attributes.keys() + + def values(self): + for attribute in self._attributes.values(): + yield attribute.value + + def items(self): + for key, attribute in self._attributes.items(): + yield key, attribute.value + + def get(self, key, default=None): + """Get value of attribute. + + Args: + key (str): Attribute name. + default (Any): Default value to return when attribute was not + found. + """ + + attribute = self._attributes.get(key) + if attribute is None: + return default + return attribute.value + + def set(self, key, value): + """Change value of attribute. + + Args: + key (str): Attribute name. + value (Any): New value of the attribute. + """ + + self[key] = value + + def get_attribute(self, key): + """Access to attribute object. + + Args: + key (str): Name of attribute. + + Returns: + AttributeValue: Object of attribute value. + + Raises: + KeyError: When attribute is not available. + """ + + return self._attributes[key] + + def lock(self): + for attribute in self._attributes.values(): + attribute.lock() + + @property + def changes(self): + """Attribute value changes. + + Returns: + Dict[str, Any]: Key mapping with new values. + """ + + return { + attr_key: attribute.value + for attr_key, attribute in self._attributes.items() + if attribute.changed + } + + def to_dict(self, ignore_none=True): + output = {} + for key, value in self.items(): + if ( + value is UNKNOWN_VALUE + or (ignore_none and value is None) + ): + continue + + output[key] = value + return output + + +@six.add_metaclass(ABCMeta) +class BaseEntity(object): + """Object representation of entity from server which is capturing changes. + + All data on created object are expected as "current data" on server entity + unless the entity has set 'created' to 'True'. So if new data should be + stored to server entity then fill entity with server data first and + then change them. + + Calling 'lock' method will mark entity as "saved" and all changes made on + entity are set as "current data" on server. + + Args: + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. 
+ """ + + def __init__( + self, + entity_id=None, + parent_id=UNKNOWN_VALUE, + name=UNKNOWN_VALUE, + attribs=UNKNOWN_VALUE, + data=UNKNOWN_VALUE, + thumbnail_id=UNKNOWN_VALUE, + active=UNKNOWN_VALUE, + entity_hub=None, + created=None + ): + if entity_hub is None: + raise ValueError("Missing required kwarg 'entity_hub'") + + self._entity_hub = entity_hub + + if created is None: + created = entity_id is None + + entity_id = self._prepare_entity_id(entity_id) + + if data is None: + data = {} + + children_ids = UNKNOWN_VALUE + if created: + children_ids = set() + + if not created and parent_id is UNKNOWN_VALUE: + raise ValueError("Existing entity is missing parent id.") + + # These are public without any validation at this moment + # may change in future (e.g. name will have regex validation) + self._entity_id = entity_id + + self._parent_id = parent_id + self._name = name + self.active = active + self._created = created + self._thumbnail_id = thumbnail_id + self._attribs = Attributes( + self._get_attributes_for_type(self.entity_type), + attribs + ) + self._data = data + self._children_ids = children_ids + + self._orig_parent_id = parent_id + self._orig_name = name + self._orig_data = copy.deepcopy(data) + self._orig_thumbnail_id = thumbnail_id + self._orig_active = active + + self._immutable_for_hierarchy_cache = None + + def __repr__(self): + return "<{} - {}>".format(self.__class__.__name__, self.id) + + def __getitem__(self, item): + return getattr(self, item) + + def __setitem__(self, item, value): + return setattr(self, item, value) + + def _prepare_entity_id(self, entity_id): + entity_id = convert_entity_id(entity_id) + if entity_id is None: + entity_id = create_entity_id() + return entity_id + + @property + def id(self): + """Access to entity id under which is entity available on server. + + Returns: + str: Entity id. + """ + + return self._entity_id + + @property + def removed(self): + return self._parent_id is None + + @property + def orig_parent_id(self): + return self._orig_parent_id + + @property + def attribs(self): + """Entity attributes based on server configuration. + + Returns: + Attributes: Attributes object handling changes and values of + attributes on entity. + """ + + return self._attribs + + @property + def data(self): + """Entity custom data that are not stored by any deterministic model. + + Be aware that 'data' can't be queried using GraphQl and cannot be + updated partially. + + Returns: + Dict[str, Any]: Custom data on entity. + """ + + return self._data + + @property + def project_name(self): + """Quick access to project from entity hub. + + Returns: + str: Name of project under which entity lives. + """ + + return self._entity_hub.project_name + + @property + @abstractmethod + def entity_type(self): + """Entity type coresponding to server. + + Returns: + Literal[project, folder, task]: Entity type. + """ + + pass + + @property + @abstractmethod + def parent_entity_types(self): + """Entity type coresponding to server. + + Returns: + Iterable[str]: Possible entity types of parent. + """ + + pass + + @property + @abstractmethod + def changes(self): + """Receive entity changes. + + Returns: + Union[Dict[str, Any], None]: All values that have changed on + entity. New entity must return None. + """ + + pass + + @classmethod + @abstractmethod + def from_entity_data(cls, entity_data, entity_hub): + """Create entity based on queried data from server. + + Args: + entity_data (Dict[str, Any]): Entity data from server. + entity_hub (EntityHub): Hub which handle the entity. 
+ + Returns: + BaseEntity: Object of the class. + """ + + pass + + @abstractmethod + def to_create_body_data(self): + """Convert object of entity to data for server on creation. + + Returns: + Dict[str, Any]: Entity data. + """ + + pass + + @property + def immutable_for_hierarchy(self): + """Entity is immutable for hierarchy changes. + + Hierarchy changes can be considered as change of name or parents. + + Returns: + bool: Entity is immutable for hierarchy changes. + """ + + if self._immutable_for_hierarchy_cache is not None: + return self._immutable_for_hierarchy_cache + + immutable_for_hierarchy = self._immutable_for_hierarchy + if immutable_for_hierarchy is not None: + self._immutable_for_hierarchy_cache = immutable_for_hierarchy + return self._immutable_for_hierarchy_cache + + for child in self._entity_hub.get_entity_children(self): + if child.immutable_for_hierarchy: + self._immutable_for_hierarchy_cache = True + return self._immutable_for_hierarchy_cache + + self._immutable_for_hierarchy_cache = False + return self._immutable_for_hierarchy_cache + + @property + def _immutable_for_hierarchy(self): + """Override this method to define if entity object is immutable. + + This property was added to define immutable state of Folder entities + which is used in property 'immutable_for_hierarchy'. + + Returns: + Union[bool, None]: Bool explicitly telling if it is immutable, + otherwise None. + """ + + return None + + @property + def has_cached_immutable_hierarchy(self): + return self._immutable_for_hierarchy_cache is not None + + def reset_immutable_for_hierarchy_cache(self, bottom_to_top=True): + """Clear cache of immutable hierarchy property. + + This is used when entity changed parent or a child was added. + + Args: + bottom_to_top (bool): Reset cache from top hierarchy to bottom or + from bottom hierarchy to top. + """ + + self._immutable_for_hierarchy_cache = None + self._entity_hub.reset_immutable_for_hierarchy_cache( + self.id, bottom_to_top + ) + + def _get_default_changes(self): + """Collect changes of common data on entity. + + Returns: + Dict[str, Any]: Changes on entity. Key and its new value. + """ + + changes = {} + if self._orig_name != self._name: + changes["name"] = self._name + + if self._entity_hub.allow_data_changes: + if self._orig_data != self._data: + changes["data"] = self._data + + if self._orig_thumbnail_id != self._thumbnail_id: + changes["thumbnailId"] = self._thumbnail_id + + if self._orig_active != self.active: + changes["active"] = self.active + + attrib_changes = self.attribs.changes + if attrib_changes: + changes["attrib"] = attrib_changes + return changes + + def _get_attributes_for_type(self, entity_type): + return self._entity_hub.get_attributes_for_type(entity_type) + + def lock(self): + """Lock entity as 'saved' so all changes are discarded.""" + + self._orig_parent_id = self._parent_id + self._orig_name = self._name + self._orig_data = copy.deepcopy(self._data) + self._orig_thumbnail_id = self.thumbnail_id + self._attribs.lock() + + self._immutable_for_hierarchy_cache = None + self._created = False + + def _get_entity_by_id(self, entity_id): + return self._entity_hub.get_entity_by_id(entity_id) + + def get_name(self): + return self._name + + def set_name(self, name): + self._name = name + + name = property(get_name, set_name) + + def get_parent_id(self): + """Parent entity id. + + Returns: + Union[str, None]: Id of parent entity or None if it is not set. + """ + + return self._parent_id + + def set_parent_id(self, parent_id): + """Change parent by id. 
+ + Args: + parent_id (Union[str, None]): Id of new parent for entity. + + Raises: + ValueError: If parent was not found by id. + TypeError: If validation of parent does not pass. + """ + + if parent_id != self._parent_id: + orig_parent_id = self._parent_id + self._parent_id = parent_id + self._entity_hub.set_entity_parent( + self.id, parent_id, orig_parent_id + ) + + parent_id = property(get_parent_id, set_parent_id) + + def get_parent(self, allow_query=True): + """Parent entity. + + Returns: + Union[BaseEntity, None]: Parent object. + """ + + parent = self._entity_hub.get_entity_by_id(self._parent_id) + if parent is not None: + return parent + + if not allow_query: + return self._parent_id + + if self._parent_id is UNKNOWN_VALUE: + return self._parent_id + + return self._entity_hub.get_or_query_entity_by_id( + self._parent_id, self.parent_entity_types + ) + + def set_parent(self, parent): + """Change parent object. + + Args: + parent (BaseEntity): New parent for entity. + + Raises: + TypeError: If validation of parent does not pass. + """ + + parent_id = None + if parent is not None: + parent_id = parent.id + self._entity_hub.set_entity_parent(self.id, parent_id) + + parent = property(get_parent, set_parent) + + def get_children_ids(self, allow_query=True): + """Access to children objects. + + Todos: + Children should be maybe handled by EntityHub instead of entities + themselves. That would simplify 'set_entity_parent', + 'unset_entity_parent' and other logic related to changing + hierarchy. + + Returns: + Union[List[str], Type[UNKNOWN_VALUE]]: Children iterator. + """ + + if self._children_ids is UNKNOWN_VALUE: + if not allow_query: + return self._children_ids + self._entity_hub.get_entity_children(self, True) + return set(self._children_ids) + + children_ids = property(get_children_ids) + + def get_children(self, allow_query=True): + """Access to children objects. + + Returns: + Union[List[BaseEntity], Type[UNKNOWN_VALUE]]: Children iterator. + """ + + if self._children_ids is UNKNOWN_VALUE: + if not allow_query: + return self._children_ids + return self._entity_hub.get_entity_children(self, True) + + return [ + self._entity_hub.get_entity_by_id(children_id) + for children_id in self._children_ids + ] + + children = property(get_children) + + def add_child(self, child): + """Add child entity. + + Args: + child (BaseEntity): Child object to add. + + Raises: + TypeError: When child object has invalid type to be children. + """ + + child_id = child + if isinstance(child_id, BaseEntity): + child_id = child.id + + if self._children_ids is not UNKNOWN_VALUE: + self._children_ids.add(child_id) + + self._entity_hub.set_entity_parent(child_id, self.id) + + def remove_child(self, child): + """Remove child entity. + + Is ignored if child is not in children. + + Args: + child (Union[str, BaseEntity]): Child object or child id to remove. + """ + + child_id = child + if isinstance(child_id, BaseEntity): + child_id = child.id + + if self._children_ids is not UNKNOWN_VALUE: + self._children_ids.discard(child_id) + self._entity_hub.unset_entity_parent(child_id, self.id) + + def get_thumbnail_id(self): + """Thumbnail id of entity. + + Returns: + Union[str, None]: Id of parent entity or none if is not set. + """ + + return self._thumbnail_id + + def set_thumbnail_id(self, thumbnail_id): + """Change thumbnail id. + + Args: + thumbnail_id (Union[str, None]): Id of thumbnail for entity. 
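+ + Example (illustrative; the id value is hypothetical and 'entity' is a + concrete subclass such as FolderEntity): + >>> entity.thumbnail_id = "a2f4b6c8d0e24f6a8b0c2d4e6f8a0b1c" + >>> "thumbnailId" in entity.changes + True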
+ """ + + self._thumbnail_id = thumbnail_id + + thumbnail_id = property(get_thumbnail_id, set_thumbnail_id) + + @property + def created(self): + """Entity is new. + + Returns: + bool: Entity is newly created. + """ + + return self._created + + def fill_children_ids(self, children_ids): + """Fill children ids on entity. + + Warning: + This is not an api call but is called from entity hub. + """ + + self._children_ids = set(children_ids) + + +class ProjectStatus: + """Project status class. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + index (Optional[int]): Index of the status. + project_statuses (Optional[_ProjectStatuses]): Project statuses + wrapper. + """ + + valid_states = ("not_started", "in_progress", "done", "blocked") + color_regex = re.compile(r"#([a-f0-9]{6})$") + default_state = "in_progress" + default_color = "#eeeeee" + + def __init__( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + index=None, + project_statuses=None, + is_new=None, + ): + short_name = short_name or "" + icon = icon or "" + state = state or self.default_state + color = color or self.default_color + self._name = name + self._short_name = short_name + self._icon = icon + self._slugified_name = None + self._state = None + self._color = None + self.set_state(state) + self.set_color(color) + + self._original_name = name + self._original_short_name = short_name + self._original_icon = icon + self._original_state = state + self._original_color = color + self._original_index = index + + self._index = index + self._project_statuses = project_statuses + if is_new is None: + is_new = index is None or project_statuses is None + self._is_new = is_new + + def __str__(self): + short_name = "" + if self.short_name: + short_name = "({})".format(self.short_name) + return "<{} {}{}>".format( + self.__class__.__name__, self.name, short_name + ) + + def __repr__(self): + return str(self) + + def __getitem__(self, key): + if key in { + "name", "short_name", "icon", "state", "color", "slugified_name" + }: + return getattr(self, key) + raise KeyError(key) + + def __setitem__(self, key, value): + if key in {"name", "short_name", "icon", "state", "color"}: + return setattr(self, key, value) + raise KeyError(key) + + def lock(self): + """Lock status. + + Changes were commited and current values are now the original values. + """ + + self._is_new = False + self._original_name = self.name + self._original_short_name = self.short_name + self._original_icon = self.icon + self._original_state = self.state + self._original_color = self.color + self._original_index = self.index + + @staticmethod + def slugify_name(name): + """Slugify status name for name comparison. + + Args: + name (str): Name of the status. + + Returns: + str: Slugified name. + """ + + return slugify_string(name.lower()) + + def get_project_statuses(self): + """Internal logic method. + + Returns: + _ProjectStatuses: Project statuses object. + """ + + return self._project_statuses + + def set_project_statuses(self, project_statuses): + """Internal logic method to change parent object. + + Args: + project_statuses (_ProjectStatuses): Project statuses object. 
+ """ + + self._project_statuses = project_statuses + + def unset_project_statuses(self, project_statuses): + """Internal logic method to unset parent object. + + Args: + project_statuses (_ProjectStatuses): Project statuses object. + """ + + if self._project_statuses is project_statuses: + self._project_statuses = None + self._index = None + + @property + def changed(self): + """Status has changed. + + Returns: + bool: Status has changed. + """ + + return ( + self._is_new + or self._original_name != self._name + or self._original_short_name != self._short_name + or self._original_index != self._index + or self._original_state != self._state + or self._original_icon != self._icon + or self._original_color != self._color + ) + + def delete(self): + """Remove status from project statuses object.""" + + if self._project_statuses is not None: + self._project_statuses.remove(self) + + def get_index(self): + """Get index of status. + + Returns: + Union[int, None]: Index of status or None if status is not under + project. + """ + + return self._index + + def set_index(self, index, **kwargs): + """Change status index. + + Returns: + Union[int, None]: Index of status or None if status is not under + project. + """ + + if kwargs.get("from_parent"): + self._index = index + else: + self._project_statuses.set_status_index(self, index) + + def get_name(self): + """Status name. + + Returns: + str: Status name. + """ + + return self._name + + def set_name(self, name): + """Change status name. + + Args: + name (str): New status name. + """ + + if not isinstance(name, six.string_types): + raise TypeError("Name must be a string.") + if name == self._name: + return + self._name = name + self._slugified_name = None + + def get_short_name(self): + """Status short name 3 letters tops. + + Returns: + str: Status short name. + """ + + return self._short_name + + def set_short_name(self, short_name): + """Change status short name. + + Args: + short_name (str): New status short name. 3 letters tops. + """ + + if not isinstance(short_name, six.string_types): + raise TypeError("Short name must be a string.") + self._short_name = short_name + + def get_icon(self): + """Name of icon to use for status. + + Returns: + str: Name of the icon. + """ + + return self._icon + + def set_icon(self, icon): + """Change status icon name. + + Args: + icon (str): Name of the icon. + """ + + if icon is None: + icon = "" + if not isinstance(icon, six.string_types): + raise TypeError("Icon name must be a string.") + self._icon = icon + + @property + def slugified_name(self): + """Slugified and lowere status name. + + Can be used for comparison of existing statuses. e.g. 'In Progress' + vs. 'in-progress'. + + Returns: + str: Slugified and lower status name. + """ + + if self._slugified_name is None: + self._slugified_name = self.slugify_name(self.name) + return self._slugified_name + + def get_state(self): + """Get state of project status. + + Return: + Literal[not_started, in_progress, done, blocked]: General + state of status. + """ + + return self._state + + def set_state(self, state): + """Set color of project status. + + Args: + state (Literal[not_started, in_progress, done, blocked]): General + state of status. + """ + + if state not in self.valid_states: + raise ValueError("Invalid state '{}'".format(str(state))) + self._state = state + + def get_color(self): + """Get color of project status. + + Returns: + str: Status color. + """ + + return self._color + + def set_color(self, color): + """Set color of project status. 
+ + Args: + color (str): Color in hex format. Example: '#ff0000'. + """ + + if not isinstance(color, six.string_types): + raise TypeError( + "Color must be string got '{}'".format(type(color))) + color = color.lower() + if self.color_regex.fullmatch(color) is None: + raise ValueError("Invalid color value '{}'".format(color)) + self._color = color + + name = property(get_name, set_name) + short_name = property(get_short_name, set_short_name) + project_statuses = property(get_project_statuses, set_project_statuses) + index = property(get_index, set_index) + state = property(get_state, set_state) + color = property(get_color, set_color) + icon = property(get_icon, set_icon) + + def _validate_other_p_statuses(self, other): + """Validate if other status can be used for move. + + To be able to work with other status, and position them in relation, + they must belong to same existing object of '_ProjectStatuses'. + + Args: + other (ProjectStatus): Other status to validate. + """ + + o_project_statuses = other.project_statuses + m_project_statuses = self.project_statuses + if o_project_statuses is None and m_project_statuses is None: + raise ValueError("Both statuses are not assigned to a project.") + + missing_status = None + if o_project_statuses is None: + missing_status = other + elif m_project_statuses is None: + missing_status = self + if missing_status is not None: + raise ValueError( + "Status '{}' is not assigned to a project.".format( + missing_status.name)) + if m_project_statuses is not o_project_statuses: + raise ValueError( + "Statuse are assigned to different projects." + " Cannot execute move." + ) + + def move_before(self, other): + """Move status before other status. + + Args: + other (ProjectStatus): Status to move before. + """ + + self._validate_other_p_statuses(other) + self._project_statuses.set_status_index(self, other.index) + + def move_after(self, other): + """Move status after other status. + + Args: + other (ProjectStatus): Status to move after. + """ + + self._validate_other_p_statuses(other) + self._project_statuses.set_status_index(self, other.index + 1) + + def to_data(self): + """Convert status to data. + + Returns: + dict[str, str]: Status data. + """ + + output = { + "name": self.name, + "shortName": self.short_name, + "state": self.state, + "icon": self.icon, + "color": self.color, + } + if ( + not self._is_new + and self._original_name + and self.name != self._original_name + ): + output["original_name"] = self._original_name + return output + + @classmethod + def from_data(cls, data, index=None, project_statuses=None): + """Create project status from data. + + Args: + data (dict[str, str]): Status data. + index (Optional[int]): Status index. + project_statuses (Optional[ProjectStatuses]): Project statuses + object which wraps the status for a project. + """ + + return cls( + data["name"], + data.get("shortName", data.get("short_name")), + data.get("state"), + data.get("icon"), + data.get("color"), + index=index, + project_statuses=project_statuses + ) + + +class _ProjectStatuses: + """Wrapper for project statuses. + + Supports basic methods to add, change or remove statuses from a project. + + To add new statuses use 'create' or 'add_status' methods. To change + statuses receive them by one of the getter methods and change their + values. + + Todos: + Validate if statuses are duplicated. 
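+
+    Example:
+        An illustrative sketch; 'statuses' is assumed to be the
+        '_ProjectStatuses' object of an existing project entity::
+
+            status = statuses.create(
+                "In progress", short_name="PRG", state="in_progress"
+            )
+            assert statuses.get("In progress") is status
+            statuses.remove_by_name("In progress")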
+ """ + + def __init__(self, statuses): + self._statuses = [ + ProjectStatus.from_data(status, idx, self) + for idx, status in enumerate(statuses) + ] + self._orig_status_length = len(self._statuses) + self._set_called = False + + def __len__(self): + return len(self._statuses) + + def __iter__(self): + """Iterate over statuses. + + Yields: + ProjectStatus: Project status. + """ + + for status in self._statuses: + yield status + + def create( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + ): + """Create project status. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + + Returns: + ProjectStatus: Created project status. + """ + + status = ProjectStatus( + name, short_name, state, icon, color, is_new=True + ) + self.append(status) + return status + + def lock(self): + """Lock statuses. + + Changes were commited and current values are now the original values. + """ + + self._orig_status_length = len(self._statuses) + self._set_called = False + for status in self._statuses: + status.lock() + + def to_data(self): + """Convert to project statuses data.""" + + return [ + status.to_data() + for status in self._statuses + ] + + def set(self, statuses): + """Explicitly override statuses. + + This method does not handle if statuses changed or not. + + Args: + statuses (list[dict[str, str]]): List of statuses data. + """ + + self._set_called = True + self._statuses = [ + ProjectStatus.from_data(status, idx, self) + for idx, status in enumerate(statuses) + ] + + @property + def changed(self): + """Statuses have changed. + + Returns: + bool: True if statuses changed, False otherwise. + """ + + if self._set_called: + return True + + # Check if status length changed + # - when all statuses are removed it is a changed + if self._orig_status_length != len(self._statuses): + return True + # Go through all statuses and check if any of them changed + for status in self._statuses: + if status.changed: + return True + return False + + def get(self, name, default=None): + """Get status by name. + + Args: + name (str): Status name. + default (Any): Default value of status is not found. + + Returns: + Union[ProjectStatus, Any]: Status or default value. + """ + + return next( + ( + status + for status in self._statuses + if status.name == name + ), + default + ) + + get_status_by_name = get + + def index(self, status, **kwargs): + """Get status index. + + Args: + status (ProjectStatus): Status to get index of. + default (Optional[Any]): Default value if status is not found. + + Returns: + Union[int, Any]: Status index. + + Raises: + ValueError: If status is not found and default value is not + defined. + """ + + output = next( + ( + idx + for idx, st in enumerate(self._statuses) + if st is status + ), + None + ) + if output is not None: + return output + + if "default" in kwargs: + return kwargs["default"] + raise ValueError("Status '{}' not found".format(status.name)) + + def get_status_by_slugified_name(self, name): + """Get status by slugified name. + + Args: + name (str): Status name. Is slugified before search. + + Returns: + Union[ProjectStatus, None]: Status or None if not found. 
+ """ + + slugified_name = ProjectStatus.slugify_name(name) + return next( + ( + status + for status in self._statuses + if status.slugified_name == slugified_name + ), + None + ) + + def remove_by_name(self, name, ignore_missing=False): + """Remove status by name. + + Args: + name (str): Status name. + ignore_missing (Optional[bool]): If True, no error is raised if + status is not found. + + Returns: + ProjectStatus: Removed status. + """ + + matching_status = self.get(name) + if matching_status is None: + if ignore_missing: + return + raise ValueError( + "Status '{}' not found in project".format(name)) + return self.remove(matching_status) + + def remove(self, status, ignore_missing=False): + """Remove status. + + Args: + status (ProjectStatus): Status to remove. + ignore_missing (Optional[bool]): If True, no error is raised if + status is not found. + + Returns: + Union[ProjectStatus, None]: Removed status. + """ + + index = self.index(status, default=None) + if index is None: + if ignore_missing: + return None + raise ValueError("Status '{}' not in project".format(status)) + + return self.pop(index) + + def pop(self, index): + """Remove status by index. + + Args: + index (int): Status index. + + Returns: + ProjectStatus: Removed status. + """ + + status = self._statuses.pop(index) + status.unset_project_statuses(self) + for st in self._statuses[index:]: + st.set_index(st.index - 1, from_parent=True) + return status + + def insert(self, index, status): + """Insert status at index. + + Args: + index (int): Status index. + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + if not isinstance(status, ProjectStatus): + status = ProjectStatus.from_data(status) + + start_index = index + end_index = len(self._statuses) + 1 + matching_index = self.index(status, default=None) + if matching_index is not None: + if matching_index == index: + status.set_index(index, from_parent=True) + return + + self._statuses.pop(matching_index) + if matching_index < index: + start_index = matching_index + end_index = index + 1 + else: + end_index -= 1 + + status.set_project_statuses(self) + self._statuses.insert(index, status) + for idx, st in enumerate(self._statuses[start_index:end_index]): + st.set_index(start_index + idx, from_parent=True) + return status + + def append(self, status): + """Add new status to the end of the list. + + Args: + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + return self.insert(len(self._statuses), status) + + def set_status_index(self, status, index): + """Set status index. + + Args: + status (ProjectStatus): Status to set index. + index (int): New status index. + """ + + return self.insert(index, status) + + +class ProjectEntity(BaseEntity): + """Entity representing project on AYON server. + + Args: + project_code (str): Project code. + library (bool): Is project library project. + folder_types (list[dict[str, Any]]): Folder types definition. + task_types (list[dict[str, Any]]): Task types definition. + entity_id (Optional[str]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. 
+ active (bool): Is entity active. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + """ + + entity_type = "project" + parent_entity_types = [] + # TODO These are hardcoded but maybe should be used from server??? + default_folder_type_icon = "folder" + default_task_type_icon = "task_alt" + + def __init__( + self, + project_code, + library, + folder_types, + task_types, + statuses, + *args, + **kwargs + ): + super(ProjectEntity, self).__init__(*args, **kwargs) + + self._project_code = project_code + self._library_project = library + self._folder_types = folder_types + self._task_types = task_types + self._statuses_obj = _ProjectStatuses(statuses) + + self._orig_project_code = project_code + self._orig_library_project = library + self._orig_folder_types = copy.deepcopy(folder_types) + self._orig_task_types = copy.deepcopy(task_types) + self._orig_statuses = copy.deepcopy(statuses) + + def _prepare_entity_id(self, entity_id): + if entity_id != self.project_name: + raise ValueError( + "Unexpected entity id value \"{}\". Expected \"{}\"".format( + entity_id, self.project_name)) + return entity_id + + def get_parent(self, *args, **kwargs): + return None + + def set_parent(self, parent): + raise ValueError( + "Parent of project cannot be set to {}".format(parent) + ) + + parent = property(get_parent, set_parent) + + def get_orig_folder_types(self): + return copy.deepcopy(self._orig_folder_types) + + def get_folder_types(self): + return copy.deepcopy(self._folder_types) + + def set_folder_types(self, folder_types): + new_folder_types = [] + for folder_type in folder_types: + if "icon" not in folder_type: + folder_type["icon"] = self.default_folder_type_icon + new_folder_types.append(folder_type) + self._folder_types = new_folder_types + + def get_orig_task_types(self): + return copy.deepcopy(self._orig_task_types) + + def get_task_types(self): + return copy.deepcopy(self._task_types) + + def set_task_types(self, task_types): + new_task_types = [] + for task_type in task_types: + if "icon" not in task_type: + task_type["icon"] = self.default_task_type_icon + new_task_types.append(task_type) + self._task_types = new_task_types + + def get_orig_statuses(self): + return copy.deepcopy(self._orig_statuses) + + def get_statuses(self): + return self._statuses_obj + + def set_statuses(self, statuses): + self._statuses_obj.set(statuses) + + folder_types = property(get_folder_types, set_folder_types) + task_types = property(get_task_types, set_task_types) + statuses = property(get_statuses, set_statuses) + + def lock(self): + super(ProjectEntity, self).lock() + self._orig_folder_types = copy.deepcopy(self._folder_types) + self._orig_task_types = copy.deepcopy(self._task_types) + self._statuses_obj.lock() + + @property + def changes(self): + changes = self._get_default_changes() + if self._orig_folder_types != self._folder_types: + changes["folderTypes"] = self.get_folder_types() + + if self._orig_task_types != self._task_types: + changes["taskTypes"] = self.get_task_types() + + if self._statuses_obj.changed: + changes["statuses"] = self._statuses_obj.to_data() + + return changes + + @classmethod + def from_entity_data(cls, project, entity_hub): + return cls( + project["code"], + parent_id=PROJECT_PARENT_ID, + entity_id=project["name"], + library=project["library"], + folder_types=project["folderTypes"], + task_types=project["taskTypes"], + 
+            statuses=project["statuses"],
name=project["name"], + attribs=project["ownAttrib"], + data=project["data"], + active=project["active"], + entity_hub=entity_hub + ) + + def to_create_body_data(self): + raise NotImplementedError( + "ProjectEntity does not support conversion to entity data" + ) + + +class FolderEntity(BaseEntity): + """Entity representing a folder on AYON server. + + Args: + folder_type (str): Type of folder. Folder type must be available in + config of project folder types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + label (Optional[str]): Folder label. + path (Optional[str]): Folder path. Path consist of all parent names + with slash('/') used as separator. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + """ + + entity_type = "folder" + parent_entity_types = ["folder", "project"] + + def __init__(self, folder_type, *args, label=None, path=None, **kwargs): + super(FolderEntity, self).__init__(*args, **kwargs) + # Autofill project as parent of folder if is not yet set + # - this can be guessed only if folder was just created + if self.created and self._parent_id is UNKNOWN_VALUE: + self._parent_id = self.project_name + + self._folder_type = folder_type + self._label = label + + self._orig_folder_type = folder_type + self._orig_label = label + # Know if folder has any products + # - is used to know if folder allows hierarchy changes + self._has_published_content = False + self._path = path + + def get_folder_type(self): + return self._folder_type + + def set_folder_type(self, folder_type): + self._folder_type = folder_type + + folder_type = property(get_folder_type, set_folder_type) + + def get_label(self): + return self._label + + def set_label(self, label): + self._label = label + + label = property(get_label, set_label) + + def get_path(self, dynamic_value=True): + if not dynamic_value: + return self._path + + if self._path is None: + parent = self.parent + path = self.name + if parent.entity_type == "folder": + parent_path = parent.path + path = "/".join([parent_path, path]) + self._path = path + return self._path + + def reset_path(self): + self._path = None + self._entity_hub.folder_path_reseted(self.id) + + path = property(get_path) + + def get_has_published_content(self): + return self._has_published_content + + def set_has_published_content(self, has_published_content): + if self._has_published_content is has_published_content: + return + + self._has_published_content = has_published_content + # Reset immutable cache of parents + self._entity_hub.reset_immutable_for_hierarchy_cache(self.id) + + has_published_content = property( + get_has_published_content, set_has_published_content + ) + + @property + def _immutable_for_hierarchy(self): + if self.has_published_content: + return True + return None + + def lock(self): + super(FolderEntity, self).lock() + self._orig_folder_type = self._folder_type + + @property + def changes(self): + changes = self._get_default_changes() + + if self._orig_parent_id != self._parent_id: + parent_id = self._parent_id + if parent_id == self.project_name: + parent_id = None + changes["parentId"] = 
parent_id + + if self._orig_folder_type != self._folder_type: + changes["folderType"] = self._folder_type + + label = self._label + if self._name == label: + label = None + + if label != self._orig_label: + changes["label"] = label + + return changes + + @classmethod + def from_entity_data(cls, folder, entity_hub): + parent_id = folder["parentId"] + if parent_id is None: + parent_id = entity_hub.project_entity.id + return cls( + folder["folderType"], + label=folder["label"], + path=folder["path"], + entity_id=folder["id"], + parent_id=parent_id, + name=folder["name"], + data=folder.get("data"), + attribs=folder["ownAttrib"], + active=folder["active"], + thumbnail_id=folder["thumbnailId"], + created=False, + entity_hub=entity_hub + ) + + def to_create_body_data(self): + parent_id = self._parent_id + if parent_id is UNKNOWN_VALUE: + raise ValueError("Folder does not have set 'parent_id'") + + if parent_id == self.project_name: + parent_id = None + + if not self.name or self.name is UNKNOWN_VALUE: + raise ValueError("Folder does not have set 'name'") + + output = { + "name": self.name, + "folderType": self.folder_type, + "parentId": parent_id, + } + attrib = self.attribs.to_dict() + if attrib: + output["attrib"] = attrib + + if self.active is not UNKNOWN_VALUE: + output["active"] = self.active + + if self.thumbnail_id is not UNKNOWN_VALUE: + output["thumbnailId"] = self.thumbnail_id + + if self._entity_hub.allow_data_changes: + output["data"] = self._data + return output + + +class TaskEntity(BaseEntity): + """Entity representing a task on AYON server. + + Args: + task_type (str): Type of task. Task type must be available in config + of project task types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Task label. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. 
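+
+    Example:
+        A minimal sketch; assumes 'hub' is an initialized 'EntityHub',
+        'folder' is an existing folder entity and 'Modeling' is a task
+        type configured on the project::
+
+            task = TaskEntity(
+                "Modeling",
+                name="modeling",
+                parent_id=folder.id,
+                entity_hub=hub,
+            )
+            task.label = "Modeling"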
+ """ + + entity_type = "task" + parent_entity_types = ["folder"] + + def __init__(self, task_type, *args, label=None, **kwargs): + super(TaskEntity, self).__init__(*args, **kwargs) + + self._task_type = task_type + self._label = label + + self._orig_task_type = task_type + self._orig_label = label + + self._children_ids = set() + + def lock(self): + super(TaskEntity, self).lock() + self._orig_task_type = self._task_type + + def get_task_type(self): + return self._task_type + + def set_task_type(self, task_type): + self._task_type = task_type + + task_type = property(get_task_type, set_task_type) + + def get_label(self): + return self._label + + def set_label(self, label): + self._label = label + + label = property(get_label, set_label) + + def add_child(self, child): + raise ValueError("Task does not support to add children") + + @property + def changes(self): + changes = self._get_default_changes() + + if self._orig_parent_id != self._parent_id: + changes["folderId"] = self._parent_id + + if self._orig_task_type != self._task_type: + changes["taskType"] = self._task_type + + label = self._label + if self._name == label: + label = None + + if label != self._orig_label: + changes["label"] = label + + return changes + + @classmethod + def from_entity_data(cls, task, entity_hub): + return cls( + task["taskType"], + entity_id=task["id"], + label=task["label"], + parent_id=task["folderId"], + name=task["name"], + data=task.get("data"), + attribs=task["ownAttrib"], + active=task["active"], + created=False, + entity_hub=entity_hub + ) + + def to_create_body_data(self): + if self.parent_id is UNKNOWN_VALUE: + raise ValueError("Task does not have set 'parent_id'") + + output = { + "name": self.name, + "taskType": self.task_type, + "folderId": self.parent_id, + "attrib": self.attribs.to_dict(), + } + attrib = self.attribs.to_dict() + if attrib: + output["attrib"] = attrib + + if self.active is not UNKNOWN_VALUE: + output["active"] = self.active + + if ( + self._entity_hub.allow_data_changes + and self._data is not UNKNOWN_VALUE + ): + output["data"] = self._data + return output diff --git a/openpype/vendor/python/common/ayon_api/events.py b/openpype/vendor/python/common/ayon_api/events.py new file mode 100644 index 0000000000..aa256f6cfc --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/events.py @@ -0,0 +1,52 @@ +import copy + + +class ServerEvent(object): + def __init__( + self, + topic, + sender=None, + event_hash=None, + project_name=None, + username=None, + dependencies=None, + description=None, + summary=None, + payload=None, + finished=True, + store=True, + ): + if dependencies is None: + dependencies = [] + if payload is None: + payload = {} + if summary is None: + summary = {} + + self.topic = topic + self.sender = sender + self.event_hash = event_hash + self.project_name = project_name + self.username = username + self.dependencies = dependencies + self.description = description + self.summary = summary + self.payload = payload + self.finished = finished + self.store = store + + def to_data(self): + return { + "topic": self.topic, + "sender": self.sender, + "hash": self.event_hash, + "project": self.project_name, + "user": self.username, + "dependencies": copy.deepcopy(self.dependencies), + "description": self.description, + "description": self.description, + "summary": copy.deepcopy(self.summary), + "payload": self.payload, + "finished": self.finished, + "store": self.store + } diff --git a/openpype/vendor/python/common/ayon_api/exceptions.py 
b/openpype/vendor/python/common/ayon_api/exceptions.py new file mode 100644 index 0000000000..db4917e90a --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/exceptions.py @@ -0,0 +1,107 @@ +import copy + + +class UrlError(Exception): + """Url cannot be parsed as url. + + Exception may contain hints of possible fixes of url that can be used in + UI if needed. + """ + + def __init__(self, message, title, hints=None): + if hints is None: + hints = [] + + self.title = title + self.hints = hints + super(UrlError, self).__init__(message) + + +class ServerError(Exception): + pass + + +class UnauthorizedError(ServerError): + pass + + +class AuthenticationError(ServerError): + pass + + +class ServerNotReached(ServerError): + pass + + +class RequestError(Exception): + def __init__(self, message, response): + self.response = response + super(RequestError, self).__init__(message) + + +class HTTPRequestError(RequestError): + pass + + +class GraphQlQueryFailed(Exception): + def __init__(self, errors, query, variables): + if variables is None: + variables = {} + + error_messages = [] + for error in errors: + msg = error["message"] + path = error.get("path") + if path: + msg += " on item '{}'".format("/".join(path)) + locations = error.get("locations") + if locations: + _locations = [ + "Line {} Column {}".format( + location["line"], location["column"] + ) + for location in locations + ] + + msg += " ({})".format(" and ".join(_locations)) + error_messages.append(msg) + + message = "GraphQl query Failed" + if error_messages: + message = "{}: {}".format(message, " | ".join(error_messages)) + + self.errors = errors + self.query = query + self.variables = copy.deepcopy(variables) + super(GraphQlQueryFailed, self).__init__(message) + + +class MissingEntityError(Exception): + pass + + +class ProjectNotFound(MissingEntityError): + def __init__(self, project_name, message=None): + if not message: + message = "Project \"{}\" was not found".format(project_name) + self.project_name = project_name + super(ProjectNotFound, self).__init__(message) + + +class FolderNotFound(MissingEntityError): + def __init__(self, project_name, folder_id, message=None): + self.project_name = project_name + self.folder_id = folder_id + if not message: + message = ( + "Folder with id \"{}\" was not found in project \"{}\"" + ).format(folder_id, project_name) + super(FolderNotFound, self).__init__(message) + + +class FailedOperations(Exception): + pass + + +class FailedServiceInit(Exception): + pass diff --git a/openpype/vendor/python/common/ayon_api/graphql.py b/openpype/vendor/python/common/ayon_api/graphql.py new file mode 100644 index 0000000000..854f207a00 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/graphql.py @@ -0,0 +1,983 @@ +import copy +import numbers +from abc import ABCMeta, abstractmethod + +import six + +from .exceptions import GraphQlQueryFailed + +FIELD_VALUE = object() + + +def fields_to_dict(fields): + if not fields: + return None + + output = {} + for field in fields: + hierarchy = field.split(".") + last = hierarchy.pop(-1) + value = output + for part in hierarchy: + if value is FIELD_VALUE: + break + + if part not in value: + value[part] = {} + value = value[part] + + if value is not FIELD_VALUE: + value[last] = FIELD_VALUE + return output + + +class QueryVariable(object): + """Object representing single varible used in GraphQlQuery. + + Variable definition is in GraphQl query header but it's value is used + in fields. + + Args: + variable_name (str): Name of variable in query. 
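+
+    Example:
+        Variables are created through 'GraphQlQuery.add_variable' rather
+        than instantiated directly; an illustrative sketch::
+
+            query = GraphQlQuery("FoldersQuery")
+            project_name_var = query.add_variable("projectName", "String!")
+            project_field = query.add_field("project")
+            project_field.set_filter("name", project_name_var)
+            query.set_variable_value("projectName", "my_project")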
+ """ + + def __init__(self, variable_name): + self._variable_name = variable_name + self._name = "${}".format(variable_name) + + @property + def name(self): + """Name used in field filter.""" + + return self._name + + @property + def variable_name(self): + """Name of variable in query definition.""" + + return self._variable_name + + def __hash__(self): + return self._name.__hash__() + + def __str__(self): + return self._name + + def __format__(self, *args, **kwargs): + return self._name.__format__(*args, **kwargs) + + +class GraphQlQuery: + """GraphQl query which can have fields to query. + + Single use object which can be used only for one query. Object and children + objects keep track about paging and progress. + + Args: + name (str): Name of query. + """ + + offset = 2 + + def __init__(self, name): + self._name = name + self._variables = {} + self._children = [] + self._has_multiple_edge_fields = None + + @property + def indent(self): + """Indentation for preparation of query string. + + Returns: + int: Ident spaces. + """ + + return 0 + + @property + def child_indent(self): + """Indentation for preparation of query string used by children. + + Returns: + int: Ident spaces for children. + """ + + return self.indent + + @property + def need_query(self): + """Still need query from server. + + Needed for edges which use pagination. + + Returns: + bool: If still need query from server. + """ + + for child in self._children: + if child.need_query: + return True + return False + + @property + def has_multiple_edge_fields(self): + if self._has_multiple_edge_fields is None: + edge_counter = 0 + for child in self._children: + edge_counter += child.sum_edge_fields(2) + if edge_counter > 1: + break + self._has_multiple_edge_fields = edge_counter > 1 + + return self._has_multiple_edge_fields + + def add_variable(self, key, value_type, value=None): + """Add variable to query. + + Args: + key (str): Variable name. + value_type (str): Type of expected value in variables. This is + graphql type e.g. "[String!]", "Int", "Boolean", etc. + value (Any): Default value for variable. Can be changed later. + + Returns: + QueryVariable: Created variable object. + + Raises: + KeyError: If variable was already added before. + """ + + if key in self._variables: + raise KeyError( + "Variable \"{}\" was already set with type {}.".format( + key, value_type + ) + ) + + variable = QueryVariable(key) + self._variables[key] = { + "type": value_type, + "variable": variable, + "value": value + } + return variable + + def get_variable(self, key): + """Variable object. + + Args: + key (str): Variable name added to headers. + + Returns: + QueryVariable: Variable object used in query string. + """ + + return self._variables[key]["variable"] + + def get_variable_value(self, key, default=None): + """Get Current value of variable. + + Args: + key (str): Variable name. + default (Any): Default value if variable is available. + + Returns: + Any: Variable value. + """ + + variable_item = self._variables.get(key) + if variable_item: + return variable_item["value"] + return default + + def set_variable_value(self, key, value): + """Set value for variable. + + Args: + key (str): Variable name under which the value is stored. + value (Any): Variable value used in query. Variable is not used + if value is 'None'. + """ + + self._variables[key]["value"] = value + + def get_variables_values(self): + """Calculate variable values used that should be used in query. + + Variables with value set to 'None' are skipped. 
+ + Returns: + Dict[str, Any]: Variable values by their name. + """ + + output = {} + for key, item in self._variables.items(): + value = item["value"] + if value is not None: + output[key] = item["value"] + + return output + + def add_obj_field(self, field): + """Add field object to children. + + Args: + field (BaseGraphQlQueryField): Add field to query children. + """ + + if field in self._children: + return + + self._children.append(field) + field.set_parent(self) + + def add_field_with_edges(self, name): + """Add field with edges to query. + + Args: + name (str): Field name e.g. 'tasks'. + + Returns: + GraphQlQueryEdgeField: Created field object. + """ + + item = GraphQlQueryEdgeField(name, self) + self.add_obj_field(item) + return item + + def add_field(self, name): + """Add field to query. + + Args: + name (str): Field name e.g. 'id'. + + Returns: + GraphQlQueryField: Created field object. + """ + + item = GraphQlQueryField(name, self) + self.add_obj_field(item) + return item + + def calculate_query(self): + """Calculate query string which is sent to server. + + Returns: + str: GraphQl string with variables and headers. + + Raises: + ValueError: Query has no fiels. + """ + + if not self._children: + raise ValueError("Missing fields to query") + + variables = [] + for item in self._variables.values(): + if item["value"] is None: + continue + + variables.append( + "{}: {}".format(item["variable"], item["type"]) + ) + + variables_str = "" + if variables: + variables_str = "({})".format(",".join(variables)) + header = "query {}{}".format(self._name, variables_str) + + output = [] + output.append(header + " {") + for field in self._children: + output.append(field.calculate_query()) + output.append("}") + + return "\n".join(output) + + def parse_result(self, data, output, progress_data): + """Parse data from response for output. + + Output is stored to passed 'output' variable. That's because of paging + during which objects must have access to both new and previous values. + + Args: + data (Dict[str, Any]): Data received using calculated query. + output (Dict[str, Any]): Where parsed data are stored. + """ + + if not data: + return + + for child in self._children: + child.parse_result(data, output, progress_data) + + def query(self, con): + """Do a query from server. + + Args: + con (ServerAPI): Connection to server with 'query' method. + + Returns: + Dict[str, Any]: Parsed output from GraphQl query. + """ + + progress_data = {} + output = {} + while self.need_query: + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql( + query_str, + self.get_variables_values() + ) + if response.errors: + raise GraphQlQueryFailed(response.errors, query_str, variables) + self.parse_result(response.data["data"], output, progress_data) + + return output + + def continuous_query(self, con): + """Do a query from server. + + Args: + con (ServerAPI): Connection to server with 'query' method. + + Returns: + Dict[str, Any]: Parsed output from GraphQl query. 
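+
+        Example:
+            An illustrative sketch; assumes 'query' was built by e.g.
+            'folders_graphql_query' and 'con' exposes 'query_graphql'::
+
+                for partial_output in query.continuous_query(con):
+                    folders = partial_output["project"]["folders"]
+                    print(len(folders))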
+ """ + + progress_data = {} + if self.has_multiple_edge_fields: + output = {} + while self.need_query: + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql(query_str, variables) + if response.errors: + raise GraphQlQueryFailed( + response.errors, query_str, variables + ) + self.parse_result(response.data["data"], output, progress_data) + + yield output + + else: + while self.need_query: + output = {} + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql(query_str, variables) + if response.errors: + raise GraphQlQueryFailed( + response.errors, query_str, variables + ) + + self.parse_result(response.data["data"], output, progress_data) + + yield output + + +@six.add_metaclass(ABCMeta) +class BaseGraphQlQueryField(object): + """Field in GraphQl query. + + Args: + name (str): Name of field. + parent (Union[BaseGraphQlQueryField, GraphQlQuery]): Parent object of a + field. + """ + + def __init__(self, name, parent): + if isinstance(parent, GraphQlQuery): + query_item = parent + else: + query_item = parent.query_item + + self._name = name + self._parent = parent + + self._filters = {} + + self._children = [] + # Value is changed on first parse of result + self._need_query = True + + self._query_item = query_item + + self._path = None + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.path) + + def add_variable(self, key, value_type, value=None): + """Add variable to query. + + Args: + key (str): Variable name. + value_type (str): Type of expected value in variables. This is + graphql type e.g. "[String!]", "Int", "Boolean", etc. + value (Any): Default value for variable. Can be changed later. + + Returns: + QueryVariable: Created variable object. + + Raises: + KeyError: If variable was already added before. + """ + + return self._parent.add_variable(key, value_type, value) + + def get_variable(self, key): + """Variable object. + + Args: + key (str): Variable name added to headers. + + Returns: + QueryVariable: Variable object used in query string. + """ + + return self._parent.get_variable(key) + + @property + def need_query(self): + """Still need query from server. + + Needed for edges which use pagination. Look into children values too. + + Returns: + bool: If still need query from server. + """ + + if self._need_query: + return True + + for child in self._children_iter(): + if child.need_query: + return True + return False + + def _children_iter(self): + """Iterate over all children fields of object. + + Returns: + Iterator[BaseGraphQlQueryField]: Children fields. + """ + + for child in self._children: + yield child + + def sum_edge_fields(self, max_limit=None): + """Check how many edge fields query has. + + In case there are multiple edge fields or are nested the query can't + yield mid cursor results. + + Args: + max_limit (int): Skip rest of counting if counter is bigger then + entered number. 
+ + Returns: + int: Counter edge fields + """ + + counter = 0 + if isinstance(self, GraphQlQueryEdgeField): + counter = 1 + + for child in self._children_iter(): + counter += child.sum_edge_fields(max_limit) + if max_limit is not None and counter >= max_limit: + break + return counter + + @property + def offset(self): + return self._query_item.offset + + @property + def indent(self): + return self._parent.child_indent + self.offset + + @property + @abstractmethod + def child_indent(self): + pass + + @property + def query_item(self): + return self._query_item + + @property + @abstractmethod + def has_edges(self): + pass + + @property + def child_has_edges(self): + for child in self._children_iter(): + if child.has_edges or child.child_has_edges: + return True + return False + + @property + def path(self): + """Field path for debugging purposes. + + Returns: + str: Field path in query. + """ + + if self._path is None: + if isinstance(self._parent, GraphQlQuery): + path = self._name + else: + path = "/".join((self._parent.path, self._name)) + self._path = path + return self._path + + def reset_cursor(self): + for child in self._children_iter(): + child.reset_cursor() + + def get_variable_value(self, *args, **kwargs): + return self._query_item.get_variable_value(*args, **kwargs) + + def set_variable_value(self, *args, **kwargs): + return self._query_item.set_variable_value(*args, **kwargs) + + def set_filter(self, key, value): + self._filters[key] = value + + def has_filter(self, key): + return key in self._filters + + def remove_filter(self, key): + self._filters.pop(key, None) + + def set_parent(self, parent): + if self._parent is parent: + return + self._parent = parent + parent.add_obj_field(self) + + def add_obj_field(self, field): + if field in self._children: + return + + self._children.append(field) + field.set_parent(self) + + def add_field_with_edges(self, name): + item = GraphQlQueryEdgeField(name, self) + self.add_obj_field(item) + return item + + def add_field(self, name): + item = GraphQlQueryField(name, self) + self.add_obj_field(item) + return item + + def _filter_value_to_str(self, value): + if isinstance(value, QueryVariable): + if self.get_variable_value(value.variable_name) is None: + return None + return str(value) + + if isinstance(value, numbers.Number): + return str(value) + + if isinstance(value, six.string_types): + return '"{}"'.format(value) + + if isinstance(value, (list, set, tuple)): + return "[{}]".format( + ", ".join( + self._filter_value_to_str(item) + for item in iter(value) + ) + ) + raise TypeError( + "Unknown type to convert '{}'".format(str(type(value))) + ) + + def get_filters(self): + """Receive filters for item. + + By default just use copy of set filters. + + Returns: + Dict[str, Any]: Fields filters. 
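+
+        Example:
+            Subclasses may extend the returned filters; an illustrative
+            override adding pagination, as 'GraphQlQueryEdgeField' does
+            ('MyEdgeField' is a hypothetical subclass)::
+
+                class MyEdgeField(GraphQlQueryField):
+                    def get_filters(self):
+                        filters = super(MyEdgeField, self).get_filters()
+                        filters["first"] = 300
+                        return filters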
+ """ + + return copy.deepcopy(self._filters) + + def _filters_to_string(self): + filters = self.get_filters() + if not filters: + return "" + + filter_items = [] + for key, value in filters.items(): + string_value = self._filter_value_to_str(value) + if string_value is None: + continue + + filter_items.append("{}: {}".format(key, string_value)) + + if not filter_items: + return "" + return "({})".format(", ".join(filter_items)) + + def _fake_children_parse(self): + """Mark children as they don't need query.""" + + for child in self._children_iter(): + child.parse_result({}, {}, {}) + + @abstractmethod + def calculate_query(self): + pass + + @abstractmethod + def parse_result(self, data, output, progress_data): + pass + + +class GraphQlQueryField(BaseGraphQlQueryField): + has_edges = False + + @property + def child_indent(self): + return self.indent + + def parse_result(self, data, output, progress_data): + if not isinstance(data, dict): + raise TypeError("{} Expected 'dict' type got '{}'".format( + self._name, str(type(data)) + )) + + self._need_query = False + value = data.get(self._name) + if value is None: + self._fake_children_parse() + if self._name in data: + output[self._name] = None + return + + if not self._children: + output[self._name] = value + return + + output_value = output.get(self._name) + if isinstance(value, dict): + if output_value is None: + output_value = {} + output[self._name] = output_value + + for child in self._children: + child.parse_result(value, output_value, progress_data) + return + + if output_value is None: + output_value = [] + output[self._name] = output_value + + if not value: + self._fake_children_parse() + return + + diff = len(value) - len(output_value) + if diff > 0: + for _ in range(diff): + output_value.append({}) + + for idx, item in enumerate(value): + item_value = output_value[idx] + for child in self._children: + child.parse_result(item, item_value, progress_data) + + def calculate_query(self): + offset = self.indent * " " + header = "{}{}{}".format( + offset, + self._name, + self._filters_to_string() + ) + if not self._children: + return header + + output = [] + output.append(header + " {") + + output.extend([ + field.calculate_query() + for field in self._children + ]) + output.append(offset + "}") + + return "\n".join(output) + + +class GraphQlQueryEdgeField(BaseGraphQlQueryField): + has_edges = True + + def __init__(self, *args, **kwargs): + super(GraphQlQueryEdgeField, self).__init__(*args, **kwargs) + self._cursor = None + self._edge_children = [] + + @property + def child_indent(self): + offset = self.offset * 2 + return self.indent + offset + + def _children_iter(self): + for child in super(GraphQlQueryEdgeField, self)._children_iter(): + yield child + + for child in self._edge_children: + yield child + + def add_obj_field(self, field): + if field in self._edge_children: + return + + super(GraphQlQueryEdgeField, self).add_obj_field(field) + + def add_obj_edge_field(self, field): + if field in self._edge_children or field in self._children: + return + + self._edge_children.append(field) + field.set_parent(self) + + def add_edge_field(self, name): + item = GraphQlQueryField(name, self) + self.add_obj_edge_field(item) + return item + + def reset_cursor(self): + # Reset cursor only for edges + self._cursor = None + self._need_query = True + + super(GraphQlQueryEdgeField, self).reset_cursor() + + def parse_result(self, data, output, progress_data): + if not isinstance(data, dict): + raise TypeError("{} Expected 'dict' type got 
'{}'".format( + self._name, str(type(data)) + )) + + value = data.get(self._name) + if value is None: + self._fake_children_parse() + self._need_query = False + return + + if self._name in output: + node_values = output[self._name] + else: + node_values = [] + output[self._name] = node_values + + handle_cursors = self.child_has_edges + if handle_cursors: + cursor_key = self._get_cursor_key() + if cursor_key in progress_data: + nodes_by_cursor = progress_data[cursor_key] + else: + nodes_by_cursor = {} + progress_data[cursor_key] = nodes_by_cursor + + page_info = value["pageInfo"] + new_cursor = page_info["endCursor"] + self._need_query = page_info["hasNextPage"] + edges = value["edges"] + # Fake result parse + if not edges: + self._fake_children_parse() + + for edge in edges: + if not handle_cursors: + edge_value = {} + node_values.append(edge_value) + else: + edge_cursor = edge["cursor"] + edge_value = nodes_by_cursor.get(edge_cursor) + if edge_value is None: + edge_value = {} + nodes_by_cursor[edge_cursor] = edge_value + node_values.append(edge_value) + + for child in self._edge_children: + child.parse_result(edge, edge_value, progress_data) + + for child in self._children: + child.parse_result(edge["node"], edge_value, progress_data) + + if not self._need_query: + return + + change_cursor = True + for child in self._children_iter(): + if child.need_query: + change_cursor = False + + if change_cursor: + for child in self._children_iter(): + child.reset_cursor() + self._cursor = new_cursor + + def _get_cursor_key(self): + return "{}/__cursor__".format(self.path) + + def get_filters(self): + filters = super(GraphQlQueryEdgeField, self).get_filters() + + filters["first"] = 300 + if self._cursor: + filters["after"] = self._cursor + return filters + + def calculate_query(self): + if not self._children and not self._edge_children: + raise ValueError("Missing child definitions for edges {}".format( + self.path + )) + + offset = self.indent * " " + header = "{}{}{}".format( + offset, + self._name, + self._filters_to_string() + ) + + output = [] + output.append(header + " {") + + edges_offset = offset + self.offset * " " + node_offset = edges_offset + self.offset * " " + output.append(edges_offset + "edges {") + for field in self._edge_children: + output.append(field.calculate_query()) + + if self._children: + output.append(node_offset + "node {") + + for field in self._children: + output.append( + field.calculate_query() + ) + + output.append(node_offset + "}") + if self.child_has_edges: + output.append(node_offset + "cursor") + + output.append(edges_offset + "}") + + # Add page information + output.append(edges_offset + "pageInfo {") + for page_key in ( + "endCursor", + "hasNextPage", + ): + output.append(node_offset + page_key) + output.append(edges_offset + "}") + output.append(offset + "}") + + return "\n".join(output) + + +INTROSPECTION_QUERY = """ + query IntrospectionQuery { + __schema { + queryType { name } + mutationType { name } + subscriptionType { name } + types { + ...FullType + } + directives { + name + description + locations + args { + ...InputValue + } + } + } + } + fragment FullType on __Type { + kind + name + description + fields(includeDeprecated: true) { + name + description + args { + ...InputValue + } + type { + ...TypeRef + } + isDeprecated + deprecationReason + } + inputFields { + ...InputValue + } + interfaces { + ...TypeRef + } + enumValues(includeDeprecated: true) { + name + description + isDeprecated + deprecationReason + } + possibleTypes { + ...TypeRef + } + } + 
fragment InputValue on __InputValue { + name + description + type { ...TypeRef } + defaultValue + } + fragment TypeRef on __Type { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + } + } + } + } + } + } + } + } +""" diff --git a/openpype/vendor/python/common/ayon_api/graphql_queries.py b/openpype/vendor/python/common/ayon_api/graphql_queries.py new file mode 100644 index 0000000000..2435fc8a17 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/graphql_queries.py @@ -0,0 +1,493 @@ +import collections + +from .graphql import FIELD_VALUE, GraphQlQuery + + +def fields_to_dict(fields): + if not fields: + return None + + output = {} + for field in fields: + hierarchy = field.split(".") + last = hierarchy.pop(-1) + value = output + for part in hierarchy: + if value is FIELD_VALUE: + break + + if part not in value: + value[part] = {} + value = value[part] + + if value is not FIELD_VALUE: + value[last] = FIELD_VALUE + return output + + +def add_links_fields(entity_field, nested_fields): + if "links" not in nested_fields: + return + links_fields = nested_fields.pop("links") + + link_edge_fields = { + "id", + "linkType", + "projectName", + "entityType", + "entityId", + "direction", + "description", + "author", + } + if isinstance(links_fields, dict): + simple_fields = set(links_fields) + simple_variant = len(simple_fields - link_edge_fields) == 0 + else: + simple_variant = True + simple_fields = link_edge_fields + + link_field = entity_field.add_field_with_edges("links") + + link_type_var = link_field.add_variable("linkTypes", "[String!]") + link_dir_var = link_field.add_variable("linkDirection", "String!") + link_field.set_filter("linkTypes", link_type_var) + link_field.set_filter("direction", link_dir_var) + + if simple_variant: + for key in simple_fields: + link_field.add_edge_field(key) + return + + query_queue = collections.deque() + for key, value in links_fields.items(): + if key in link_edge_fields: + link_field.add_edge_field(key) + continue + query_queue.append((key, value, link_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + + +def project_graphql_query(fields): + query = GraphQlQuery("ProjectQuery") + project_name_var = query.add_variable("projectName", "String!") + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, project_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def projects_graphql_query(fields): + query = GraphQlQuery("ProjectsQuery") + projects_field = query.add_field_with_edges("projects") + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, projects_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + 
query_queue.append((k, v, field)) + return query + + +def product_types_query(fields): + query = GraphQlQuery("ProductTypes") + product_types_field = query.add_field("productTypes") + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, product_types_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + +def project_product_types_query(fields): + query = GraphQlQuery("ProjectProductTypes") + project_query = query.add_field("project") + project_name_var = query.add_variable("projectName", "String!") + project_query.set_filter("name", project_name_var) + product_types_field = project_query.add_field("productTypes") + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, product_types_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def folders_graphql_query(fields): + query = GraphQlQuery("FoldersQuery") + project_name_var = query.add_variable("projectName", "String!") + folder_ids_var = query.add_variable("folderIds", "[String!]") + parent_folder_ids_var = query.add_variable("parentFolderIds", "[String!]") + folder_paths_var = query.add_variable("folderPaths", "[String!]") + folder_names_var = query.add_variable("folderNames", "[String!]") + has_products_var = query.add_variable("folderHasProducts", "Boolean!") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + folders_field = project_field.add_field_with_edges("folders") + folders_field.set_filter("ids", folder_ids_var) + folders_field.set_filter("parentIds", parent_folder_ids_var) + folders_field.set_filter("names", folder_names_var) + folders_field.set_filter("paths", folder_paths_var) + folders_field.set_filter("hasProducts", has_products_var) + + nested_fields = fields_to_dict(fields) + add_links_fields(folders_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, folders_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def tasks_graphql_query(fields): + query = GraphQlQuery("TasksQuery") + project_name_var = query.add_variable("projectName", "String!") + task_ids_var = query.add_variable("taskIds", "[String!]") + task_names_var = query.add_variable("taskNames", "[String!]") + task_types_var = query.add_variable("taskTypes", "[String!]") + folder_ids_var = query.add_variable("folderIds", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + tasks_field = project_field.add_field_with_edges("tasks") + tasks_field.set_filter("ids", task_ids_var) + # WARNING: At moment when this been created 'names' filter is not supported + tasks_field.set_filter("names", task_names_var) + tasks_field.set_filter("taskTypes", task_types_var) + tasks_field.set_filter("folderIds", 
folder_ids_var) + + nested_fields = fields_to_dict(fields) + add_links_fields(tasks_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, tasks_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def products_graphql_query(fields): + query = GraphQlQuery("ProductsQuery") + + project_name_var = query.add_variable("projectName", "String!") + product_ids_var = query.add_variable("productIds", "[String!]") + product_names_var = query.add_variable("productNames", "[String!]") + folder_ids_var = query.add_variable("folderIds", "[String!]") + product_types_var = query.add_variable("productTypes", "[String!]") + statuses_var = query.add_variable("statuses", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + products_field = project_field.add_field_with_edges("products") + products_field.set_filter("ids", product_ids_var) + products_field.set_filter("names", product_names_var) + products_field.set_filter("folderIds", folder_ids_var) + products_field.set_filter("productTypes", product_types_var) + products_field.set_filter("statuses", statuses_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(products_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, products_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def versions_graphql_query(fields): + query = GraphQlQuery("VersionsQuery") + + project_name_var = query.add_variable("projectName", "String!") + product_ids_var = query.add_variable("productIds", "[String!]") + version_ids_var = query.add_variable("versionIds", "[String!]") + versions_var = query.add_variable("versions", "[Int!]") + hero_only_var = query.add_variable("heroOnly", "Boolean") + latest_only_var = query.add_variable("latestOnly", "Boolean") + hero_or_latest_only_var = query.add_variable( + "heroOrLatestOnly", "Boolean" + ) + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + products_field = project_field.add_field_with_edges("versions") + products_field.set_filter("ids", version_ids_var) + products_field.set_filter("productIds", product_ids_var) + products_field.set_filter("versions", versions_var) + products_field.set_filter("heroOnly", hero_only_var) + products_field.set_filter("latestOnly", latest_only_var) + products_field.set_filter("heroOrLatestOnly", hero_or_latest_only_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(products_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, products_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def representations_graphql_query(fields): + query = GraphQlQuery("RepresentationsQuery") + + project_name_var = query.add_variable("projectName", 
"String!") + repre_ids_var = query.add_variable("representationIds", "[String!]") + repre_names_var = query.add_variable("representationNames", "[String!]") + version_ids_var = query.add_variable("versionIds", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + repres_field = project_field.add_field_with_edges("representations") + repres_field.set_filter("ids", repre_ids_var) + repres_field.set_filter("versionIds", version_ids_var) + repres_field.set_filter("names", repre_names_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(repres_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, repres_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def representations_parents_qraphql_query( + version_fields, product_fields, folder_fields +): + query = GraphQlQuery("RepresentationsParentsQuery") + + project_name_var = query.add_variable("projectName", "String!") + repre_ids_var = query.add_variable("representationIds", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + repres_field = project_field.add_field_with_edges("representations") + repres_field.add_field("id") + repres_field.set_filter("ids", repre_ids_var) + version_field = repres_field.add_field("version") + + fields_queue = collections.deque() + for key, value in fields_to_dict(version_fields).items(): + fields_queue.append((key, value, version_field)) + + product_field = version_field.add_field("product") + for key, value in fields_to_dict(product_fields).items(): + fields_queue.append((key, value, product_field)) + + folder_field = product_field.add_field("folder") + for key, value in fields_to_dict(folder_fields).items(): + fields_queue.append((key, value, folder_field)) + + while fields_queue: + item = fields_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + fields_queue.append((k, v, field)) + + return query + + +def workfiles_info_graphql_query(fields): + query = GraphQlQuery("WorkfilesInfo") + project_name_var = query.add_variable("projectName", "String!") + workfiles_info_ids = query.add_variable("workfileIds", "[String!]") + task_ids_var = query.add_variable("taskIds", "[String!]") + paths_var = query.add_variable("paths", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + workfiles_field = project_field.add_field_with_edges("workfiles") + workfiles_field.set_filter("ids", workfiles_info_ids) + workfiles_field.set_filter("taskIds", task_ids_var) + workfiles_field.set_filter("paths", paths_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(workfiles_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, workfiles_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def events_graphql_query(fields): + query = GraphQlQuery("Events") + topics_var = 
query.add_variable("eventTopics", "[String!]") + projects_var = query.add_variable("projectNames", "[String!]") + states_var = query.add_variable("eventStates", "[String!]") + users_var = query.add_variable("eventUsers", "[String!]") + include_logs_var = query.add_variable("includeLogsFilter", "Boolean!") + + events_field = query.add_field_with_edges("events") + events_field.set_filter("topics", topics_var) + events_field.set_filter("projects", projects_var) + events_field.set_filter("states", states_var) + events_field.set_filter("users", users_var) + events_field.set_filter("includeLogs", include_logs_var) + + nested_fields = fields_to_dict(set(fields)) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, events_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def users_graphql_query(fields): + query = GraphQlQuery("Users") + names_var = query.add_variable("userNames", "[String!]") + + users_field = query.add_field_with_edges("users") + users_field.set_filter("names", names_var) + + nested_fields = fields_to_dict(set(fields)) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, users_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query diff --git a/openpype/vendor/python/common/ayon_api/operations.py b/openpype/vendor/python/common/ayon_api/operations.py new file mode 100644 index 0000000000..eb2ca8afe3 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/operations.py @@ -0,0 +1,775 @@ +import os +import copy +import collections +import uuid +from abc import ABCMeta, abstractmethod + +import six + +from ._api import get_server_api_connection +from .utils import create_entity_id, REMOVED_VALUE + + +def _create_or_convert_to_id(entity_id=None): + if entity_id is None: + return create_entity_id() + + # Validate if can be converted to uuid + uuid.UUID(entity_id) + return entity_id + + +def new_folder_entity( + name, + folder_type, + parent_id=None, + status=None, + tags=None, + attribs=None, + data=None, + thumbnail_id=None, + entity_id=None +): + """Create skeleton data of folder entity. + + Args: + name (str): Is considered as unique identifier of folder in project. + folder_type (str): Type of folder. + parent_id (Optional[str]): Parent folder id. + status (Optional[str]): Product status. + tags (Optional[List[str]]): List of tags. + attribs (Optional[Dict[str, Any]]): Explicitly set attributes + of folder. + data (Optional[Dict[str, Any]]): Custom folder data. Empty dictionary + is used if not passed. + thumbnail_id (Optional[str]): Thumbnail id related to folder. + entity_id (Optional[str]): Predefined id of entity. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of folder entity. 
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    if parent_id is not None:
+        parent_id = _create_or_convert_to_id(parent_id)
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "name": name,
+        # This will be ignored
+        "folderType": folder_type,
+        "parentId": parent_id,
+        "data": data,
+        "attrib": attribs,
+        "thumbnailId": thumbnail_id
+    }
+    if status:
+        output["status"] = status
+    if tags:
+        output["tags"] = tags
+    return output
+
+
+def new_product_entity(
+    name,
+    product_type,
+    folder_id,
+    status=None,
+    tags=None,
+    attribs=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of product entity.
+
+    Args:
+        name (str): Considered a unique identifier of the product
+            under its folder.
+        product_type (str): Product type.
+        folder_id (str): Parent folder id.
+        status (Optional[str]): Product status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of product.
+        data (Optional[Dict[str, Any]]): Product entity data. Empty dictionary
+            is used if not passed.
+        entity_id (Optional[str]): Predefined id of entity. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of product entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "name": name,
+        "productType": product_type,
+        "attrib": attribs,
+        "data": data,
+        "folderId": _create_or_convert_to_id(folder_id),
+    }
+    if status:
+        output["status"] = status
+    if tags:
+        output["tags"] = tags
+    return output
+
+
+def new_version_entity(
+    version,
+    product_id,
+    task_id=None,
+    thumbnail_id=None,
+    author=None,
+    status=None,
+    tags=None,
+    attribs=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of version entity.
+
+    Args:
+        version (int): Considered a unique identifier of the version
+            under its product.
+        product_id (str): Parent product id.
+        task_id (Optional[str]): Task id under which the product was created.
+        thumbnail_id (Optional[str]): Thumbnail related to version.
+        author (Optional[str]): Name of version author.
+        status (Optional[str]): Version status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of version.
+        data (Optional[Dict[str, Any]]): Version entity custom data.
+        entity_id (Optional[str]): Predefined id of entity. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of version entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "version": int(version),
+        "productId": _create_or_convert_to_id(product_id),
+        "attrib": attribs,
+        "data": data
+    }
+    if task_id:
+        output["taskId"] = task_id
+    if thumbnail_id:
+        output["thumbnailId"] = thumbnail_id
+    if author:
+        output["author"] = author
+    if tags:
+        output["tags"] = tags
+    if status:
+        output["status"] = status
+    return output
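The skeleton helpers above are plain-dict factories, so parent links can be wired together locally before anything exists on the server. A minimal sketch, assuming the vendored module is importable as `ayon_api.operations`; the folder type, names and version number are made-up example values:

```python
from ayon_api.operations import (
    new_folder_entity,
    new_product_entity,
    new_version_entity,
)

# Build a folder -> product -> version chain purely in memory; each helper
# generates an "id" when one is not passed, so children can reference parents.
folder = new_folder_entity("ch_hero", "Asset")  # "Asset" is an example type
product = new_product_entity("modelMain", "model", folder["id"])
version = new_version_entity(1, product["id"])

assert version["productId"] == product["id"]
```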
+
+
+def new_hero_version_entity(
+    version,
+    product_id,
+    task_id=None,
+    thumbnail_id=None,
+    author=None,
+    status=None,
+    tags=None,
+    attribs=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of hero version entity.
+
+    Args:
+        version (int): Considered a unique identifier of the version
+            under its product. Should match the standard version
+            if there is one.
+        product_id (str): Parent product id.
+        task_id (Optional[str]): Task id under which the product was created.
+        thumbnail_id (Optional[str]): Thumbnail related to version.
+        author (Optional[str]): Name of version author.
+        status (Optional[str]): Version status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of version.
+        data (Optional[Dict[str, Any]]): Version entity data.
+        entity_id (Optional[str]): Predefined id of entity. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of hero version entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "version": -abs(int(version)),
+        "productId": product_id,
+        "attrib": attribs,
+        "data": data
+    }
+    if task_id:
+        output["taskId"] = task_id
+    if thumbnail_id:
+        output["thumbnailId"] = thumbnail_id
+    if author:
+        output["author"] = author
+    if tags:
+        output["tags"] = tags
+    if status:
+        output["status"] = status
+    return output
+
+
+def new_representation_entity(
+    name,
+    version_id,
+    files,
+    status=None,
+    tags=None,
+    attribs=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of representation entity.
+
+    Args:
+        name (str): Representation name, considered a unique identifier
+            of the representation under its version.
+        version_id (str): Parent version id.
+        files (list[dict[str, str]]): List of files in representation.
+        status (Optional[str]): Representation status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of representation.
+        data (Optional[Dict[str, Any]]): Representation entity data.
+        entity_id (Optional[str]): Predefined id of entity. New id is created
+            if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of representation entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "versionId": _create_or_convert_to_id(version_id),
+        "files": files,
+        "name": name,
+        "data": data,
+        "attrib": attribs
+    }
+    if tags:
+        output["tags"] = tags
+    if status:
+        output["status"] = status
+    return output
+
+
+def new_workfile_info(
+    filepath,
+    task_id,
+    status=None,
+    tags=None,
+    attribs=None,
+    description=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of workfile info entity.
+
+    Workfile entity is at this moment used primarily for artist notes.
+
+    Args:
+        filepath (str): Rootless workfile filepath.
+        task_id (str): Task under which the workfile was created.
+        status (Optional[str]): Workfile status.
+        tags (Optional[List[str]]): Workfile tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes.
+        description (Optional[str]): Workfile description.
+        data (Optional[Dict[str, Any]]): Additional metadata.
+        entity_id (Optional[str]): Predefined id of entity. New id is created
+            if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of workfile info entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if "extension" not in attribs:
+        attribs["extension"] = os.path.splitext(filepath)[-1]
+
+    if description:
+        attribs["description"] = description
+
+    if not data:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "taskId": task_id,
+        "path": filepath,
+        "data": data,
+        "attrib": attribs
+    }
+    if status:
+        output["status"] = status
+
+    if tags:
+        output["tags"] = tags
+    return output
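A small sketch of `new_workfile_info` behavior shown above: the `extension` attrib is derived from the path when not given, and `description` is folded into attribs. The path and task id are placeholder values:

```python
workfile = new_workfile_info(
    "{root[work]}/demo/ep01/sh010/work/sh010_layout_v003.ma",
    task_id="<task-id>",  # placeholder, normally a real task entity id
    description="Blocking pass",
)

# "extension" was filled from the filepath by the helper itself
assert workfile["attrib"]["extension"] == ".ma"
assert workfile["attrib"]["description"] == "Blocking pass"
```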
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractOperation(object):
+    """Base operation class.
+
+    An operation represents a call into the database. The call can create,
+    change or remove data.
+
+    Args:
+        project_name (str): On which project the operation will happen.
+        entity_type (str): Type of entity on which the change happens,
+            e.g. 'folder', 'representation' etc.
+    """
+
+    def __init__(self, project_name, entity_type, session):
+        self._project_name = project_name
+        self._entity_type = entity_type
+        self._session = session
+        self._id = str(uuid.uuid4())
+
+    @property
+    def project_name(self):
+        return self._project_name
+
+    @property
+    def id(self):
+        """Identifier of operation."""
+
+        return self._id
+
+    @property
+    def entity_type(self):
+        return self._entity_type
+
+    @property
+    @abstractmethod
+    def operation_name(self):
+        """Stringified type of operation."""
+
+        pass
+
+    def to_data(self):
+        """Convert operation to data that can be converted to json or others.
+
+        Returns:
+            Dict[str, Any]: Description of operation.
+        """
+
+        return {
+            "id": self._id,
+            "entity_type": self.entity_type,
+            "project_name": self.project_name,
+            "operation": self.operation_name
+        }
+
+
+class CreateOperation(AbstractOperation):
+    """Operation to create an entity.
+
+    Args:
+        project_name (str): On which project the operation will happen.
+        entity_type (str): Type of entity on which the change happens,
+            e.g. 'folder', 'representation' etc.
+        data (Dict[str, Any]): Data of entity that will be created.
+    """
+
+    operation_name = "create"
+
+    def __init__(self, project_name, entity_type, data, session):
+        if not data:
+            data = {}
+        else:
+            data = copy.deepcopy(dict(data))
+
+        if "id" not in data:
+            data["id"] = create_entity_id()
+
+        self._data = data
+        super(CreateOperation, self).__init__(
+            project_name, entity_type, session
+        )
+
+    def __setitem__(self, key, value):
+        self.set_value(key, value)
+
+    def __getitem__(self, key):
+        return self.data[key]
+
+    def set_value(self, key, value):
+        self.data[key] = value
+
+    def get(self, key, *args, **kwargs):
+        return self.data.get(key, *args, **kwargs)
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    @property
+    def entity_id(self):
+        return self._data["id"]
+
+    @property
+    def data(self):
+        return self._data
+
+    def to_data(self):
+        output = super(CreateOperation, self).to_data()
+        output["data"] = copy.deepcopy(self.data)
+        return output
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": "create",
+            "entityType": self.entity_type,
+            "entityId": self.entity_id,
+            "data": self._data
+        }
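To make the payload shape concrete, a hedged sketch of what `to_server_operation` returns for a create; passing `session=None` works here only because serialization never touches the session:

```python
op = CreateOperation("demo_project", "folder", {"name": "Props"}, session=None)
body = op.to_server_operation()
# body == {
#     "id": "<operation uuid>",
#     "type": "create",
#     "entityType": "folder",
#     "entityId": "<generated entity id>",
#     "data": {"name": "Props", "id": "<generated entity id>"},
# }
```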
+
+
+class UpdateOperation(AbstractOperation):
+    """Operation to update an entity.
+
+    Args:
+        project_name (str): On which project the operation will happen.
+        entity_type (str): Type of entity on which the change happens,
+            e.g. 'folder', 'representation' etc.
+        entity_id (str): Identifier of an entity.
+        update_data (Dict[str, Any]): Key -> value changes that will be set
+            in the database. If a value is set to 'REMOVED_VALUE' the key
+            will be removed. Only the first level of the dictionary is
+            checked (on purpose).
+    """
+
+    operation_name = "update"
+
+    def __init__(
+        self, project_name, entity_type, entity_id, update_data, session
+    ):
+        super(UpdateOperation, self).__init__(
+            project_name, entity_type, session
+        )
+
+        self._entity_id = entity_id
+        self._update_data = update_data
+
+    @property
+    def entity_id(self):
+        return self._entity_id
+
+    @property
+    def update_data(self):
+        return self._update_data
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    def to_data(self):
+        changes = {}
+        for key, value in self._update_data.items():
+            if value is REMOVED_VALUE:
+                value = None
+            changes[key] = value
+
+        output = super(UpdateOperation, self).to_data()
+        output.update({
+            "entity_id": self.entity_id,
+            "changes": changes
+        })
+        return output
+
+    def to_server_operation(self):
+        if not self._update_data:
+            return None
+
+        update_data = {}
+        for key, value in self._update_data.items():
+            if value is REMOVED_VALUE:
+                value = None
+            update_data[key] = value
+
+        return {
+            "id": self.id,
+            "type": "update",
+            "entityType": self.entity_type,
+            "entityId": self.entity_id,
+            "data": update_data
+        }
+
+
+class DeleteOperation(AbstractOperation):
+    """Operation to delete an entity.
+
+    Args:
+        project_name (str): On which project the operation will happen.
+        entity_type (str): Type of entity on which the change happens,
+            e.g. 'folder', 'representation' etc.
+        entity_id (str): Entity id that will be removed.
+    """
+
+    operation_name = "delete"
+
+    def __init__(self, project_name, entity_type, entity_id, session):
+        self._entity_id = entity_id
+
+        super(DeleteOperation, self).__init__(
+            project_name, entity_type, session
+        )
+
+    @property
+    def entity_id(self):
+        return self._entity_id
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    def to_data(self):
+        output = super(DeleteOperation, self).to_data()
+        output["entity_id"] = self.entity_id
+        return output
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": self.operation_name,
+            "entityId": self.entity_id,
+            "entityType": self.entity_type,
+        }
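One detail worth illustrating before the session class below: `REMOVED_VALUE` (imported from `.utils` at the top of this file) marks a key for removal, and serialization converts it to `None` for the server. A sketch, again with `session=None` since serialization does not use it:

```python
update = UpdateOperation(
    "demo_project",
    "folder",
    "<folder-id>",  # placeholder entity id
    {"attrib": {"fps": 24}, "thumbnailId": REMOVED_VALUE},
    session=None,
)

# REMOVED_VALUE becomes None in the payload sent to the server
assert update.to_server_operation()["data"]["thumbnailId"] is None
```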
+
+
+class OperationsSession(object):
+    """Session storing operations that should happen in an order.
+
+    At this moment the session does not handle anything special and can be
+    considered a plain list of operations that are executed one after
+    another. Creating the same entity multiple times is not handled in any
+    way and entity values are not validated.
+
+    Each operation is bound to a single project, but one session may contain
+    operations for multiple projects.
+
+    Args:
+        con (Optional[ServerAPI]): Connection to server. Default connection
+            is used if not passed.
+    """
+
+    def __init__(self, con=None):
+        if con is None:
+            con = get_server_api_connection()
+        self._con = con
+        self._project_cache = {}
+        self._operations = []
+        self._nested_operations = collections.defaultdict(list)
+
+    @property
+    def con(self):
+        return self._con
+
+    def get_project(self, project_name):
+        if project_name not in self._project_cache:
+            self._project_cache[project_name] = self.con.get_project(
+                project_name)
+        return copy.deepcopy(self._project_cache[project_name])
+
+    def __len__(self):
+        return len(self._operations)
+
+    def add(self, operation):
+        """Add operation to be processed.
+
+        Args:
+            operation (AbstractOperation): Operation that should be
+                processed.
+        """
+        if not isinstance(
+            operation,
+            (CreateOperation, UpdateOperation, DeleteOperation)
+        ):
+            raise TypeError("Expected Operation object got {}".format(
+                str(type(operation))
+            ))
+
+        self._operations.append(operation)
+
+    def append(self, operation):
+        """Add operation to be processed.
+
+        Args:
+            operation (AbstractOperation): Operation that should be
+                processed.
+        """
+
+        self.add(operation)
+
+    def extend(self, operations):
+        """Add operations to be processed.
+
+        Args:
+            operations (List[AbstractOperation]): Operations that should be
+                processed.
+        """
+
+        for operation in operations:
+            self.add(operation)
+
+    def remove(self, operation):
+        """Remove operation."""
+
+        self._operations.remove(operation)
+
+    def clear(self):
+        """Clear all registered operations."""
+
+        self._operations = []
+
+    def to_data(self):
+        return [
+            operation.to_data()
+            for operation in self._operations
+        ]
+
+    def commit(self):
+        """Commit session operations."""
+
+        operations, self._operations = self._operations, []
+        if not operations:
+            return
+
+        operations_by_project = collections.defaultdict(list)
+        for operation in operations:
+            operations_by_project[operation.project_name].append(operation)
+
+        for project_name, operations in operations_by_project.items():
+            operations_body = []
+            for operation in operations:
+                body = operation.to_server_operation()
+                if body is not None:
+                    operations_body.append(body)
+
+            self._con.send_batch_operations(
+                project_name, operations_body, can_fail=False
+            )
+
+    def create_entity(self, project_name, entity_type, data, nested_id=None):
+        """Fast access to 'CreateOperation'.
+
+        Args:
+            project_name (str): On which project the creation happens.
+            entity_type (str): Which entity type will be created.
+            data (Dict[str, Any]): Entity data.
+            nested_id (Optional[str]): Id of the operation that triggered
+                this one. Operations can trigger suboperations, but those
+                must be added to the operations list right after their
+                parent is added.
+
+        Returns:
+            CreateOperation: Object of create operation.
+        """
+
+        operation = CreateOperation(
+            project_name, entity_type, data, self
+        )
+
+        if nested_id:
+            self._nested_operations[nested_id].append(operation)
+        else:
+            self.add(operation)
+            if operation.id in self._nested_operations:
+                self.extend(self._nested_operations.pop(operation.id))
+
+        return operation
+
+    def update_entity(
+        self, project_name, entity_type, entity_id, update_data, nested_id=None
+    ):
+        """Fast access to 'UpdateOperation'.
+
+        Returns:
+            UpdateOperation: Object of update operation.
+        """
+
+        operation = UpdateOperation(
+            project_name, entity_type, entity_id, update_data, self
+        )
+        if nested_id:
+            self._nested_operations[nested_id].append(operation)
+        else:
+            self.add(operation)
+            if operation.id in self._nested_operations:
+                self.extend(self._nested_operations.pop(operation.id))
+        return operation
+
+    def delete_entity(
+        self, project_name, entity_type, entity_id, nested_id=None
+    ):
+        """Fast access to 'DeleteOperation'.
+
+        Returns:
+            DeleteOperation: Object of delete operation.
+        """
+
+        operation = DeleteOperation(
+            project_name, entity_type, entity_id, self
+        )
+        if nested_id:
+            self._nested_operations[nested_id].append(operation)
+        else:
+            self.add(operation)
+            if operation.id in self._nested_operations:
+                self.extend(self._nested_operations.pop(operation.id))
+        return operation
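An end-to-end sketch of the session above, assuming a configured default connection and that "Folder" is a valid folder type on the target project: it queues a create, an update and a delete, then sends them as one batch per project on commit.

```python
session = OperationsSession()  # falls back to get_server_api_connection()

folder_op = session.create_entity(
    "demo_project", "folder", {"name": "Props", "folderType": "Folder"}
)
session.update_entity(
    "demo_project", "folder", folder_op.entity_id, {"attrib": {"fps": 24}}
)
session.delete_entity("demo_project", "folder", "<old-folder-id>")

session.commit()  # calls 'send_batch_operations' for each project
```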
diff --git a/openpype/vendor/python/common/ayon_api/server_api.py b/openpype/vendor/python/common/ayon_api/server_api.py
new file mode 100644
index 0000000000..511a239a83
--- /dev/null
+++ b/openpype/vendor/python/common/ayon_api/server_api.py
@@ -0,0 +1,6266 @@
+import os
+import re
+import io
+import json
+import time
+import logging
+import collections
+import platform
+import copy
+import uuid
+from contextlib import contextmanager
+try:
+    from http import HTTPStatus
+except ImportError:
+    HTTPStatus = None
+
+import requests
+try:
+    # This should be used if 'requests' have it available
+    from requests.exceptions import JSONDecodeError as RequestsJSONDecodeError
+except ImportError:
+    # Older versions of 'requests' don't have custom exception for json
+    # decode error
+    try:
+        from simplejson import JSONDecodeError as RequestsJSONDecodeError
+    except ImportError:
+        from json import JSONDecodeError as RequestsJSONDecodeError
+
+from .constants import (
+    SERVER_TIMEOUT_ENV_KEY,
+    SERVER_RETRIES_ENV_KEY,
+    DEFAULT_PRODUCT_TYPE_FIELDS,
+    DEFAULT_PROJECT_FIELDS,
+    DEFAULT_FOLDER_FIELDS,
+    DEFAULT_TASK_FIELDS,
+    DEFAULT_PRODUCT_FIELDS,
+    DEFAULT_VERSION_FIELDS,
+    DEFAULT_REPRESENTATION_FIELDS,
+    REPRESENTATION_FILES_FIELDS,
+    DEFAULT_WORKFILE_INFO_FIELDS,
+    DEFAULT_EVENT_FIELDS,
+    DEFAULT_USER_FIELDS,
+)
+from .graphql import GraphQlQuery, INTROSPECTION_QUERY
+from .graphql_queries import (
+    project_graphql_query,
+    projects_graphql_query,
+    project_product_types_query,
+    product_types_query,
+    folders_graphql_query,
+    tasks_graphql_query,
+    products_graphql_query,
+    versions_graphql_query,
+    representations_graphql_query,
+    representations_parents_qraphql_query,
+    workfiles_info_graphql_query,
+    events_graphql_query,
+    users_graphql_query,
+)
+from .exceptions import (
+    FailedOperations,
+    UnauthorizedError,
+    AuthenticationError,
+    ServerNotReached,
+    ServerError,
+    HTTPRequestError,
+)
+from .utils import (
+    RepresentationParents,
+    prepare_query_string,
+    logout_from_server,
+    create_entity_id,
+    entity_data_json_default,
+    failed_json_default,
+    TransferProgress,
+    create_dependency_package_basename,
+    ThumbnailContent,
+)
+
+PatternType = type(re.compile(""))
+JSONDecodeError = getattr(json, "JSONDecodeError", ValueError)
+# This should be collected from server schema
+PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_"
+PROJECT_NAME_REGEX = re.compile(
+    "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS)
+)
+
+VERSION_REGEX = re.compile(
+    r"(?P<major>0|[1-9]\d*)"
+    r"\.(?P<minor>0|[1-9]\d*)"
+    r"\.(?P<patch>0|[1-9]\d*)"
+    r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
+    r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
+) + + +def _get_description(response): + if HTTPStatus is None: + return str(response.orig_response) + return HTTPStatus(response.status).description + + +class RequestType: + def __init__(self, name): + self.name = name + + def __hash__(self): + return self.name.__hash__() + + +class RequestTypes: + get = RequestType("GET") + post = RequestType("POST") + put = RequestType("PUT") + patch = RequestType("PATCH") + delete = RequestType("DELETE") + + +class RestApiResponse(object): + """API Response.""" + + def __init__(self, response, data=None): + if response is None: + status_code = 500 + else: + status_code = response.status_code + self._response = response + self.status = status_code + self._data = data + + @property + def text(self): + if self._response is None: + return self.detail + return self._response.text + + @property + def orig_response(self): + return self._response + + @property + def headers(self): + if self._response is None: + return {} + return self._response.headers + + @property + def data(self): + if self._data is None: + try: + self._data = self.orig_response.json() + except RequestsJSONDecodeError: + self._data = {} + return self._data + + @property + def content(self): + if self._response is None: + return b"" + return self._response.content + + @property + def content_type(self): + return self.headers.get("Content-Type") + + @property + def detail(self): + detail = self.get("detail") + if detail: + return detail + return _get_description(self) + + @property + def status_code(self): + return self.status + + def raise_for_status(self, message=None): + try: + self._response.raise_for_status() + except requests.exceptions.HTTPError as exc: + if message is None: + message = str(exc) + raise HTTPRequestError(message, exc.response) + + def __enter__(self, *args, **kwargs): + return self._response.__enter__(*args, **kwargs) + + def __contains__(self, key): + return key in self.data + + def __repr__(self): + return "<{} [{}]>".format(self.__class__.__name__, self.status) + + def __len__(self): + return 200 <= self.status < 400 + + def __getitem__(self, key): + return self.data[key] + + def get(self, key, default=None): + data = self.data + if isinstance(data, dict): + return self.data.get(key, default) + return default + + +class GraphQlResponse: + def __init__(self, data): + self.data = data + self.errors = data.get("errors") + + def __len__(self): + if self.errors: + return 0 + return 1 + + def __repr__(self): + if self.errors: + return "<{} errors={}>".format( + self.__class__.__name__, self.errors[0]['message'] + ) + return "<{}>".format(self.__class__.__name__) + + +def fill_own_attribs(entity): + if not entity or not entity.get("attrib"): + return + + attributes = set(entity["ownAttrib"]) + + own_attrib = {} + entity["ownAttrib"] = own_attrib + + for key, value in entity["attrib"].items(): + if key not in attributes: + own_attrib[key] = None + else: + own_attrib[key] = copy.deepcopy(value) + + +class _AsUserStack: + """Handle stack of users used over server api connection in service mode. + + ServerAPI can behave as other users if it is using special API key. + + Examples: + >>> stack = _AsUserStack() + >>> stack.set_default_username("DefaultName") + >>> print(stack.username) + DefaultName + >>> with stack.as_user("Other1"): + ... print(stack.username) + ... with stack.as_user("Other2"): + ... print(stack.username) + ... print(stack.username) + ... stack.clear() + ... 
print(stack.username) + Other1 + Other2 + Other1 + None + >>> print(stack.username) + None + >>> stack.set_default_username("DefaultName") + >>> print(stack.username) + DefaultName + """ + + def __init__(self): + self._users_by_id = {} + self._user_ids = [] + self._last_user = None + self._default_user = None + + def clear(self): + self._users_by_id = {} + self._user_ids = [] + self._last_user = None + self._default_user = None + + @property + def username(self): + # Use '_user_ids' for boolean check to have ability "unset" + # default user + if self._user_ids: + return self._last_user + return self._default_user + + def get_default_username(self): + return self._default_user + + def set_default_username(self, username=None): + self._default_user = username + + default_username = property(get_default_username, set_default_username) + + @contextmanager + def as_user(self, username): + self._last_user = username + user_id = uuid.uuid4().hex + self._user_ids.append(user_id) + self._users_by_id[user_id] = username + try: + yield + finally: + self._users_by_id.pop(user_id, None) + if not self._user_ids: + return + + # First check if is the user id the last one + was_last = self._user_ids[-1] == user_id + # Remove id from variables + if user_id in self._user_ids: + self._user_ids.remove(user_id) + + if not was_last: + return + + new_last_user = None + if self._user_ids: + new_last_user = self._users_by_id.get(self._user_ids[-1]) + self._last_user = new_last_user + + +class ServerAPI(object): + """Base handler of connection to server. + + Requires url to server which is used as base for api and graphql calls. + + Login cause that a session is used + + Args: + base_url (str): Example: http://localhost:5000 + token (Optional[str]): Access token (api key) to server. + site_id (Optional[str]): Unique name of site. Should be the same when + connection is created from the same machine under same user. + client_version (Optional[str]): Version of client application (used in + desktop client application). + default_settings_variant (Optional[Literal["production", "staging"]]): + Settings variant used by default if a method for settings won't + get any (by default is 'production'). + sender (Optional[str]): Sender of requests. Used in server logs and + propagated into events. + ssl_verify (Union[bool, str, None]): Verify SSL certificate + Looks for env variable value 'AYON_CA_FILE' by default. If not + available then 'True' is used. + cert (Optional[str]): Path to certificate file. Looks for env + variable value 'AYON_CERT_FILE' by default. + create_session (Optional[bool]): Create session for connection if + token is available. Default is True. + timeout (Optional[float]): Timeout for requests. + max_retries (Optional[int]): Number of retries for requests. 
+ """ + _default_timeout = 10.0 + _default_max_retries = 3 + + def __init__( + self, + base_url, + token=None, + site_id=None, + client_version=None, + default_settings_variant=None, + sender=None, + ssl_verify=None, + cert=None, + create_session=True, + timeout=None, + max_retries=None, + ): + if not base_url: + raise ValueError("Invalid server URL {}".format(str(base_url))) + + base_url = base_url.rstrip("/") + self._base_url = base_url + self._rest_url = "{}/api".format(base_url) + self._graphql_url = "{}/graphql".format(base_url) + self._log = None + self._access_token = token + self._site_id = site_id + self._client_version = client_version + self._default_settings_variant = ( + default_settings_variant + or "production" + ) + self._sender = sender + + self._timeout = None + self._max_retries = None + + # Set timeout and max retries based on passed values + self.set_timeout(timeout) + self.set_max_retries(max_retries) + + if ssl_verify is None: + # Custom AYON env variable for CA file or 'True' + # - that should cover most default behaviors in 'requests' + # with 'certifi' + ssl_verify = os.environ.get("AYON_CA_FILE") or True + + if cert is None: + cert = os.environ.get("AYON_CERT_FILE") + + self._ssl_verify = ssl_verify + self._cert = cert + + self._access_token_is_service = None + self._token_is_valid = None + self._token_validation_started = False + self._server_available = None + self._server_version = None + self._server_version_tuple = None + + self._session = None + + self._base_functions_mapping = { + RequestTypes.get: requests.get, + RequestTypes.post: requests.post, + RequestTypes.put: requests.put, + RequestTypes.patch: requests.patch, + RequestTypes.delete: requests.delete + } + self._session_functions_mapping = {} + + # Attributes cache + self._attributes_schema = None + self._entity_type_attributes_cache = {} + + self._as_user_stack = _AsUserStack() + + # Create session + if self._access_token and create_session: + self.validate_server_availability() + self.create_session() + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + def get_base_url(self): + return self._base_url + + def get_rest_url(self): + return self._rest_url + + base_url = property(get_base_url) + rest_url = property(get_rest_url) + + def get_ssl_verify(self): + """Enable ssl verification. + + Returns: + bool: Current state of ssl verification. + """ + + return self._ssl_verify + + def set_ssl_verify(self, ssl_verify): + """Change ssl verification state. + + Args: + ssl_verify (Union[bool, str, None]): Enabled/disable + ssl verification, can be a path to file. + """ + + if self._ssl_verify == ssl_verify: + return + self._ssl_verify = ssl_verify + if self._session is not None: + self._session.verify = ssl_verify + + def get_cert(self): + """Current cert file used for connection to server. + + Returns: + Union[str, None]: Path to cert file. + """ + + return self._cert + + def set_cert(self, cert): + """Change cert file used for connection to server. + + Args: + cert (Union[str, None]): Path to cert file. + """ + + if cert == self._cert: + return + self._cert = cert + if self._session is not None: + self._session.cert = cert + + ssl_verify = property(get_ssl_verify, set_ssl_verify) + cert = property(get_cert, set_cert) + + @classmethod + def get_default_timeout(cls): + """Default value for requests timeout. + + First looks for environment variable SERVER_TIMEOUT_ENV_KEY which + can affect timeout value. 
If not available then use class + attribute '_default_timeout'. + + Returns: + float: Timeout value in seconds. + """ + + try: + return float(os.environ.get(SERVER_TIMEOUT_ENV_KEY)) + except (ValueError, TypeError): + pass + + return cls._default_timeout + + @classmethod + def get_default_max_retries(cls): + """Default value for requests max retries. + + First looks for environment variable SERVER_RETRIES_ENV_KEY, which + can affect max retries value. If not available then use class + attribute '_default_max_retries'. + + Returns: + int: Max retries value. + """ + + try: + return int(os.environ.get(SERVER_RETRIES_ENV_KEY)) + except (ValueError, TypeError): + pass + + return cls._default_max_retries + + def get_timeout(self): + """Current value for requests timeout. + + Returns: + float: Timeout value in seconds. + """ + + return self._timeout + + def set_timeout(self, timeout): + """Change timeout value for requests. + + Args: + timeout (Union[float, None]): Timeout value in seconds. + """ + + if timeout is None: + timeout = self.get_default_timeout() + self._timeout = float(timeout) + + def get_max_retries(self): + """Current value for requests max retries. + + Returns: + int: Max retries value. + """ + + return self._max_retries + + def set_max_retries(self, max_retries): + """Change max retries value for requests. + + Args: + max_retries (Union[int, None]): Max retries value. + """ + + if max_retries is None: + max_retries = self.get_default_max_retries() + self._max_retries = int(max_retries) + + timeout = property(get_timeout, set_timeout) + max_retries = property(get_max_retries, set_max_retries) + + @property + def access_token(self): + """Access token used for authorization to server. + + Returns: + Union[str, None]: Token string or None if not authorized yet. + """ + + return self._access_token + + def get_site_id(self): + """Site id used for connection. + + Site id tells server from which machine/site is connection created and + is used for default site overrides when settings are received. + + Returns: + Union[str, None]: Site id value or None if not filled. + """ + + return self._site_id + + def set_site_id(self, site_id): + """Change site id of connection. + + Behave as specific site for server. It affects default behavior of + settings getter methods. + + Args: + site_id (Union[str, None]): Site id value, or 'None' to unset. + """ + + if self._site_id == site_id: + return + self._site_id = site_id + # Recreate session on machine id change + self._update_session_headers() + + site_id = property(get_site_id, set_site_id) + + def get_client_version(self): + """Version of client used to connect to server. + + Client version is AYON client build desktop application. + + Returns: + str: Client version string used in connection. + """ + + return self._client_version + + def set_client_version(self, client_version): + """Set version of client used to connect to server. + + Client version is AYON client build desktop application. + + Args: + client_version (Union[str, None]): Client version string. + """ + + if self._client_version == client_version: + return + + self._client_version = client_version + self._update_session_headers() + + client_version = property(get_client_version, set_client_version) + + def get_default_settings_variant(self): + """Default variant used for settings. + + Returns: + Union[str, None]: name of variant or None. 
+ """ + + return self._default_settings_variant + + def set_default_settings_variant(self, variant): + """Change default variant for addon settings. + + Note: + It is recommended to set only 'production' or 'staging' variants + as default variant. + + Args: + variant (Literal['production', 'staging']): Settings variant name. + """ + + if variant not in ("production", "staging"): + raise ValueError(( + "Invalid variant name {}. Expected 'production' or 'staging'" + ).format(variant)) + self._default_settings_variant = variant + + default_settings_variant = property( + get_default_settings_variant, + set_default_settings_variant + ) + + def get_sender(self): + """Sender used to send requests. + + Returns: + Union[str, None]: Sender name or None. + """ + + return self._sender + + def set_sender(self, sender): + """Change sender used for requests. + + Args: + sender (Union[str, None]): Sender name or None. + """ + + if sender == self._sender: + return + self._sender = sender + self._update_session_headers() + + sender = property(get_sender, set_sender) + + def get_default_service_username(self): + """Default username used for callbacks when used with service API key. + + Returns: + Union[str, None]: Username if any was filled. + """ + + return self._as_user_stack.get_default_username() + + def set_default_service_username(self, username=None): + """Service API will work as other user. + + Service API keys can work as other user. It can be temporary using + context manager 'as_user' or it is possible to set default username if + 'as_user' context manager is not entered. + + Args: + username (Optional[str]): Username to work as when service. + + Raises: + ValueError: When connection is not yet authenticated or api key + is not service token. + """ + + current_username = self._as_user_stack.get_default_username() + if current_username == username: + return + + if not self.has_valid_token: + raise ValueError( + "Authentication of connection did not happen yet." + ) + + if not self._access_token_is_service: + raise ValueError( + "Can't set service username. API key is not a service token." + ) + + self._as_user_stack.set_default_username(username) + if self._as_user_stack.username == username: + self._update_session_headers() + + @contextmanager + def as_username(self, username): + """Service API will temporarily work as other user. + + This method can be used only if service API key is logged in. + + Args: + username (Union[str, None]): Username to work as when service. + + Raises: + ValueError: When connection is not yet authenticated or api key + is not service token. + """ + + if not self.has_valid_token: + raise ValueError( + "Authentication of connection did not happen yet." + ) + + if not self._access_token_is_service: + raise ValueError( + "Can't set service username. API key is not a service token." 
+ ) + + with self._as_user_stack.as_user(username) as o: + self._update_session_headers() + try: + yield o + finally: + self._update_session_headers() + + @property + def is_server_available(self): + if self._server_available is None: + response = requests.get( + self._base_url, + cert=self._cert, + verify=self._ssl_verify + ) + self._server_available = response.status_code == 200 + return self._server_available + + @property + def has_valid_token(self): + if self._access_token is None: + return False + + if self._token_is_valid is None: + self.validate_token() + return self._token_is_valid + + def validate_server_availability(self): + if not self.is_server_available: + raise ServerNotReached("Server \"{}\" can't be reached".format( + self._base_url + )) + + def validate_token(self): + try: + self._token_validation_started = True + # TODO add other possible validations + # - existence of 'user' key in info + # - validate that 'site_id' is in 'sites' in info + self.get_info() + self.get_user() + self._token_is_valid = True + + except UnauthorizedError: + self._token_is_valid = False + + finally: + self._token_validation_started = False + return self._token_is_valid + + def set_token(self, token): + self.reset_token() + self._access_token = token + self.get_user() + + def reset_token(self): + self._access_token = None + self._token_is_valid = None + self.close_session() + + def create_session(self, ignore_existing=True, force=False): + """Create a connection session. + + Session helps to keep connection with server without + need to reconnect on each call. + + Args: + ignore_existing (bool): If session already exists, + ignore creation. + force (bool): If session already exists, close it and + create new. + """ + + if force and self._session is not None: + self.close_session() + + if self._session is not None: + if ignore_existing: + return + raise ValueError("Session is already created.") + + self._as_user_stack.clear() + # Validate token before session creation + self.validate_token() + + session = requests.Session() + session.cert = self._cert + session.verify = self._ssl_verify + session.headers.update(self.get_headers()) + + self._session_functions_mapping = { + RequestTypes.get: session.get, + RequestTypes.post: session.post, + RequestTypes.put: session.put, + RequestTypes.patch: session.patch, + RequestTypes.delete: session.delete + } + self._session = session + + def close_session(self): + if self._session is None: + return + + session = self._session + self._session = None + self._session_functions_mapping = {} + session.close() + + def _update_session_headers(self): + if self._session is None: + return + + # Header keys that may change over time + for key, value in ( + ("X-as-user", self._as_user_stack.username), + ("x-ayon-version", self._client_version), + ("x-ayon-site-id", self._site_id), + ("x-sender", self._sender), + ): + if value is not None: + self._session.headers[key] = value + elif key in self._session.headers: + self._session.headers.pop(key) + + def get_info(self): + """Get information about current used api key. + + By default, the 'info' contains only 'uptime' and 'version'. With + logged user info also contains information about user and machines on + which was logged in. + + Todos: + Use this method for validation of token instead of 'get_user'. + + Returns: + dict[str, Any]: Information from server. + """ + + response = self.get("info") + return response.data + + def get_server_version(self): + """Get server version. 
+ + Version should match semantic version (https://semver.org/). + + Returns: + str: Server version. + """ + + if self._server_version is None: + self._server_version = self.get_info()["version"] + return self._server_version + + def get_server_version_tuple(self): + """Get server version as tuple. + + Version should match semantic version (https://semver.org/). + + This function only returns first three numbers of version. + + Returns: + Tuple[int, int, int, Union[str, None], Union[str, None]]: Server + version. + """ + + if self._server_version_tuple is None: + re_match = VERSION_REGEX.fullmatch( + self.get_server_version()) + self._server_version_tuple = ( + int(re_match.group("major")), + int(re_match.group("minor")), + int(re_match.group("patch")), + re_match.group("prerelease"), + re_match.group("buildmetadata") + ) + return self._server_version_tuple + + server_version = property(get_server_version) + server_version_tuple = property(get_server_version_tuple) + + def _get_user_info(self): + if self._access_token is None: + return None + + if self._access_token_is_service is not None: + response = self.get("users/me") + return response.data + + self._access_token_is_service = False + response = self.get("users/me") + if response.status == 200: + return response.data + + self._access_token_is_service = True + response = self.get("users/me") + if response.status == 200: + return response.data + + self._access_token_is_service = None + return None + + def get_users(self, usernames=None, fields=None): + """Get Users. + + Args: + usernames (Optional[Iterable[str]]): Filter by usernames. + fields (Optional[Iterable[str]]): fields to be queried + for users. + + Returns: + Generator[dict[str, Any]]: Queried users. + """ + + filters = {} + if usernames is not None: + usernames = set(usernames) + if not usernames: + return + filters["userNames"] = list(usernames) + + if not fields: + fields = self.get_default_fields_for_type("user") + + query = users_graphql_query(set(fields)) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + # Backwards compatibility for server 0.3.x + # - will be removed in future releases + major, minor, _, _, _ = self.server_version_tuple + access_groups_field = "accessGroups" + if major == 0 and minor <= 3: + access_groups_field = "roles" + + for parsed_data in query.continuous_query(self): + for user in parsed_data["users"]: + user[access_groups_field] = json.loads( + user[access_groups_field]) + yield user + + def get_user(self, username=None): + output = None + if username is None: + output = self._get_user_info() + else: + response = self.get("users/{}".format(username)) + if response.status == 200: + output = response.data + + if output is None: + raise UnauthorizedError("User is not authorized.") + return output + + def get_headers(self, content_type=None): + if content_type is None: + content_type = "application/json" + + headers = { + "Content-Type": content_type, + "x-ayon-platform": platform.system().lower(), + "x-ayon-hostname": platform.node(), + } + if self._site_id is not None: + headers["x-ayon-site-id"] = self._site_id + + if self._client_version is not None: + headers["x-ayon-version"] = self._client_version + + if self._sender is not None: + headers["x-sender"] = self._sender + + if self._access_token: + if self._access_token_is_service: + headers["X-Api-Key"] = self._access_token + username = self._as_user_stack.username + if username: + headers["X-as-user"] = username + else: + headers["Authorization"] = 
"Bearer {}".format( + self._access_token) + return headers + + def login(self, username, password, create_session=True): + """Login to server. + + Args: + username (str): Username. + password (str): Password. + create_session (Optional[bool]): Create session after login. + Default: True. + + Raises: + AuthenticationError: Login failed. + """ + + if self.has_valid_token: + try: + user_info = self.get_user() + except UnauthorizedError: + user_info = {} + + current_username = user_info.get("name") + if current_username == username: + self.close_session() + if create_session: + self.create_session() + return + + self.reset_token() + + self.validate_server_availability() + + self._token_validation_started = True + + try: + response = self.post( + "auth/login", + name=username, + password=password + ) + if response.status_code != 200: + _detail = response.data.get("detail") + details = "" + if _detail: + details = " {}".format(_detail) + + raise AuthenticationError("Login failed {}".format(details)) + + finally: + self._token_validation_started = False + + self._access_token = response["token"] + + if not self.has_valid_token: + raise AuthenticationError("Invalid credentials") + + if create_session: + self.create_session() + + def logout(self, soft=False): + if self._access_token: + if not soft: + self._logout() + self.reset_token() + + def _logout(self): + logout_from_server(self._base_url, self._access_token) + + def _do_rest_request(self, function, url, **kwargs): + kwargs.setdefault("timeout", self.timeout) + max_retries = kwargs.get("max_retries", self.max_retries) + if max_retries < 1: + max_retries = 1 + if self._session is None: + # Validate token if was not yet validated + # - ignore validation if we're in middle of + # validation + if ( + self._token_is_valid is None + and not self._token_validation_started + ): + self.validate_token() + + if "headers" not in kwargs: + kwargs["headers"] = self.get_headers() + + if isinstance(function, RequestType): + function = self._base_functions_mapping[function] + + elif isinstance(function, RequestType): + function = self._session_functions_mapping[function] + + response = None + new_response = None + for _ in range(max_retries): + try: + response = function(url, **kwargs) + break + + except ConnectionRefusedError: + # Server may be restarting + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. Connection refused"} + ) + except requests.exceptions.Timeout: + # Connection timed out + new_response = RestApiResponse( + None, + {"detail": "Connection timed out."} + ) + except requests.exceptions.ConnectionError: + # Other connection error (ssl, etc) - does not make sense to + # try call server again + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. 
Connection error"} + ) + break + + time.sleep(0.1) + + if new_response is not None: + return new_response + + content_type = response.headers.get("Content-Type") + if content_type == "application/json": + try: + new_response = RestApiResponse(response) + except JSONDecodeError: + new_response = RestApiResponse( + None, + { + "detail": "The response is not a JSON: {}".format( + response.text) + } + ) + + else: + new_response = RestApiResponse(response) + + self.log.debug("Response {}".format(str(new_response))) + return new_response + + def raw_post(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [POST] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.post, + url, + **kwargs + ) + + def raw_put(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [PUT] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.put, + url, + **kwargs + ) + + def raw_patch(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [PATCH] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.patch, + url, + **kwargs + ) + + def raw_get(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [GET] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.get, + url, + **kwargs + ) + + def raw_delete(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [DELETE] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.delete, + url, + **kwargs + ) + + def post(self, entrypoint, **kwargs): + return self.raw_post(entrypoint, json=kwargs) + + def put(self, entrypoint, **kwargs): + return self.raw_put(entrypoint, json=kwargs) + + def patch(self, entrypoint, **kwargs): + return self.raw_patch(entrypoint, json=kwargs) + + def get(self, entrypoint, **kwargs): + return self.raw_get(entrypoint, params=kwargs) + + def delete(self, entrypoint, **kwargs): + return self.raw_delete(entrypoint, params=kwargs) + + def get_event(self, event_id): + """Query full event data by id. + + Events received using event server do not contain full information. To + get the full event information is required to receive it explicitly. + + Args: + event_id (str): Id of event. + + Returns: + dict[str, Any]: Full event data. + """ + + response = self.get("events/{}".format(event_id)) + response.raise_for_status() + return response.data + + def get_events( + self, + topics=None, + project_names=None, + states=None, + users=None, + include_logs=None, + fields=None + ): + """Get events from server with filtering options. + + Notes: + Not all event happen on a project. + + Args: + topics (Optional[Iterable[str]]): Name of topics. + project_names (Optional[Iterable[str]]): Project on which + event happened. + states (Optional[Iterable[str]]): Filtering by states. + users (Optional[Iterable[str]]): Filtering by users + who created/triggered an event. + include_logs (Optional[bool]): Query also log events. + fields (Optional[Iterable[str]]): Fields that should be received + for each event. 
+ + Returns: + Generator[dict[str, Any]]: Available events matching filters. + """ + + filters = {} + if topics is not None: + topics = set(topics) + if not topics: + return + filters["eventTopics"] = list(topics) + + if project_names is not None: + project_names = set(project_names) + if not project_names: + return + filters["projectNames"] = list(project_names) + + if states is not None: + states = set(states) + if not states: + return + filters["eventStates"] = list(states) + + if users is not None: + users = set(users) + if not users: + return + filters["eventUsers"] = list(users) + + if include_logs is None: + include_logs = False + filters["includeLogsFilter"] = include_logs + + if not fields: + fields = self.get_default_fields_for_type("event") + + query = events_graphql_query(set(fields)) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for event in parsed_data["events"]: + yield event + + def update_event( + self, + event_id, + sender=None, + project_name=None, + status=None, + description=None, + summary=None, + payload=None + ): + kwargs = { + key: value + for key, value in ( + ("sender", sender), + ("project", project_name), + ("status", status), + ("description", description), + ("summary", summary), + ("payload", payload), + ) + if value is not None + } + response = self.patch( + "events/{}".format(event_id), + **kwargs + ) + response.raise_for_status() + + def dispatch_event( + self, + topic, + sender=None, + event_hash=None, + project_name=None, + username=None, + dependencies=None, + description=None, + summary=None, + payload=None, + finished=True, + store=True, + ): + """Dispatch event to server. + + Arg: + topic (str): Event topic used for filtering of listeners. + sender (Optional[str]): Sender of event. + hash (Optional[str]): Event hash. + project_name (Optional[str]): Project name. + username (Optional[str]): Username which triggered event. + dependencies (Optional[list[str]]): List of event id dependencies. + description (Optional[str]): Description of event. + summary (Optional[dict[str, Any]]): Summary of event that can be used + for simple filtering on listeners. + payload (Optional[dict[str, Any]]): Full payload of event data with + all details. + finished (Optional[bool]): Mark event as finished on dispatch. + store (Optional[bool]): Store event in event queue for possible + future processing otherwise is event send only + to active listeners. + """ + + if summary is None: + summary = {} + if payload is None: + payload = {} + event_data = { + "topic": topic, + "sender": sender, + "hash": event_hash, + "project": project_name, + "user": username, + "dependencies": dependencies, + "description": description, + "summary": summary, + "payload": payload, + "finished": finished, + "store": store, + } + if self.post("events", **event_data): + self.log.debug("Dispatched event {}".format(topic)) + return True + self.log.error("Unable to dispatch event {}".format(topic)) + return False + + def enroll_event_job( + self, + source_topic, + target_topic, + sender, + description=None, + sequential=None, + events_filter=None, + ): + """Enroll job based on events. + + Enroll will find first unprocessed event with 'source_topic' and will + create new event with 'target_topic' for it and return the new event + data. + + Use 'sequential' to control that only single target event is created + at same time. 
Creation of new target events is blocked while there is + at least one unfinished event with target topic, when set to 'True'. + This helps when order of events matter and more than one process using + the same target is running at the same time. + - Make sure the new event has updated status to '"finished"' status + when you're done with logic + + Target topic should not clash with other processes/services. + + Created target event have 'dependsOn' key where is id of source topic. + + Use-case: + - Service 1 is creating events with topic 'my.leech' + - Service 2 process 'my.leech' and uses target topic 'my.process' + - this service can run on 1-n machines + - all events must be processed in a sequence by their creation + time and only one event can be processed at a time + - in this case 'sequential' should be set to 'True' so only + one machine is actually processing events, but if one goes + down there are other that can take place + - Service 3 process 'my.leech' and uses target topic 'my.discover' + - this service can run on 1-n machines + - order of events is not important + - 'sequential' should be 'False' + + Args: + source_topic (str): Source topic to enroll. + target_topic (str): Topic of dependent event. + sender (str): Identifier of sender (e.g. service name or username). + description (Optional[str]): Human readable text shown + in target event. + sequential (Optional[bool]): The source topic must be processed + in sequence. + events_filter (Optional[ayon_server.sqlfilter.Filter]): A dict-like + with conditions to filter the source event. + + Returns: + Union[None, dict[str, Any]]: None if there is no event matching + filters. Created event with 'target_topic'. + """ + + kwargs = { + "sourceTopic": source_topic, + "targetTopic": target_topic, + "sender": sender, + } + if sequential is not None: + kwargs["sequential"] = sequential + if description is not None: + kwargs["description"] = description + if events_filter is not None: + kwargs["filter"] = events_filter + response = self.post("enroll", **kwargs) + if response.status_code == 204: + return None + elif response.status_code >= 400: + self.log.error(response.text) + return None + + return response.data + + def _download_file(self, url, filepath, chunk_size, progress): + dst_directory = os.path.dirname(filepath) + if not os.path.exists(dst_directory): + os.makedirs(dst_directory) + + kwargs = {"stream": True} + if self._session is None: + kwargs["headers"] = self.get_headers() + get_func = self._base_functions_mapping[RequestTypes.get] + else: + get_func = self._session_functions_mapping[RequestTypes.get] + + with open(filepath, "wb") as f_stream: + with get_func(url, **kwargs) as response: + response.raise_for_status() + progress.set_content_size(response.headers["Content-length"]) + for chunk in response.iter_content(chunk_size=chunk_size): + f_stream.write(chunk) + progress.add_transferred_chunk(len(chunk)) + + def download_file( + self, endpoint, filepath, chunk_size=None, progress=None + ): + """Download file from AYON server. + + Endpoint can be full url (must start with 'base_url' of api object). + + Progress object can be used to track download. Can be used when + download happens in thread and other thread want to catch changes over + time. + + Args: + endpoint (str): Endpoint or URL to file that should be downloaded. + filepath (str): Path where file will be downloaded. + chunk_size (Optional[int]): Size of chunks that are received + in single loop. 
+ progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + """ + + if not chunk_size: + # 1 MB chunk by default + chunk_size = 1024 * 1024 + + if endpoint.startswith(self._base_url): + url = endpoint + else: + endpoint = endpoint.lstrip("/").rstrip("/") + url = "{}/{}".format(self._rest_url, endpoint) + + # Create dummy object so the function does not have to check + # 'progress' variable everywhere + if progress is None: + progress = TransferProgress() + + progress.set_source_url(url) + progress.set_destination_url(filepath) + progress.set_started() + try: + self._download_file(url, filepath, chunk_size, progress) + + except Exception as exc: + progress.set_failed(str(exc)) + raise + + finally: + progress.set_transfer_done() + return progress + + def _upload_file(self, url, filepath, progress, request_type=None): + if request_type is None: + request_type = RequestTypes.put + kwargs = {} + if self._session is None: + kwargs["headers"] = self.get_headers() + post_func = self._base_functions_mapping[request_type] + else: + post_func = self._session_functions_mapping[request_type] + + with open(filepath, "rb") as stream: + stream.seek(0, io.SEEK_END) + size = stream.tell() + stream.seek(0) + progress.set_content_size(size) + response = post_func(url, data=stream, **kwargs) + response.raise_for_status() + progress.set_transferred_size(size) + return response + + def upload_file( + self, endpoint, filepath, progress=None, request_type=None + ): + """Upload file to server. + + Todos: + Uploading with more detailed progress. + + Args: + endpoint (str): Endpoint or url where file will be uploaded. + filepath (str): Source filepath. + progress (Optional[TransferProgress]): Object that gives ability + to track upload progress. + request_type (Optional[RequestType]): Type of request that will + be used to upload file. + + Returns: + requests.Response: Response object. + """ + + if endpoint.startswith(self._base_url): + url = endpoint + else: + endpoint = endpoint.lstrip("/").rstrip("/") + url = "{}/{}".format(self._rest_url, endpoint) + + # Create dummy object so the function does not have to check + # 'progress' variable everywhere + if progress is None: + progress = TransferProgress() + + progress.set_source_url(filepath) + progress.set_destination_url(url) + progress.set_started() + + try: + return self._upload_file(url, filepath, progress, request_type) + + except Exception as exc: + progress.set_failed(str(exc)) + raise + + finally: + progress.set_transfer_done() + + def trigger_server_restart(self): + """Trigger server restart. + + Restart may be required when a change of specific value happened on + server. + """ + + result = self.post("system/restart") + if result.status_code != 204: + # TODO add better exception + raise ValueError("Failed to restart server") + + def query_graphql(self, query, variables=None): + """Execute GraphQl query. + + Args: + query (str): GraphQl query string. + variables (Optional[dict[str, Any]): Variables that can be + used in query. + + Returns: + GraphQlResponse: Response from server. + """ + + data = {"query": query, "variables": variables or {}} + response = self._do_rest_request( + RequestTypes.post, + self._graphql_url, + json=data + ) + response.raise_for_status() + return GraphQlResponse(response) + + def get_graphql_schema(self): + return self.query_graphql(INTROSPECTION_QUERY).data + + def get_server_schema(self): + """Get server schema with info, url paths, components etc. 
+
+ Todos:
+ Cache schema - How to find out it is outdated?
+
+ Returns:
+ dict[str, Any]: Full server schema.
+ """
+
+ url = "{}/openapi.json".format(self._base_url)
+ response = self._do_rest_request(RequestTypes.get, url)
+ if response:
+ return response.data
+ return None
+
+ def get_schemas(self):
+ """Get components schema.
+
+ Names of components do not match entity type names, e.g. 'project' is
+ under 'ProjectModel'; a mapping should be found. Also, there are
+ properties which don't carry a reference to their object schema,
+ e.g. 'config' has just an object definition without a reference
+ schema.
+
+ Returns:
+ dict[str, Any]: Component schemas.
+ """
+
+ server_schema = self.get_server_schema()
+ return server_schema["components"]["schemas"]
+
+ def get_attributes_schema(self, use_cache=True):
+ if not use_cache:
+ self.reset_attributes_schema()
+
+ if self._attributes_schema is None:
+ result = self.get("attributes")
+ if result.status_code != 200:
+ raise UnauthorizedError(
+ "User must be authorized to receive attributes"
+ )
+ self._attributes_schema = result.data
+ return copy.deepcopy(self._attributes_schema)
+
+ def reset_attributes_schema(self):
+ self._attributes_schema = None
+ self._entity_type_attributes_cache = {}
+
+ def set_attribute_config(
+ self, attribute_name, data, scope, position=None, builtin=False
+ ):
+ if position is None:
+ attributes = self.get("attributes").data["attributes"]
+ origin_attr = next(
+ (
+ attr for attr in attributes
+ if attr["name"] == attribute_name
+ ),
+ None
+ )
+ if origin_attr:
+ position = origin_attr["position"]
+ else:
+ position = len(attributes)
+
+ response = self.put(
+ "attributes/{}".format(attribute_name),
+ data=data,
+ scope=scope,
+ position=position,
+ builtin=builtin
+ )
+ if response.status_code != 204:
+ # TODO raise different exception
+ raise ValueError(
+ "Attribute \"{}\" was not created/updated. {}".format(
+ attribute_name, response.detail
+ )
+ )
+
+ self.reset_attributes_schema()
+
+ def remove_attribute_config(self, attribute_name):
+ """Remove attribute from server.
+
+ This cannot be undone, use with caution.
+
+ Args:
+ attribute_name (str): Name of attribute to remove.
+ """
+
+ response = self.delete("attributes/{}".format(attribute_name))
+ response.raise_for_status(
+ "Attribute \"{}\" was not removed. {}".format(
+ attribute_name, response.detail
+ )
+ )
+
+ self.reset_attributes_schema()
+
+ def get_attributes_for_type(self, entity_type):
+ """Get attribute schemas available for an entity type.
+
+ ```
+ # Example attribute schema
+ {
+ # Common
+ "type": "integer",
+ "title": "Clip Out",
+ "description": null,
+ "example": 1,
+ "default": 1,
+ # These can be filled based on value of 'type'
+ "gt": null,
+ "ge": null,
+ "lt": null,
+ "le": null,
+ "minLength": null,
+ "maxLength": null,
+ "minItems": null,
+ "maxItems": null,
+ "regex": null,
+ "enum": null
+ }
+ ```
+
+ Args:
+ entity_type (str): Entity type for which attributes should be
+ received.
+
+ Returns:
+ dict[str, dict[str, Any]]: Attribute schemas that are available
+ for the entered entity type.
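+
+ Example:
+ A minimal usage sketch (the entity type and the "clipOut"
+ attribute name are illustrative only):
+
+ >>> attrs = api.get_attributes_for_type("version")
+ >>> clip_out = attrs.get("clipOut")
+ >>> clip_out["title"] if clip_out else None
+ 'Clip Out'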
+ """ + attributes = self._entity_type_attributes_cache.get(entity_type) + if attributes is None: + attributes_schema = self.get_attributes_schema() + attributes = {} + for attr in attributes_schema["attributes"]: + if entity_type not in attr["scope"]: + continue + attr_name = attr["name"] + attributes[attr_name] = attr["data"] + + self._entity_type_attributes_cache[entity_type] = attributes + + return copy.deepcopy(attributes) + + def get_attributes_fields_for_type(self, entity_type): + """Prepare attribute fields for entity type. + + Returns: + set[str]: Attributes fields for entity type. + """ + + attributes = self.get_attributes_for_type(entity_type) + return { + "attrib.{}".format(attr) + for attr in attributes + } + + def get_default_fields_for_type(self, entity_type): + """Default fields for entity type. + + Returns most of commonly used fields from server. + + Args: + entity_type (str): Name of entity type. + + Returns: + set[str]: Fields that should be queried from server. + """ + + # Event does not have attributes + if entity_type == "event": + return set(DEFAULT_EVENT_FIELDS) + + if entity_type == "project": + entity_type_defaults = DEFAULT_PROJECT_FIELDS + + elif entity_type == "folder": + entity_type_defaults = DEFAULT_FOLDER_FIELDS + + elif entity_type == "task": + entity_type_defaults = DEFAULT_TASK_FIELDS + + elif entity_type == "product": + entity_type_defaults = DEFAULT_PRODUCT_FIELDS + + elif entity_type == "version": + entity_type_defaults = DEFAULT_VERSION_FIELDS + + elif entity_type == "representation": + entity_type_defaults = ( + DEFAULT_REPRESENTATION_FIELDS + | REPRESENTATION_FILES_FIELDS + ) + + elif entity_type == "productType": + entity_type_defaults = DEFAULT_PRODUCT_TYPE_FIELDS + + elif entity_type == "workfile": + entity_type_defaults = DEFAULT_WORKFILE_INFO_FIELDS + + elif entity_type == "user": + entity_type_defaults = set(DEFAULT_USER_FIELDS) + # Backwards compatibility for server 0.3.x + # - will be removed in future releases + major, minor, _, _, _ = self.server_version_tuple + if major == 0 and minor <= 3: + entity_type_defaults.discard("accessGroups") + entity_type_defaults.discard("defaultAccessGroups") + entity_type_defaults.add("roles") + entity_type_defaults.add("defaultRoles") + + else: + raise ValueError("Unknown entity type \"{}\"".format(entity_type)) + return ( + entity_type_defaults + | self.get_attributes_fields_for_type(entity_type) + ) + + def get_addons_info(self, details=True): + """Get information about addons available on server. + + Args: + details (Optional[bool]): Detailed data with information how + to get client code. + """ + + endpoint = "addons" + if details: + endpoint += "?details=1" + response = self.get(endpoint) + response.raise_for_status() + return response.data + + def get_addon_url(self, addon_name, addon_version, *subpaths): + """Calculate url to addon route. + + Example: + >>> api = ServerAPI("https://your.url.com") + >>> api.get_addon_url( + ... "example", "1.0.0", "private", "my.zip") + 'https://your.url.com/addons/example/1.0.0/private/my.zip' + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + *subpaths (str): Any amount of subpaths that are added to + addon url. + + Returns: + str: Final url. 
+ """ + + ending = "" + if subpaths: + ending = "/{}".format("/".join(subpaths)) + return "{}/addons/{}/{}{}".format( + self._base_url, + addon_name, + addon_version, + ending + ) + + def download_addon_private_file( + self, + addon_name, + addon_version, + filename, + destination_dir, + destination_filename=None, + chunk_size=None, + progress=None, + ): + """Download a file from addon private files. + + This method requires to have authorized token available. Private files + are not under '/api' restpoint. + + Args: + addon_name (str): Addon name. + addon_version (str): Addon version. + filename (str): Filename in private folder on server. + destination_dir (str): Where the file should be downloaded. + destination_filename (Optional[str]): Name of destination + filename. Source filename is used if not passed. + chunk_size (Optional[int]): Download chunk size. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + + Returns: + str: Filepath to downloaded file. + """ + + if not destination_filename: + destination_filename = filename + dst_filepath = os.path.join(destination_dir, destination_filename) + # Filename can contain "subfolders" + dst_dirpath = os.path.dirname(dst_filepath) + if not os.path.exists(dst_dirpath): + os.makedirs(dst_dirpath) + + url = self.get_addon_url( + addon_name, + addon_version, + "private", + filename + ) + self.download_file( + url, dst_filepath, chunk_size=chunk_size, progress=progress + ) + return dst_filepath + + def get_installers(self, version=None, platform_name=None): + """Information about desktop application installers on server. + + Desktop application installers are helpers to download/update AYON + desktop application for artists. + + Args: + version (Optional[str]): Filter installers by version. + platform_name (Optional[str]): Filter installers by platform name. + + Returns: + list[dict[str, Any]]: + """ + + query_fields = [ + "{}={}".format(key, value) + for key, value in ( + ("version", version), + ("platform", platform_name), + ) + if value + ] + query = "" + if query_fields: + query = "?{}".format(",".join(query_fields)) + + response = self.get("desktop/installers{}".format(query)) + response.raise_for_status() + return response.data + + def create_installer( + self, + filename, + version, + python_version, + platform_name, + python_modules, + runtime_python_modules, + checksum, + checksum_algorithm, + file_size, + sources=None, + ): + """Create new installer information on server. + + This step will create only metadata. Make sure to upload installer + to the server using 'upload_installer' method. + + Runtime python modules are modules that are required to run AYON + desktop application, but are not added to PYTHONPATH for any + subprocess. + + Args: + filename (str): Installer filename. + version (str): Version of installer. + python_version (str): Version of Python. + platform_name (str): Name of platform. + python_modules (dict[str, str]): Python modules that are available + in installer. + runtime_python_modules (dict[str, str]): Runtime python modules + that are available in installer. + checksum (str): Installer file checksum. + checksum_algorithm (str): Type of checksum used to create checksum. + file_size (int): File size. + sources (Optional[list[dict[str, Any]]]): List of sources that + can be used to download file. 
+ """ + + body = { + "filename": filename, + "version": version, + "pythonVersion": python_version, + "platform": platform_name, + "pythonModules": python_modules, + "runtimePythonModules": runtime_python_modules, + "checksum": checksum, + "checksumAlgorithm": checksum_algorithm, + "size": file_size, + } + if sources: + body["sources"] = sources + + response = self.post("desktop/installers", **body) + response.raise_for_status() + + def update_installer(self, filename, sources): + """Update installer information on server. + + Args: + filename (str): Installer filename. + sources (list[dict[str, Any]]): List of sources that + can be used to download file. Fully replaces existing sources. + """ + + response = self.patch( + "desktop/installers/{}".format(filename), + sources=sources + ) + response.raise_for_status() + + def delete_installer(self, filename): + """Delete installer from server. + + Args: + filename (str): Installer filename. + """ + + response = self.delete("desktop/installers/{}".format(filename)) + response.raise_for_status() + + def download_installer( + self, + filename, + dst_filepath, + chunk_size=None, + progress=None + ): + """Download installer file from server. + + Args: + filename (str): Installer filename. + dst_filepath (str): Destination filepath. + chunk_size (Optional[int]): Download chunk size. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + """ + + self.download_file( + "desktop/installers/{}".format(filename), + dst_filepath, + chunk_size=chunk_size, + progress=progress + ) + + def upload_installer(self, src_filepath, dst_filename, progress=None): + """Upload installer file to server. + + Args: + src_filepath (str): Source filepath. + dst_filename (str): Destination filename. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + + Returns: + requests.Response: Response object. + """ + + return self.upload_file( + "desktop/installers/{}".format(dst_filename), + src_filepath, + progress=progress + ) + + def get_dependencies_info(self): + """Information about dependency packages on server. + + Example data structure: + { + "packages": [ + { + "name": str, + "platform": str, + "checksum": str, + "sources": list[dict[str, Any]], + "supportedAddons": dict[str, str], + "pythonModules": dict[str, str] + } + ], + "productionPackage": str + } + + Deprecated: + Deprecated since server version 0.2.1. Use + 'get_dependency_packages' instead. + + Returns: + dict[str, Any]: Information about dependency packages known for + server. + """ + + major, minor, patch, _, _ = self.server_version_tuple + if major == 0 and (minor < 2 or (minor == 2 and patch < 1)): + result = self.get("dependencies") + return result.data + packages = self.get_dependency_packages() + packages["productionPackage"] = None + return packages + + def update_dependency_info( + self, + name, + platform_name, + size, + checksum, + checksum_algorithm=None, + supported_addons=None, + python_modules=None, + sources=None + ): + """Update or create dependency package for identifiers. + + The endpoint can be used to create or update dependency package. + + + Deprecated: + Deprecated for server version 0.2.1. Use + 'create_dependency_pacakge' instead. + + Args: + name (str): Name of dependency package. + platform_name (Literal["windows", "linux", "darwin"]): Platform + for which is dependency package targeted. + size (int): Size of dependency package in bytes. 
+ checksum (str): Checksum of archive file where dependencies are.
+ checksum_algorithm (Optional[str]): Algorithm used to calculate
+ checksum. Defaults to 'md5' (defined by server).
+ supported_addons (Optional[dict[str, str]]): Name of addons for
+ which the package was created.
+ '{"<addon name>": "<addon version>", ...}'
+ python_modules (Optional[dict[str, str]]): Python modules in
+ dependencies package.
+ '{"<module name>": "<module version>", ...}'
+ sources (Optional[list[dict[str, Any]]]): Information about
+ sources where dependency package is available.
+ """
+
+ kwargs = {
+ key: value
+ for key, value in (
+ ("checksumAlgorithm", checksum_algorithm),
+ ("supportedAddons", supported_addons),
+ ("pythonModules", python_modules),
+ ("sources", sources),
+ )
+ if value
+ }
+
+ response = self.put(
+ "dependencies",
+ name=name,
+ platform=platform_name,
+ size=size,
+ checksum=checksum,
+ **kwargs
+ )
+ response.raise_for_status("Failed to create/update dependency")
+ return response.data
+
+ def get_dependency_packages(self):
+ """Information about dependency packages on server.
+
+ To download a dependency package, use 'download_dependency_package'
+ method and pass in 'filename'.
+
+ Example data structure:
+ {
+ "packages": [
+ {
+ "filename": str,
+ "platform": str,
+ "checksum": str,
+ "checksumAlgorithm": str,
+ "size": int,
+ "sources": list[dict[str, Any]],
+ "supportedAddons": dict[str, str],
+ "pythonModules": dict[str, str]
+ }
+ ]
+ }
+
+ Returns:
+ dict[str, Any]: Information about dependency packages known to
+ the server.
+ """
+
+ endpoint = "desktop/dependencyPackages"
+ major, minor, _, _, _ = self.server_version_tuple
+ if major == 0 and minor <= 3:
+ endpoint = "desktop/dependency_packages"
+
+ result = self.get(endpoint)
+ result.raise_for_status()
+ return result.data
+
+ def _get_dependency_package_route(
+ self, filename=None, platform_name=None
+ ):
+ major, minor, patch, _, _ = self.server_version_tuple
+ if major == 0 and (minor > 2 or (minor == 2 and patch >= 1)):
+ base = "desktop/dependency_packages"
+ if not filename:
+ return base
+ return "{}/{}".format(base, filename)
+
+ # Backwards compatibility for AYON server 0.2.0 and lower
+ if platform_name is None:
+ platform_name = platform.system().lower()
+ base = "dependencies"
+ if not filename:
+ return base
+ return "{}/{}/{}".format(base, filename, platform_name)
+
+ def create_dependency_package(
+ self,
+ filename,
+ python_modules,
+ source_addons,
+ installer_version,
+ checksum,
+ checksum_algorithm,
+ file_size,
+ sources=None,
+ platform_name=None,
+ ):
+ """Create dependency package on server.
+
+ This creates only the package metadata on the server; the package
+ archive file must also be uploaded (using
+ 'upload_dependency_package').
+
+ Args:
+ filename (str): Filename of dependency package.
+ python_modules (dict[str, str]): Python modules in dependency
+ package.
+ '{"<module name>": "<module version>", ...}'
+ source_addons (dict[str, str]): Names of addons for which the
+ dependency package is created.
+ '{"<addon name>": "<addon version>", ...}'
+ installer_version (str): Version of installer for which the
+ package was created.
+ checksum (str): Checksum of archive file where dependencies are.
+ checksum_algorithm (str): Algorithm used to calculate checksum.
+ file_size (Optional[int]): Size of file.
+ sources (Optional[list[dict[str, Any]]]): Information about
+ sources from where it is possible to get file.
+ platform_name (Optional[str]): Name of platform the dependency
+ package targets. Default value is
+ current platform.
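+
+ Example:
+ Illustrative only; the package name, module versions and
+ checksum are placeholders:
+
+ >>> api.create_dependency_package(
+ ... filename="ayon_2310010101_windows.zip",
+ ... python_modules={"requests": "2.31.0"},
+ ... source_addons={"core": "1.2.3"},
+ ... installer_version="1.0.0",
+ ... checksum="d41d8cd98f00b204e9800998ecf8427e",
+ ... checksum_algorithm="md5",
+ ... file_size=123456,
+ ... platform_name="windows",
+ ... )
+ >>> # Upload the archive file itself afterwards
+ >>> api.upload_dependency_package(
+ ... "/path/to/package.zip", "ayon_2310010101_windows.zip"
+ ... )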
+ """ + + post_body = { + "filename": filename, + "pythonModules": python_modules, + "sourceAddons": source_addons, + "installerVersion": installer_version, + "checksum": checksum, + "checksumAlgorithm": checksum_algorithm, + "size": file_size, + "platform": platform_name or platform.system().lower(), + } + if sources: + post_body["sources"] = sources + + route = self._get_dependency_package_route() + response = self.post(route, **post_body) + response.raise_for_status() + + def update_dependency_package(self, filename, sources): + """Update dependency package metadata on server. + + Args: + filename (str): Filename of dependency package. + sources (list[dict[str, Any]]): Information about + sources from where it is possible to get file. Fully replaces + existing sources. + """ + + response = self.patch( + self._get_dependency_package_route(filename), + sources=sources + ) + response.raise_for_status() + + def delete_dependency_package(self, filename, platform_name=None): + """Remove dependency package for specific platform. + + Args: + filename (str): Filename of dependency package. Or name of package + for server version 0.2.0 or lower. + platform_name (Optional[str]): Which platform of the package + should be removed. Current platform is used if not passed. + Deprecated since version 0.2.1 + """ + + route = self._get_dependency_package_route(filename, platform_name) + response = self.delete(route) + response.raise_for_status("Failed to delete dependency file") + return response.data + + def download_dependency_package( + self, + src_filename, + dst_directory, + dst_filename, + platform_name=None, + chunk_size=None, + progress=None, + ): + """Download dependency package from server. + + This method requires to have authorized token available. The package + is only downloaded. + + Args: + src_filename (str): Filename of dependency pacakge. + For server version 0.2.0 and lower it is name of package + to download. + dst_directory (str): Where the file should be downloaded. + dst_filename (str): Name of destination filename. + platform_name (Optional[str]): Name of platform for which the + dependency package is targeted. Default value is + current platform. Deprecated since server version 0.2.1. + chunk_size (Optional[int]): Download chunk size. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + + Returns: + str: Filepath to downloaded file. + """ + + route = self._get_dependency_package_route(src_filename, platform_name) + package_filepath = os.path.join(dst_directory, dst_filename) + self.download_file( + route, + package_filepath, + chunk_size=chunk_size, + progress=progress + ) + return package_filepath + + def upload_dependency_package( + self, src_filepath, dst_filename, platform_name=None, progress=None + ): + """Upload dependency package to server. + + Args: + src_filepath (str): Path to a package file. + dst_filename (str): Dependency package filename or name of package + for server version 0.2.0 or lower. Must be unique. + platform_name (Optional[str]): For which platform is the + package targeted. Deprecated since server version 0.2.1. + progress (Optional[TransferProgress]): Object to keep track about + upload state. + """ + + route = self._get_dependency_package_route(dst_filename, platform_name) + self.upload_file(route, src_filepath, progress=progress) + + def create_dependency_package_basename(self, platform_name=None): + """Create basename for dependency package file. 
+
+ Deprecated:
+ Use 'create_dependency_package_basename' from `ayon_api` or
+ `ayon_api.utils` instead.
+
+ Args:
+ platform_name (Optional[str]): Name of platform for which the
+ bundle is targeted. Default value is current platform.
+
+ Returns:
+ str: Dependency package name with timestamp and platform.
+ """
+
+ return create_dependency_package_basename(platform_name)
+
+ def upload_addon_zip(self, src_filepath, progress=None):
+ """Upload addon zip file to server.
+
+ The file is validated on the server. If it is valid, it is installed.
+ This will create an event job which can be tracked (the tracking part
+ is not implemented yet).
+
+ Example output:
+ {'eventId': 'a1bfbdee27c611eea7580242ac120003'}
+
+ Args:
+ src_filepath (str): Path to a zip file.
+ progress (Optional[TransferProgress]): Object to keep track of
+ upload state.
+
+ Returns:
+ dict[str, Any]: Response data from server.
+ """
+
+ response = self.upload_file(
+ "addons/install",
+ src_filepath,
+ progress=progress,
+ request_type=RequestTypes.post,
+ )
+ return response.json()
+
+ def _get_bundles_route(self):
+ major, minor, patch, _, _ = self.server_version_tuple
+ # Backwards compatibility for AYON server 0.3.0
+ # - first version where bundles were available
+ if major == 0 and minor == 3 and patch == 0:
+ return "desktop/bundles"
+ return "bundles"
+
+ def get_bundles(self):
+ """Server bundles with basic information.
+
+ Example output:
+ {
+ "bundles": [
+ {
+ "name": "my_bundle",
+ "createdAt": "2023-06-12T15:37:02.420260",
+ "installerVersion": "1.0.0",
+ "addons": {
+ "core": "1.2.3"
+ },
+ "dependencyPackages": {
+ "windows": "a_windows_package123.zip",
+ "linux": "a_linux_package123.zip",
+ "darwin": "a_mac_package123.zip"
+ },
+ "isProduction": False,
+ "isStaging": False
+ }
+ ],
+ "productionBundle": "my_bundle",
+ "stagingBundle": "test_bundle"
+ }
+
+ Returns:
+ dict[str, Any]: Server bundles with basic information.
+ """
+
+ response = self.get(self._get_bundles_route())
+ response.raise_for_status()
+ return response.data
+
+ def create_bundle(
+ self,
+ name,
+ addon_versions,
+ installer_version,
+ dependency_packages=None,
+ is_production=None,
+ is_staging=None
+ ):
+ """Create bundle on server.
+
+ A bundle cannot be changed once it is created. Only isProduction,
+ isStaging and dependency packages can change after creation.
+
+ Args:
+ name (str): Name of bundle.
+ addon_versions (dict[str, str]): Addon versions.
+ installer_version (Union[str, None]): Installer version.
+ dependency_packages (Optional[dict[str, str]]): Dependency
+ package names. Keys are platform names and values are names
+ of packages.
+ is_production (Optional[bool]): Bundle will be marked as
+ production.
+ is_staging (Optional[bool]): Bundle will be marked as staging.
+ """
+
+ body = {
+ "name": name,
+ "installerVersion": installer_version,
+ "addons": addon_versions,
+ }
+ for key, value in (
+ ("dependencyPackages", dependency_packages),
+ ("isProduction", is_production),
+ ("isStaging", is_staging),
+ ):
+ if value is not None:
+ body[key] = value
+
+ response = self.post(self._get_bundles_route(), **body)
+ response.raise_for_status()
+
+ def update_bundle(
+ self,
+ bundle_name,
+ dependency_packages=None,
+ is_production=None,
+ is_staging=None
+ ):
+ """Update bundle on server.
+
+ Dependency packages can be updated only for a single platform. Others
+ will be left untouched. Use 'None' value to unset a dependency package
+ from the bundle.
+
+ Args:
+ bundle_name (str): Name of bundle.
+ dependency_packages (Optional[dict[str, str]]): Dependency package
+ names that should be used with the bundle.
+ is_production (Optional[bool]): Bundle will be marked as
+ production.
+ is_staging (Optional[bool]): Bundle will be marked as staging.
+ """
+
+ body = {
+ key: value
+ for key, value in (
+ ("dependencyPackages", dependency_packages),
+ ("isProduction", is_production),
+ ("isStaging", is_staging),
+ )
+ if value is not None
+ }
+ response = self.patch(
+ "{}/{}".format(self._get_bundles_route(), bundle_name),
+ **body
+ )
+ response.raise_for_status()
+
+ def delete_bundle(self, bundle_name):
+ """Delete bundle from server.
+
+ Args:
+ bundle_name (str): Name of bundle to delete.
+ """
+
+ response = self.delete(
+ "{}/{}".format(self._get_bundles_route(), bundle_name)
+ )
+ response.raise_for_status()
+
+ # Anatomy presets
+ def get_project_anatomy_presets(self):
+ """Anatomy presets available on server.
+
+ Content has basic information about presets. Example output:
+ [
+ {
+ "name": "netflix_VFX",
+ "primary": false,
+ "version": "1.0.0"
+ },
+ {
+ ...
+ },
+ ...
+ ]
+
+ Returns:
+ list[dict[str, str]]: Anatomy presets available on server.
+ """
+
+ result = self.get("anatomy/presets")
+ result.raise_for_status()
+ return result.data.get("presets") or []
+
+ def get_project_anatomy_preset(self, preset_name=None):
+ """Anatomy preset values by name.
+
+ Get anatomy preset values by preset name. The primary preset is
+ returned if preset name is set to 'None'.
+
+ Args:
+ preset_name (Optional[str]): Preset name.
+
+ Returns:
+ dict[str, Any]: Anatomy preset values.
+ """
+
+ if preset_name is None:
+ preset_name = "_"
+ result = self.get("anatomy/presets/{}".format(preset_name))
+ result.raise_for_status()
+ return result.data
+
+ def get_project_roots_by_site(self, project_name):
+ """Root overrides per site id.
+
+ The result is based on the logged-in user and cannot be received for
+ any other user on the server.
+
+ The output contains only roots for site ids used by the logged-in
+ user.
+
+ Args:
+ project_name (str): Name of project.
+
+ Returns:
+ dict[str, dict[str, str]]: Root values by root name by site id.
+ """
+
+ result = self.get("projects/{}/roots".format(project_name))
+ result.raise_for_status()
+ return result.data
+
+ def get_project_roots_for_site(self, project_name, site_id=None):
+ """Root overrides for site.
+
+ If site id is not passed, the site set in the current api object is
+ used instead.
+
+ Args:
+ project_name (str): Name of project.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+
+ Returns:
+ dict[str, str]: Root values by root name. Empty dict if the
+ site does not have overrides.
+ """
+
+ if site_id is None:
+ site_id = self.site_id
+
+ if site_id is None:
+ return {}
+ roots = self.get_project_roots_by_site(project_name)
+ return roots.get(site_id, {})
+
+ def get_addon_settings_schema(
+ self, addon_name, addon_version, project_name=None
+ ):
+ """Studio/Project settings schema of an addon.
+
+ The project schema may look different, as some enums are based on
+ project values.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ project_name (Optional[str]): Schema for specific project or
+ default studio schemas.
+
+ Returns:
+ dict[str, Any]: Schema of studio/project settings.
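+
+ Example:
+ A short sketch; the addon name, version and project name are
+ placeholders:
+
+ >>> schema = api.get_addon_settings_schema("core", "1.2.3")
+ >>> # Project schema may contain project based enums
+ >>> project_schema = api.get_addon_settings_schema(
+ ... "core", "1.2.3", project_name="my_project"
+ ... )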
+ """ + + args = tuple() + if project_name: + args = (project_name, ) + + endpoint = self.get_addon_url( + addon_name, addon_version, "schema", *args + ) + result = self.get(endpoint) + result.raise_for_status() + return result.data + + def get_addon_site_settings_schema(self, addon_name, addon_version): + """Site settings schema of an addon. + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + + Returns: + dict[str, Any]: Schema of site settings. + """ + + result = self.get("addons/{}/{}/siteSettings/schema".format( + addon_name, addon_version + )) + result.raise_for_status() + return result.data + + def get_addon_studio_settings( + self, + addon_name, + addon_version, + variant=None + ): + """Addon studio settings. + + Receive studio settings for specific version of an addon. + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + variant (Optional[Literal['production', 'staging']]): Name of + settings variant. Used 'default_settings_variant' by default. + + Returns: + dict[str, Any]: Addon settings. + """ + + if variant is None: + variant = self.default_settings_variant + + query_items = {} + if variant: + query_items["variant"] = variant + query = prepare_query_string(query_items) + + result = self.get( + "addons/{}/{}/settings{}".format(addon_name, addon_version, query) + ) + result.raise_for_status() + return result.data + + def get_addon_project_settings( + self, + addon_name, + addon_version, + project_name, + variant=None, + site_id=None, + use_site=True + ): + """Addon project settings. + + Receive project settings for specific version of an addon. The settings + may be with site overrides when enabled. + + Site id is filled with current connection site id if not passed. To + make sure any site id is used set 'use_site' to 'False'. + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + project_name (str): Name of project for which the settings are + received. + variant (Optional[Literal['production', 'staging']]): Name of + settings variant. Used 'default_settings_variant' by default. + site_id (Optional[str]): Name of site which is used for site + overrides. Is filled with connection 'site_id' attribute + if not passed. + use_site (Optional[bool]): To force disable option of using site + overrides set to 'False'. In that case won't be applied + any site overrides. + + Returns: + dict[str, Any]: Addon settings. + """ + + if not use_site: + site_id = None + elif not site_id: + site_id = self.site_id + + query_items = {} + if site_id: + query_items["site"] = site_id + + if variant is None: + variant = self.default_settings_variant + + if variant: + query_items["variant"] = variant + + query = prepare_query_string(query_items) + result = self.get( + "addons/{}/{}/settings/{}{}".format( + addon_name, addon_version, project_name, query + ) + ) + result.raise_for_status() + return result.data + + def get_addon_settings( + self, + addon_name, + addon_version, + project_name=None, + variant=None, + site_id=None, + use_site=True + ): + """Receive addon settings. + + Receive addon settings based on project name value. Some arguments may + be ignored if 'project_name' is set to 'None'. + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + project_name (Optional[str]): Name of project for which the + settings are received. A studio settings values are received + if is 'None'. + variant (Optional[Literal['production', 'staging']]): Name of + settings variant. 
+ Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Name of site which is used for site
+ overrides. Is filled with connection 'site_id' attribute
+ if not passed.
+ use_site (Optional[bool]): Set to 'False' to disable site
+ overrides. In that case no site overrides are applied.
+
+ Returns:
+ dict[str, Any]: Addon settings.
+ """
+
+ if project_name is None:
+ return self.get_addon_studio_settings(
+ addon_name, addon_version, variant
+ )
+ return self.get_addon_project_settings(
+ addon_name, addon_version, project_name, variant, site_id, use_site
+ )
+
+ def get_addon_site_settings(
+ self, addon_name, addon_version, site_id=None
+ ):
+ """Site settings of an addon.
+
+ If site id is not available an empty dictionary is returned.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ site_id (Optional[str]): Name of site for which settings should
+ be returned. The connection's 'site_id' attribute is used
+ if not passed.
+
+ Returns:
+ dict[str, Any]: Site settings.
+ """
+
+ if site_id is None:
+ site_id = self.site_id
+
+ if not site_id:
+ return {}
+
+ query = prepare_query_string({"site": site_id})
+ result = self.get("addons/{}/{}/siteSettings{}".format(
+ addon_name, addon_version, query
+ ))
+ result.raise_for_status()
+ return result.data
+
+ def get_bundle_settings(
+ self,
+ bundle_name=None,
+ project_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True
+ ):
+ """Get complete set of settings for given data.
+
+ If project is not passed then studio settings are returned. If variant
+ is not passed 'default_settings_variant' is used. If bundle name is
+ not passed then the current production/staging bundle is used, based
+ on variant value.
+
+ The output contains addon settings and site settings in a single
+ dictionary.
+
+ TODOs:
+ - test how it behaves if there is no bundle.
+ - test how it behaves if there is no production/staging
+ bundle.
+
+ Warnings:
+ For AYON server < 0.3.0 bundle name will be ignored.
+
+ Example output:
+ {
+ "addons": [
+ {
+ "name": "addon-name",
+ "version": "addon-version",
+ "settings": {...},
+ "siteSettings": {...}
+ }
+ ]
+ }
+
+ Returns:
+ dict[str, Any]: All settings for a single bundle.
+ """
+
+ major, minor, _, _, _ = self.server_version_tuple
+ query_values = {
+ key: value
+ for key, value in (
+ ("project_name", project_name),
+ ("variant", variant or self.default_settings_variant),
+ ("bundle_name", bundle_name),
+ )
+ if value
+ }
+ if use_site:
+ if not site_id:
+ site_id = self.site_id
+ if site_id:
+ query_values["site_id"] = site_id
+
+ if major == 0 and minor >= 3:
+ url = "settings"
+ else:
+ # Backward compatibility for AYON server < 0.3.0
+ url = "settings/addons"
+ query_values.pop("bundle_name", None)
+ for new_key, old_key in (
+ ("project_name", "project"),
+ ("site_id", "site"),
+ ):
+ if new_key in query_values:
+ query_values[old_key] = query_values.pop(new_key)
+
+ query = prepare_query_string(query_values)
+ response = self.get("{}{}".format(url, query))
+ response.raise_for_status()
+ return response.data
+
+ def get_addons_studio_settings(
+ self,
+ bundle_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True,
+ only_values=True
+ ):
+ """All addons settings in one bulk.
+
+ Warnings:
+ Behavior of this function changed with AYON server version 0.3.0.
+ Structure of the server output changed. When using
+ 'only_values=True' the output should be the same as before.
+
+ Args:
+ bundle_name (Optional[str]): Name of bundle for which settings
+ should be received.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+ use_site (bool): Set to 'False' to disable site overrides.
+ In that case no site overrides are applied.
+ only_values (Optional[bool]): Output will contain only settings
+ values without metadata about addons.
+
+ Returns:
+ dict[str, Any]: Settings of all addons on server.
+ """
+
+ output = self.get_bundle_settings(
+ bundle_name=bundle_name,
+ variant=variant,
+ site_id=site_id,
+ use_site=use_site
+ )
+ if only_values:
+ major, minor, patch, _, _ = self.server_version_tuple
+ if major == 0 and minor >= 3:
+ output = {
+ addon["name"]: addon["settings"]
+ for addon in output["addons"]
+ }
+ else:
+ # Backward compatibility for AYON server < 0.3.0
+ output = output["settings"]
+ return output
+
+ def get_addons_project_settings(
+ self,
+ project_name,
+ bundle_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True,
+ only_values=True
+ ):
+ """Project settings of all addons.
+
+ Server returns information about used addon versions, so full output
+ looks like:
+ {
+ "settings": {...},
+ "addons": {...}
+ }
+
+ The output can be limited to only values using the 'only_values'
+ argument, which is 'True' by default. In that case the output
+ contains only the value of the 'settings' key.
+
+ Warnings:
+ Behavior of this function changed with AYON server version 0.3.0.
+ Structure of the server output changed. When using
+ 'only_values=True' the output should be the same as before.
+
+ Args:
+ project_name (str): Name of project for which settings are
+ received.
+ bundle_name (Optional[str]): Name of bundle for which settings
+ should be received.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+ use_site (bool): Set to 'False' to disable site overrides.
+ In that case no site overrides are applied.
+ only_values (Optional[bool]): Output will contain only settings
+ values without metadata about addons.
+
+ Returns:
+ dict[str, Any]: Settings of all addons on server for passed
+ project.
+ """
+
+ if not project_name:
+ raise ValueError("Project name must be passed.")
+
+ output = self.get_bundle_settings(
+ project_name=project_name,
+ bundle_name=bundle_name,
+ variant=variant,
+ site_id=site_id,
+ use_site=use_site
+ )
+ if only_values:
+ major, minor, patch, _, _ = self.server_version_tuple
+ if major == 0 and minor >= 3:
+ output = {
+ addon["name"]: addon["settings"]
+ for addon in output["addons"]
+ }
+ else:
+ # Backward compatibility for AYON server < 0.3.0
+ output = output["settings"]
+ return output
+
+ def get_addons_settings(
+ self,
+ bundle_name=None,
+ project_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True,
+ only_values=True
+ ):
+ """Universal function to receive all addon settings.
+
+ Receives studio settings when 'project_name' is not passed, otherwise
+ project settings. 'site_id' is ignored when project is not passed.
+
+ Warnings:
+ Behavior of this function changed with AYON server version 0.3.0.
+ Structure of the server output changed. When using
+ 'only_values=True' the output should be the same as before.
+
+ Args:
+ bundle_name (Optional[str]): Name of bundle for which settings
+ should be received.
+ project_name (Optional[str]): Name of project for which settings
+ should be received.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+ use_site (Optional[bool]): Set to 'False' to disable site
+ overrides. In that case no site overrides are applied.
+ only_values (Optional[bool]): Only settings values will be
+ returned. Defaults to 'True'.
+
+ Returns:
+ dict[str, Any]: Settings of all addons.
+ """
+
+ if project_name is None:
+ return self.get_addons_studio_settings(
+ bundle_name=bundle_name,
+ variant=variant,
+ site_id=site_id,
+ use_site=use_site,
+ only_values=only_values
+ )
+
+ return self.get_addons_project_settings(
+ project_name=project_name,
+ bundle_name=bundle_name,
+ variant=variant,
+ site_id=site_id,
+ use_site=use_site,
+ only_values=only_values
+ )
+
+ def get_secrets(self):
+ """Get all secrets.
+
+ Example output:
+ [
+ {
+ "name": "secret_1",
+ "value": "secret_value_1",
+ },
+ {
+ "name": "secret_2",
+ "value": "secret_value_2",
+ }
+ ]
+
+ Returns:
+ list[dict[str, str]]: List of secret entities.
+ """
+
+ response = self.get("secrets")
+ response.raise_for_status()
+ return response.data
+
+ def get_secret(self, secret_name):
+ """Get secret by name.
+
+ Example output:
+ {
+ "name": "secret_name",
+ "value": "secret_value",
+ }
+
+ Args:
+ secret_name (str): Name of secret.
+
+ Returns:
+ dict[str, str]: Secret entity data.
+ """
+
+ response = self.get("secrets/{}".format(secret_name))
+ response.raise_for_status()
+ return response.data
+
+ def save_secret(self, secret_name, secret_value):
+ """Save secret.
+
+ This endpoint can create and update a secret.
+
+ Args:
+ secret_name (str): Name of secret.
+ secret_value (str): Value of secret.
+ """
+
+ response = self.put(
+ "secrets/{}".format(secret_name),
+ name=secret_name,
+ value=secret_value,
+ )
+ response.raise_for_status()
+ return response.data
+
+ def delete_secret(self, secret_name):
+ """Delete secret by name.
+
+ Args:
+ secret_name (str): Name of secret to delete.
+ """
+
+ response = self.delete("secrets/{}".format(secret_name))
+ response.raise_for_status()
+ return response.data
+
+ # Entity getters
+ def get_rest_project(self, project_name):
+ """Query project by name.
+
+ This call returns project with anatomy data.
+
+ Args:
+ project_name (str): Name of project.
+
+ Returns:
+ Union[dict[str, Any], None]: Project entity data or 'None' if
+ project was not found.
+ """
+
+ if not project_name:
+ return None
+
+ response = self.get("projects/{}".format(project_name))
+ if response.status == 200:
+ return response.data
+ return None
+
+ def get_rest_projects(self, active=True, library=None):
+ """Query available project entities.
+
+ User must be logged in.
+
+ Args:
+ active (Optional[bool]): Filter active/inactive projects. Both
+ are returned if 'None' is passed.
+ library (Optional[bool]): Filter standard/library projects. Both
+ are returned if 'None' is passed.
+
+ Returns:
+ Generator[dict[str, Any]]: Available projects.
+ """
+
+ for project_name in self.get_project_names(active, library):
+ project = self.get_rest_project(project_name)
+ if project:
+ yield project
+
+ def get_rest_entity_by_id(self, project_name, entity_type, entity_id):
+ """Get entity using REST on a project by its id.
+
+ Args:
+ project_name (str): Name of project where entity is.
+ entity_type (Literal["folder", "task", "product", "version",
+ "representation"]): The entity type which should be received.
+ entity_id (str): Id of entity.
+
+ Returns:
+ Union[dict[str, Any], None]: Received entity data or 'None' if
+ entity was not found.
+ """
+
+ if not all((project_name, entity_type, entity_id)):
+ return None
+
+ entity_endpoint = "{}s".format(entity_type)
+ response = self.get("projects/{}/{}/{}".format(
+ project_name, entity_endpoint, entity_id
+ ))
+ if response.status == 200:
+ return response.data
+ return None
+
+ def get_rest_folder(self, project_name, folder_id):
+ return self.get_rest_entity_by_id(project_name, "folder", folder_id)
+
+ def get_rest_task(self, project_name, task_id):
+ return self.get_rest_entity_by_id(project_name, "task", task_id)
+
+ def get_rest_product(self, project_name, product_id):
+ return self.get_rest_entity_by_id(project_name, "product", product_id)
+
+ def get_rest_version(self, project_name, version_id):
+ return self.get_rest_entity_by_id(project_name, "version", version_id)
+
+ def get_rest_representation(self, project_name, representation_id):
+ return self.get_rest_entity_by_id(
+ project_name, "representation", representation_id
+ )
+
+ def get_project_names(self, active=True, library=None):
+ """Receive available project names.
+
+ User must be logged in.
+
+ Args:
+ active (Optional[bool]): Filter active/inactive projects. Both
+ are returned if 'None' is passed.
+ library (Optional[bool]): Filter standard/library projects. Both
+ are returned if 'None' is passed.
+
+ Returns:
+ list[str]: List of available project names.
+ """
+
+ query_keys = {}
+ if active is not None:
+ query_keys["active"] = "true" if active else "false"
+
+ if library is not None:
+ query_keys["library"] = "true" if library else "false"
+ query = ""
+ if query_keys:
+ # Query params must be joined with '&'
+ query = "?{}".format("&".join([
+ "{}={}".format(key, value)
+ for key, value in query_keys.items()
+ ]))
+
+ response = self.get("projects{}".format(query))
+ response.raise_for_status()
+ data = response.data
+ project_names = []
+ if data:
+ for project in data["projects"]:
+ project_names.append(project["name"])
+ return project_names
+
+ def get_projects(
+ self, active=True, library=None, fields=None, own_attributes=False
+ ):
+ """Get projects.
+
+ Args:
+ active (Optional[bool]): Filter active or inactive projects.
+ Filter is disabled when 'None' is passed.
+ library (Optional[bool]): Filter library projects. Filter is
+ disabled when 'None' is passed.
+ fields (Optional[Iterable[str]]): Fields to be queried
+ for project.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Generator[dict[str, Any]]: Queried projects.
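+
+ Example:
+ Iterate over active projects; the queried fields are chosen
+ only for illustration:
+
+ >>> for project in api.get_projects(fields={"name", "code"}):
+ ... print(project["name"])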
+ """ + + if fields is None: + use_rest = True + else: + use_rest = False + fields = set(fields) + for field in fields: + if field.startswith("config"): + use_rest = True + break + + if use_rest: + for project in self.get_rest_projects(active, library): + if own_attributes: + fill_own_attribs(project) + yield project + + else: + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("project") + + if own_attributes: + fields.add("ownAttrib") + + query = projects_graphql_query(fields) + for parsed_data in query.continuous_query(self): + for project in parsed_data["projects"]: + if own_attributes: + fill_own_attribs(project) + yield project + + def get_project(self, project_name, fields=None, own_attributes=False): + """Get project. + + Args: + project_name (str): Name of project. + fields (Optional[Iterable[str]]): fields to be queried + for project. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Project entity data or None + if project was not found. + """ + + use_rest = True + if fields is not None: + use_rest = False + _fields = set() + for field in fields: + if field.startswith("config") or field == "data": + use_rest = True + break + _fields.add(field) + + fields = _fields + + if use_rest: + project = self.get_rest_project(project_name) + if own_attributes: + fill_own_attribs(project) + return project + + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("project") + + if own_attributes: + fields.add("ownAttrib") + query = project_graphql_query(fields) + query.set_variable_value("projectName", project_name) + + parsed_data = query.query(self) + + project = parsed_data["project"] + if project is not None: + project["name"] = project_name + if own_attributes: + fill_own_attribs(project) + return project + + def get_folders_hierarchy( + self, + project_name, + search_string=None, + folder_types=None + ): + """Get project hierarchy. + + All folders in project in hierarchy data structure. + + Example output: + { + "hierarchy": [ + { + "id": "...", + "name": "...", + "label": "...", + "status": "...", + "folderType": "...", + "hasTasks": False, + "taskNames": [], + "parents": [], + "parentId": None, + "children": [...children folders...] + }, + ... + ] + } + + Args: + project_name (str): Project where to look for folders. + search_string (Optional[str]): Search string to filter folders. + folder_types (Optional[Iterable[str]]): Folder types to filter. + + Returns: + dict[str, Any]: Response data from server. + """ + + if folder_types: + folder_types = ",".join(folder_types) + + query_fields = [ + "{}={}".format(key, value) + for key, value in ( + ("search", search_string), + ("types", folder_types), + ) + if value + ] + query = "" + if query_fields: + query = "?{}".format(",".join(query_fields)) + + response = self.get( + "projects/{}/hierarchy{}".format(project_name, query) + ) + response.raise_for_status() + return response.data + + def get_folders( + self, + project_name, + folder_ids=None, + folder_paths=None, + folder_names=None, + parent_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query folders from server. + + Todos: + Folder name won't be unique identifier, so we should add folder path + filtering. + + Notes: + Filter 'active' don't have direct filter in GraphQl. + + Args: + project_name (str): Name of project. 
+ folder_ids (Optional[Iterable[str]]): Folder ids to filter. + folder_paths (Optional[Iterable[str]]): Folder paths used + for filtering. + folder_names (Optional[Iterable[str]]): Folder names used + for filtering. + parent_ids (Optional[Iterable[str]]): Ids of folder parents. + Use 'None' if folder is direct child of project. + active (Optional[bool]): Filter active/inactive folders. + Both are returned if is set to None. + fields (Optional[Iterable[str]]): Fields to be queried for + folder. All possible folder fields are returned + if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried folder entities. + """ + + if not project_name: + return + + filters = { + "projectName": project_name + } + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return + filters["folderIds"] = list(folder_ids) + + if folder_paths is not None: + folder_paths = set(folder_paths) + if not folder_paths: + return + filters["folderPaths"] = list(folder_paths) + + if folder_names is not None: + folder_names = set(folder_names) + if not folder_names: + return + filters["folderNames"] = list(folder_names) + + if parent_ids is not None: + parent_ids = set(parent_ids) + if not parent_ids: + return + if None in parent_ids: + # Replace 'None' with '"root"' which is used during GraphQl + # query for parent ids filter for folders without folder + # parent + parent_ids.remove(None) + parent_ids.add("root") + + if project_name in parent_ids: + # Replace project name with '"root"' which is used during + # GraphQl query for parent ids filter for folders without + # folder parent + parent_ids.remove(project_name) + parent_ids.add("root") + + filters["parentFolderIds"] = list(parent_ids) + + if not fields: + fields = self.get_default_fields_for_type("folder") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("folder") + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if active is not None: + fields.add("active") + + if own_attributes and not use_rest: + fields.add("ownAttrib") + + query = folders_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for folder in parsed_data["project"]["folders"]: + if active is not None and active is not folder["active"]: + continue + + if use_rest: + folder = self.get_rest_folder(project_name, folder["id"]) + + if own_attributes: + fill_own_attribs(folder) + yield folder + + def get_folder_by_id( + self, + project_name, + folder_id, + fields=None, + own_attributes=False + ): + """Query folder entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_id (str): Folder id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. 
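+
+ Example:
+ A minimal sketch with a made-up folder id:
+
+ >>> folder = api.get_folder_by_id(
+ ... "my_project", "af3e4b3e0c0111ee9776", fields={"path"}
+ ... )
+ >>> if folder is None:
+ ... print("Folder was not found")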
+ """ + + folders = self.get_folders( + project_name, + folder_ids=[folder_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_by_path( + self, + project_name, + folder_path, + fields=None, + own_attributes=False + ): + """Query folder entity by path. + + Folder path is a path to folder with all parent names joined by slash. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_path (str): Folder path. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. + """ + + folders = self.get_folders( + project_name, + folder_paths=[folder_path], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_by_name( + self, + project_name, + folder_name, + fields=None, + own_attributes=False + ): + """Query folder entity by path. + + Warnings: + Folder name is not a unique identifier of a folder. Function is + kept for OpenPype 3 compatibility. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_name (str): Folder name. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. + """ + + folders = self.get_folders( + project_name, + folder_names=[folder_name], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_ids_with_products(self, project_name, folder_ids=None): + """Find folders which have at least one product. + + Folders that have at least one product should be immutable, so they + should not change path -> change of name or name of any parent + is not possible. + + Args: + project_name (str): Name of project. + folder_ids (Optional[Iterable[str]]): Limit folder ids filtering + to a set of folders. If set to None all folders on project are + checked. + + Returns: + set[str]: Folder ids that have at least one product. + """ + + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return set() + + query = folders_graphql_query({"id"}) + query.set_variable_value("projectName", project_name) + query.set_variable_value("folderHasProducts", True) + if folder_ids: + query.set_variable_value("folderIds", list(folder_ids)) + + parsed_data = query.query(self) + folders = parsed_data["project"]["folders"] + return { + folder["id"] + for folder in folders + } + + def get_tasks( + self, + project_name, + task_ids=None, + task_names=None, + task_types=None, + folder_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query task entities from server. + + Args: + project_name (str): Name of project. + task_ids (Iterable[str]): Task ids to filter. + task_names (Iterable[str]): Task names used for filtering. + task_types (Iterable[str]): Task types used for filtering. + folder_ids (Iterable[str]): Ids of task parents. Use 'None' + if folder is direct child of project. 
+            active (Optional[bool]): Filter active/inactive tasks.
+                Both are returned if set to None.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                task. All possible task fields are returned
+                if 'None' is passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Generator[dict[str, Any]]: Queried task entities.
+        """
+
+        if not project_name:
+            return
+
+        filters = {
+            "projectName": project_name
+        }
+
+        if task_ids is not None:
+            task_ids = set(task_ids)
+            if not task_ids:
+                return
+            filters["taskIds"] = list(task_ids)
+
+        if task_names is not None:
+            task_names = set(task_names)
+            if not task_names:
+                return
+            filters["taskNames"] = list(task_names)
+
+        if task_types is not None:
+            task_types = set(task_types)
+            if not task_types:
+                return
+            filters["taskTypes"] = list(task_types)
+
+        if folder_ids is not None:
+            folder_ids = set(folder_ids)
+            if not folder_ids:
+                return
+            filters["folderIds"] = list(folder_ids)
+
+        if not fields:
+            fields = self.get_default_fields_for_type("task")
+        else:
+            fields = set(fields)
+            if "attrib" in fields:
+                fields.remove("attrib")
+                fields |= self.get_attributes_fields_for_type("task")
+
+        use_rest = False
+        if "data" in fields:
+            use_rest = True
+            fields = {"id"}
+
+        if active is not None:
+            fields.add("active")
+
+        if own_attributes:
+            fields.add("ownAttrib")
+
+        query = tasks_graphql_query(fields)
+        for attr, filter_value in filters.items():
+            query.set_variable_value(attr, filter_value)
+
+        for parsed_data in query.continuous_query(self):
+            for task in parsed_data["project"]["tasks"]:
+                if active is not None and active is not task["active"]:
+                    continue
+
+                if use_rest:
+                    task = self.get_rest_task(project_name, task["id"])
+
+                if own_attributes:
+                    fill_own_attribs(task)
+                yield task
+
+    def get_task_by_name(
+        self,
+        project_name,
+        folder_id,
+        task_name,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query task entity by name and folder id.
+
+        Args:
+            project_name (str): Name of project where to look for queried
+                entities.
+            folder_id (str): Folder id.
+            task_name (str): Task name.
+            fields (Optional[Iterable[str]]): Fields that should be returned.
+                All fields are returned if 'None' is passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict, None]: Task entity data or None if not found.
+        """
+
+        for task in self.get_tasks(
+            project_name,
+            folder_ids=[folder_id],
+            task_names=[task_name],
+            active=None,
+            fields=fields,
+            own_attributes=own_attributes
+        ):
+            return task
+        return None
+
+    def get_task_by_id(
+        self,
+        project_name,
+        task_id,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query task entity by id.
+
+        Args:
+            project_name (str): Name of project where to look for queried
+                entities.
+            task_id (str): Task id.
+            fields (Optional[Iterable[str]]): Fields that should be returned.
+                All fields are returned if 'None' is passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict, None]: Task entity data or None if not found.
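+
+        Example:
+            Illustrative usage only; 'con' stands for an instance of this
+            connection class and '<task_id>' is a placeholder id.
+            >>> task = con.get_task_by_id(
+            ...     "testProject", "<task_id>", fields={"id", "name"}
+            ... )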
+ """ + + for task in self.get_tasks( + project_name, + task_ids=[task_id], + active=None, + fields=fields, + own_attributes=own_attributes + ): + return task + return None + + def _filter_product( + self, project_name, product, active, own_attributes, use_rest + ): + if active is not None and product["active"] is not active: + return None + + if use_rest: + product = self.get_rest_product(project_name, product["id"]) + + if own_attributes: + fill_own_attribs(product) + + return product + + def get_products( + self, + project_name, + product_ids=None, + product_names=None, + folder_ids=None, + product_types=None, + statuses=None, + names_by_folder_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query products from server. + + Todos: + Separate 'name_by_folder_ids' filtering to separated method. It + cannot be combined with some other filters. + + Args: + project_name (str): Name of project. + product_ids (Optional[Iterable[str]]): Task ids to filter. + product_names (Optional[Iterable[str]]): Task names used for + filtering. + folder_ids (Optional[Iterable[str]]): Ids of task parents. + Use 'None' if folder is direct child of project. + product_types (Optional[Iterable[str]]): Product types used for + filtering. + statuses (Optional[Iterable[str]]): Product statuses used for + filtering. + names_by_folder_ids (Optional[dict[str, Iterable[str]]]): Product + name filtering by folder id. + active (Optional[bool]): Filter active/inactive products. + Both are returned if is set to None. + fields (Optional[Iterable[str]]): Fields to be queried for + folder. All possible folder fields are returned + if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried product entities. + """ + + if not project_name: + return + + if product_ids is not None: + product_ids = set(product_ids) + if not product_ids: + return + + filter_product_names = None + if product_names is not None: + filter_product_names = set(product_names) + if not filter_product_names: + return + + filter_folder_ids = None + if folder_ids is not None: + filter_folder_ids = set(folder_ids) + if not filter_folder_ids: + return + + filter_product_types = None + if product_types is not None: + filter_product_types = set(product_types) + if not filter_product_types: + return + + filter_statuses = None + if statuses is not None: + filter_statuses = set(statuses) + if not filter_statuses: + return + + # This will disable 'folder_ids' and 'product_names' filters + # - maybe could be enhanced in future? 
+ if names_by_folder_ids is not None: + filter_product_names = set() + filter_folder_ids = set() + + for folder_id, names in names_by_folder_ids.items(): + if folder_id and names: + filter_folder_ids.add(folder_id) + filter_product_names |= set(names) + + if not filter_product_names or not filter_folder_ids: + return + + # Convert fields and add minimum required fields + if fields: + fields = set(fields) | {"id"} + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("product") + else: + fields = self.get_default_fields_for_type("product") + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if active is not None: + fields.add("active") + + if own_attributes: + fields.add("ownAttrib") + + # Add 'name' and 'folderId' if 'names_by_folder_ids' filter is entered + if names_by_folder_ids: + fields.add("name") + fields.add("folderId") + + # Prepare filters for query + filters = { + "projectName": project_name + } + if filter_folder_ids: + filters["folderIds"] = list(filter_folder_ids) + + if filter_product_types: + filters["productTypes"] = list(filter_product_types) + + if filter_statuses: + filters["statuses"] = list(filter_statuses) + + if product_ids: + filters["productIds"] = list(product_ids) + + if filter_product_names: + filters["productNames"] = list(filter_product_names) + + query = products_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + parsed_data = query.query(self) + + products = parsed_data.get("project", {}).get("products", []) + # Filter products by 'names_by_folder_ids' + if names_by_folder_ids: + products_by_folder_id = collections.defaultdict(list) + for product in products: + filtered_product = self._filter_product( + project_name, product, active, own_attributes, use_rest + ) + if filtered_product is not None: + folder_id = filtered_product["folderId"] + products_by_folder_id[folder_id].append(filtered_product) + + for folder_id, names in names_by_folder_ids.items(): + for folder_product in products_by_folder_id[folder_id]: + if folder_product["name"] in names: + yield folder_product + + else: + for product in products: + filtered_product = self._filter_product( + project_name, product, active, own_attributes, use_rest + ) + if filtered_product is not None: + yield filtered_product + + def get_product_by_id( + self, + project_name, + product_id, + fields=None, + own_attributes=False + ): + """Query product entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + product_id (str): Product id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Product entity data or None if was not found. + """ + + products = self.get_products( + project_name, + product_ids=[product_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for product in products: + return product + return None + + def get_product_by_name( + self, + project_name, + product_name, + folder_id, + fields=None, + own_attributes=False + ): + """Query product entity by name and folder id. + + Args: + project_name (str): Name of project where to look for queried + entities. + product_name (str): Product name. + folder_id (str): Folder id (Folder is a parent of products). 
+ fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Product entity data or None if was not found. + """ + + products = self.get_products( + project_name, + product_names=[product_name], + folder_ids=[folder_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for product in products: + return product + return None + + def get_product_types(self, fields=None): + """Types of products. + + This is server wide information. Product types have 'name', 'icon' and + 'color'. + + Args: + fields (Optional[Iterable[str]]): Product types fields to query. + + Returns: + list[dict[str, Any]]: Product types information. + """ + + if not fields: + fields = self.get_default_fields_for_type("productType") + + query = product_types_query(fields) + + parsed_data = query.query(self) + + return parsed_data.get("productTypes", []) + + def get_project_product_types(self, project_name, fields=None): + """Types of products available on a project. + + Filter only product types available on project. + + Args: + project_name (str): Name of project where to look for + product types. + fields (Optional[Iterable[str]]): Product types fields to query. + + Returns: + list[dict[str, Any]]: Product types information. + """ + + if not fields: + fields = self.get_default_fields_for_type("productType") + + query = project_product_types_query(fields) + query.set_variable_value("projectName", project_name) + + parsed_data = query.query(self) + + return parsed_data.get("project", {}).get("productTypes", []) + + def get_product_type_names(self, project_name=None, product_ids=None): + """Product type names. + + Warnings: + This function will be probably removed. Matters if 'products_id' + filter has real use-case. + + Args: + project_name (Optional[str]): Name of project where to look for + queried entities. + product_ids (Optional[Iterable[str]]): Product ids filter. Can be + used only with 'project_name'. + + Returns: + set[str]: Product type names. + """ + + if project_name and product_ids: + products = self.get_products( + project_name, + product_ids=product_ids, + fields=["productType"], + active=None, + ) + return { + product["productType"] + for product in products + } + + return { + product_info["name"] + for product_info in self.get_project_product_types( + project_name, fields=["name"] + ) + } + + def get_versions( + self, + project_name, + version_ids=None, + product_ids=None, + versions=None, + hero=True, + standard=True, + latest=None, + active=True, + fields=None, + own_attributes=False + ): + """Get version entities based on passed filters from server. + + Args: + project_name (str): Name of project where to look for versions. + version_ids (Optional[Iterable[str]]): Version ids used for + version filtering. + product_ids (Optional[Iterable[str]]): Product ids used for + version filtering. + versions (Optional[Iterable[int]]): Versions we're interested in. + hero (Optional[bool]): Receive also hero versions when set to true. + standard (Optional[bool]): Receive versions which are not hero when + set to true. + latest (Optional[bool]): Return only latest version of standard + versions. This can be combined only with 'standard' attribute + set to True. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. 
+ fields (Optional[Iterable[str]]): Fields to be queried + for version. All possible folder fields are returned + if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried version entities. + """ + + if not fields: + fields = self.get_default_fields_for_type("version") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("version") + + if active is not None: + fields.add("active") + + # Make sure fields have minimum required fields + fields |= {"id", "version"} + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if own_attributes: + fields.add("ownAttrib") + + filters = { + "projectName": project_name + } + if version_ids is not None: + version_ids = set(version_ids) + if not version_ids: + return + filters["versionIds"] = list(version_ids) + + if product_ids is not None: + product_ids = set(product_ids) + if not product_ids: + return + filters["productIds"] = list(product_ids) + + # TODO versions can't be used as filter at this moment! + if versions is not None: + versions = set(versions) + if not versions: + return + filters["versions"] = list(versions) + + if not hero and not standard: + return + + queries = [] + # Add filters based on 'hero' and 'standard' + # NOTE: There is not a filter to "ignore" hero versions or to get + # latest and hero version + # - if latest and hero versions should be returned it must be done in + # 2 graphql queries + if standard and not latest: + # This query all versions standard + hero + # - hero must be filtered out if is not enabled during loop + query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + queries.append(query) + else: + if hero: + # Add hero query if hero is enabled + hero_query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + hero_query.set_variable_value(attr, filter_value) + + hero_query.set_variable_value("heroOnly", True) + queries.append(hero_query) + + if standard: + standard_query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + standard_query.set_variable_value(attr, filter_value) + + if latest: + standard_query.set_variable_value("latestOnly", True) + queries.append(standard_query) + + for query in queries: + for parsed_data in query.continuous_query(self): + for version in parsed_data["project"]["versions"]: + if active is not None and version["active"] is not active: + continue + + if not hero and version["version"] < 0: + continue + + if use_rest: + version = self.get_rest_version( + project_name, version["id"] + ) + + if own_attributes: + fill_own_attribs(version) + + yield version + + def get_version_by_id( + self, + project_name, + version_id, + fields=None, + own_attributes=False + ): + """Query version entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + version_id (str): Version id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. 
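+
+        Example:
+            Illustrative usage only; 'con' stands for an instance of this
+            connection class and '<version_id>' is a placeholder id.
+            >>> version = con.get_version_by_id(
+            ...     "testProject", "<version_id>", fields={"id", "version"}
+            ... )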
+ """ + + versions = self.get_versions( + project_name, + version_ids=[version_id], + active=None, + hero=True, + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_version_by_name( + self, + project_name, + version, + product_id, + fields=None, + own_attributes=False + ): + """Query version entity by version and product id. + + Args: + project_name (str): Name of project where to look for queried + entities. + version (int): Version of version entity. + product_id (str): Product id. Product is a parent of version. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. + """ + + versions = self.get_versions( + project_name, + product_ids=[product_id], + versions=[version], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_hero_version_by_id( + self, + project_name, + version_id, + fields=None, + own_attributes=False + ): + """Query hero version entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + version_id (int): Hero version id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. + """ + + versions = self.get_hero_versions( + project_name, + version_ids=[version_id], + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_hero_version_by_product_id( + self, + project_name, + product_id, + fields=None, + own_attributes=False + ): + """Query hero version entity by product id. + + Only one hero version is available on a product. + + Args: + project_name (str): Name of project where to look for queried + entities. + product_id (int): Product id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. + """ + + versions = self.get_hero_versions( + project_name, + product_ids=[product_id], + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_hero_versions( + self, + project_name, + product_ids=None, + version_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query hero versions by multiple filters. + + Only one hero version is available on a product. + + Args: + project_name (str): Name of project where to look for queried + entities. + product_ids (Optional[Iterable[str]]): Product ids. + version_ids (Optional[Iterable[str]]): Version ids. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. 
+
+        Returns:
+            Generator[dict[str, Any]]: Queried hero version entities.
+        """
+
+        return self.get_versions(
+            project_name,
+            version_ids=version_ids,
+            product_ids=product_ids,
+            hero=True,
+            standard=False,
+            active=active,
+            fields=fields,
+            own_attributes=own_attributes
+        )
+
+    def get_last_versions(
+        self,
+        project_name,
+        product_ids,
+        active=True,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query last version entities by product ids.
+
+        Args:
+            project_name (str): Project where to look for versions.
+            product_ids (Iterable[str]): Product ids.
+            active (Optional[bool]): Receive active/inactive entities.
+                Both are returned when 'None' is passed.
+            fields (Optional[Iterable[str]]): Fields to be queried
+                for versions.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            dict[str, dict[str, Any]]: Last versions by product id.
+        """
+
+        if fields:
+            fields = set(fields)
+            # 'productId' is required to map output versions by product id
+            fields.add("productId")
+
+        versions = self.get_versions(
+            project_name,
+            product_ids=product_ids,
+            latest=True,
+            active=active,
+            fields=fields,
+            own_attributes=own_attributes
+        )
+        return {
+            version["productId"]: version
+            for version in versions
+        }
+
+    def get_last_version_by_product_id(
+        self,
+        project_name,
+        product_id,
+        active=True,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query last version entity by product id.
+
+        Args:
+            project_name (str): Project where to look for versions.
+            product_id (str): Product id.
+            active (Optional[bool]): Receive active/inactive entities.
+                Both are returned when 'None' is passed.
+            fields (Optional[Iterable[str]]): Fields to be queried
+                for versions.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Queried version entity or None.
+        """
+
+        versions = self.get_versions(
+            project_name,
+            product_ids=[product_id],
+            latest=True,
+            active=active,
+            fields=fields,
+            own_attributes=own_attributes
+        )
+        for version in versions:
+            return version
+        return None
+
+    def get_last_version_by_product_name(
+        self,
+        project_name,
+        product_name,
+        folder_id,
+        active=True,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query last version entity by product name and folder id.
+
+        Args:
+            project_name (str): Project where to look for versions.
+            product_name (str): Product name.
+            folder_id (str): Folder id.
+            active (Optional[bool]): Receive active/inactive entities.
+                Both are returned when 'None' is passed.
+            fields (Optional[Iterable[str]]): Fields to be queried
+                for versions.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Queried version entity or None.
+        """
+
+        if not folder_id:
+            return None
+
+        product = self.get_product_by_name(
+            project_name, product_name, folder_id, fields=["id"]
+        )
+        if not product:
+            return None
+        return self.get_last_version_by_product_id(
+            project_name,
+            product["id"],
+            active=active,
+            fields=fields,
+            own_attributes=own_attributes
+        )
+
+    def version_is_latest(self, project_name, version_id):
+        """Check whether version is the latest version of its product.
+
+        Args:
+            project_name (str): Project where to look for the version.
+            version_id (str): Version id.
+
+        Returns:
+            bool: Version is latest or not.
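+
+        Example:
+            Illustrative usage only; 'con' stands for an instance of this
+            connection class and '<version_id>' is a placeholder id.
+            >>> is_latest = con.version_is_latest(
+            ...     "testProject", "<version_id>"
+            ... )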
+ """ + + query = GraphQlQuery("VersionIsLatest") + project_name_var = query.add_variable( + "projectName", "String!", project_name + ) + version_id_var = query.add_variable( + "versionId", "String!", version_id + ) + project_query = query.add_field("project") + project_query.set_filter("name", project_name_var) + version_query = project_query.add_field("version") + version_query.set_filter("id", version_id_var) + product_query = version_query.add_field("product") + latest_version_query = product_query.add_field("latestVersion") + latest_version_query.add_field("id") + + parsed_data = query.query(self) + latest_version = ( + parsed_data["project"]["version"]["product"]["latestVersion"] + ) + return latest_version["id"] == version_id + + def get_representations( + self, + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + names_by_version_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Get representation entities based on passed filters from server. + + Todos: + Add separated function for 'names_by_version_ids' filtering. + Because can't be combined with others. + + Args: + project_name (str): Name of project where to look for versions. + representation_ids (Optional[Iterable[str]]): Representation ids + used for representation filtering. + representation_names (Optional[Iterable[str]]): Representation + names used for representation filtering. + version_ids (Optional[Iterable[str]]): Version ids used for + representation filtering. Versions are parents of + representations. + names_by_version_ids (Optional[bool]): Find representations + by names and version ids. This filter discard all + other filters. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. + fields (Optional[Iterable[str]]): Fields to be queried for + representation. All possible fields are returned if 'None' is + passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried representation entities. 
+ """ + + if not fields: + fields = self.get_default_fields_for_type("representation") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("representation") + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if active is not None: + fields.add("active") + + if own_attributes: + fields.add("ownAttrib") + + filters = { + "projectName": project_name + } + + if representation_ids is not None: + representation_ids = set(representation_ids) + if not representation_ids: + return + filters["representationIds"] = list(representation_ids) + + version_ids_filter = None + representaion_names_filter = None + if names_by_version_ids is not None: + version_ids_filter = set() + representaion_names_filter = set() + for version_id, names in names_by_version_ids.items(): + version_ids_filter.add(version_id) + representaion_names_filter |= set(names) + + if not version_ids_filter or not representaion_names_filter: + return + + else: + if representation_names is not None: + representaion_names_filter = set(representation_names) + if not representaion_names_filter: + return + + if version_ids is not None: + version_ids_filter = set(version_ids) + if not version_ids_filter: + return + + if version_ids_filter: + filters["versionIds"] = list(version_ids_filter) + + if representaion_names_filter: + filters["representationNames"] = list(representaion_names_filter) + + query = representations_graphql_query(fields) + + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for repre in parsed_data["project"]["representations"]: + if active is not None and active is not repre["active"]: + continue + + if use_rest: + repre = self.get_rest_representation( + project_name, repre["id"] + ) + + if "context" in repre: + orig_context = repre["context"] + context = {} + if orig_context and orig_context != "null": + context = json.loads(orig_context) + repre["context"] = context + + if own_attributes: + fill_own_attribs(repre) + yield repre + + def get_representation_by_id( + self, + project_name, + representation_id, + fields=None, + own_attributes=False + ): + """Query representation entity from server based on id filter. + + Args: + project_name (str): Project where to look for representation. + representation_id (str): Id of representation. + fields (Optional[Iterable[str]]): fields to be queried + for representations. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Queried representation entity or None. + """ + + representations = self.get_representations( + project_name, + representation_ids=[representation_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for representation in representations: + return representation + return None + + def get_representation_by_name( + self, + project_name, + representation_name, + version_id, + fields=None, + own_attributes=False + ): + """Query representation entity by name and version id. + + Args: + project_name (str): Project where to look for representation. + representation_name (str): Representation name. + version_id (str): Version id. + fields (Optional[Iterable[str]]): fields to be queried + for representations. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. 
+ + Returns: + Union[dict[str, Any], None]: Queried representation entity or None. + """ + + representations = self.get_representations( + project_name, + representation_names=[representation_name], + version_ids=[version_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for representation in representations: + return representation + return None + + def get_representations_parents(self, project_name, representation_ids): + """Find representations parents by representation id. + + Representation parent entities up to project. + + Args: + project_name (str): Project where to look for entities. + representation_ids (Iterable[str]): Representation ids. + + Returns: + dict[str, RepresentationParents]: Parent entities by + representation id. + """ + + if not representation_ids: + return {} + + project = self.get_project(project_name) + repre_ids = set(representation_ids) + output = { + repre_id: RepresentationParents(None, None, None, None) + for repre_id in representation_ids + } + + version_fields = self.get_default_fields_for_type("version") + product_fields = self.get_default_fields_for_type("product") + folder_fields = self.get_default_fields_for_type("folder") + + query = representations_parents_qraphql_query( + version_fields, product_fields, folder_fields + ) + query.set_variable_value("projectName", project_name) + query.set_variable_value("representationIds", list(repre_ids)) + + parsed_data = query.query(self) + for repre in parsed_data["project"]["representations"]: + repre_id = repre["id"] + version = repre.pop("version") + product = version.pop("product") + folder = product.pop("folder") + output[repre_id] = RepresentationParents( + version, product, folder, project + ) + + return output + + def get_representation_parents(self, project_name, representation_id): + """Find representation parents by representation id. + + Representation parent entities up to project. + + Args: + project_name (str): Project where to look for entities. + representation_id (str): Representation id. + + Returns: + RepresentationParents: Representation parent entities. + """ + + if not representation_id: + return None + + parents_by_repre_id = self.get_representations_parents( + project_name, [representation_id] + ) + return parents_by_repre_id[representation_id] + + def get_repre_ids_by_context_filters( + self, + project_name, + context_filters, + representation_names=None, + version_ids=None + ): + """Find representation ids which match passed context filters. + + Each representation has context integrated on representation entity in + database. The context may contain project, folder, task name or + product name, product type and many more. This implementation gives + option to quickly filter representation based on representation data + in database. + + Context filters have defined structure. To define filter of nested + subfield use dot '.' as delimiter (For example 'task.name'). + Filter values can be regex filters. String or 're.Pattern' can be used. + + Args: + project_name (str): Project where to look for representations. + context_filters (dict[str, list[str]]): Filters of context fields. + representation_names (Optional[Iterable[str]]): Representation + names, can be used as additional filter for representations + by their names. + version_ids (Optional[Iterable[str]]): Version ids, can be used + as additional filter for representations by their parent ids. + + Returns: + list[str]: Representation ids that match passed filters. 
+
+        Example:
+            The function returns just representation ids, so if entities
+            are required for further functionality they must be queried
+            afterwards by their ids.
+            >>> project_name = "testProject"
+            >>> filters = {
+            ...     "task.name": ["[aA]nimation"],
+            ...     "product": [".*[Mm]ain"]
+            ... }
+            >>> repre_ids = get_repre_ids_by_context_filters(
+            ...     project_name, filters)
+            >>> repres = get_representations(project_name, repre_ids)
+        """
+
+        if not isinstance(context_filters, dict):
+            raise TypeError(
+                "Expected 'dict' got {}".format(str(type(context_filters)))
+            )
+
+        filter_body = {}
+        if representation_names is not None:
+            if not representation_names:
+                return []
+            filter_body["names"] = list(set(representation_names))
+
+        if version_ids is not None:
+            if not version_ids:
+                return []
+            filter_body["versionIds"] = list(set(version_ids))
+
+        body_context_filters = []
+        for key, filters in context_filters.items():
+            if not isinstance(filters, (set, list, tuple)):
+                raise TypeError(
+                    "Expected 'set', 'list', 'tuple' got {}".format(
+                        str(type(filters))))
+
+            new_filters = set()
+            for filter_value in filters:
+                if isinstance(filter_value, PatternType):
+                    filter_value = filter_value.pattern
+                new_filters.add(filter_value)
+
+            body_context_filters.append({
+                "key": key,
+                "values": list(new_filters)
+            })
+
+        response = self.post(
+            "projects/{}/repreContextFilter".format(project_name),
+            context=body_context_filters,
+            **filter_body
+        )
+        response.raise_for_status()
+        return response.data["ids"]
+
+    def get_workfiles_info(
+        self,
+        project_name,
+        workfile_ids=None,
+        task_ids=None,
+        paths=None,
+        fields=None,
+        own_attributes=False
+    ):
+        """Workfile info entities by passed filters.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_ids (Optional[Iterable[str]]): Workfile ids.
+            task_ids (Optional[Iterable[str]]): Task ids.
+            paths (Optional[Iterable[str]]): Rootless workfiles paths.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Generator[dict[str, Any]]: Queried workfile info entities.
+        """
+
+        filters = {"projectName": project_name}
+        if task_ids is not None:
+            task_ids = set(task_ids)
+            if not task_ids:
+                return
+            filters["taskIds"] = list(task_ids)
+
+        if paths is not None:
+            paths = set(paths)
+            if not paths:
+                return
+            filters["paths"] = list(paths)
+
+        if workfile_ids is not None:
+            workfile_ids = set(workfile_ids)
+            if not workfile_ids:
+                return
+            filters["workfileIds"] = list(workfile_ids)
+
+        if not fields:
+            fields = self.get_default_fields_for_type("workfile")
+
+        fields = set(fields)
+        if "attrib" in fields:
+            fields.remove("attrib")
+            fields |= {
+                "attrib.{}".format(attr)
+                for attr in self.get_attributes_for_type("workfile")
+            }
+        if own_attributes:
+            fields.add("ownAttrib")
+
+        query = workfiles_info_graphql_query(fields)
+
+        for attr, filter_value in filters.items():
+            query.set_variable_value(attr, filter_value)
+
+        for parsed_data in query.continuous_query(self):
+            for workfile_info in parsed_data["project"]["workfiles"]:
+                if own_attributes:
+                    fill_own_attribs(workfile_info)
+                yield workfile_info
+
+    def get_workfile_info(
+        self, project_name, task_id, path, fields=None, own_attributes=False
+    ):
+        """Workfile info entity by task id and workfile path.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            task_id (str): Task id.
+            path (str): Rootless workfile path.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Workfile info entity or None.
+        """
+
+        if not task_id or not path:
+            return None
+
+        for workfile_info in self.get_workfiles_info(
+            project_name,
+            task_ids=[task_id],
+            paths=[path],
+            fields=fields,
+            own_attributes=own_attributes
+        ):
+            return workfile_info
+        return None
+
+    def get_workfile_info_by_id(
+        self, project_name, workfile_id, fields=None, own_attributes=False
+    ):
+        """Workfile info entity by id.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_id (str): Workfile info id.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Workfile info entity or None.
+        """
+
+        if not workfile_id:
+            return None
+
+        for workfile_info in self.get_workfiles_info(
+            project_name,
+            workfile_ids=[workfile_id],
+            fields=fields,
+            own_attributes=own_attributes
+        ):
+            return workfile_info
+        return None
+
+    def _prepare_thumbnail_content(self, project_name, response):
+        content = None
+        content_type = response.content_type
+
+        # The response is expected to contain a thumbnail id header,
+        # otherwise the content cannot be cached and a filepath returned
+        thumbnail_id = response.headers.get("X-Thumbnail-Id")
+        if thumbnail_id is not None:
+            content = response.content
+
+        return ThumbnailContent(
+            project_name, thumbnail_id, content, content_type
+        )
+
+    def get_thumbnail_by_id(self, project_name, thumbnail_id):
+        """Get thumbnail from server by id.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            thumbnail_id (str): Thumbnail id.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        response = self.raw_get(
+            "projects/{}/thumbnails/{}".format(
+                project_name,
+                thumbnail_id
+            )
+        )
+        return self._prepare_thumbnail_content(project_name, response)
+
+    def get_thumbnail(
+        self, project_name, entity_type, entity_id, thumbnail_id=None
+    ):
+        """Get thumbnail from server.
+
+        Permissions of thumbnails are related to entities, so thumbnails must
+        be queried per entity. Both an entity type and an entity id are
+        required to be passed.
+
+        Notes:
+            It is recommended to use one of the prepared entity type specific
+            methods 'get_folder_thumbnail', 'get_version_thumbnail' or
+            'get_workfile_thumbnail'.
+            We recommend passing the thumbnail id if you have access to it.
+            Each entity that allows thumbnails has a 'thumbnailId' field, so
+            it can be queried.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            entity_type (str): Entity type which passed entity id represents.
+            entity_id (str): Entity id for which thumbnail should be returned.
+            thumbnail_id (Optional[str]): DEPRECATED: Use
+                'get_thumbnail_by_id' instead.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        if thumbnail_id:
+            return self.get_thumbnail_by_id(project_name, thumbnail_id)
+
+        if entity_type in (
+            "folder",
+            "version",
+            "workfile",
+        ):
+            entity_type += "s"
+
+        response = self.raw_get("projects/{}/{}/{}/thumbnail".format(
+            project_name,
+            entity_type,
+            entity_id
+        ))
+        return self._prepare_thumbnail_content(project_name, response)
+
+    def get_folder_thumbnail(
+        self, project_name, folder_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for folder entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            folder_id (str): Folder id for which thumbnail should be returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Content is 'None'
+                if the entity does not have a thumbnail (or if user does
+                not have permissions).
+        """
+
+        return self.get_thumbnail(
+            project_name, "folder", folder_id, thumbnail_id
+        )
+
+    def get_version_thumbnail(
+        self, project_name, version_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for version entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            version_id (str): Version id for which thumbnail should be
+                returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Content is 'None'
+                if the entity does not have a thumbnail (or if user does
+                not have permissions).
+        """
+
+        return self.get_thumbnail(
+            project_name, "version", version_id, thumbnail_id
+        )
+
+    def get_workfile_thumbnail(
+        self, project_name, workfile_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for workfile entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_id (str): Workfile id for which thumbnail should be
+                returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Content is 'None'
+                if the entity does not have a thumbnail (or if user does
+                not have permissions).
+        """
+
+        return self.get_thumbnail(
+            project_name, "workfile", workfile_id, thumbnail_id
+        )
+
+    def _get_thumbnail_mime_type(self, thumbnail_path):
+        """Get thumbnail mime type on thumbnail creation based on source path.
+
+        Args:
+            thumbnail_path (str): Path to thumbnail source file.
+
+        Returns:
+            str: Mime type used for thumbnail creation.
+
+        Raises:
+            ValueError: Mime type cannot be determined.
+        """
+
+        ext = os.path.splitext(thumbnail_path)[-1].lower()
+        if ext == ".png":
+            return "image/png"
+
+        elif ext in (".jpeg", ".jpg"):
+            return "image/jpeg"
+
+        raise ValueError(
+            "Thumbnail source file has unknown extension {}".format(ext))
+
+    def create_thumbnail(self, project_name, src_filepath, thumbnail_id=None):
+        """Create new thumbnail on server from passed path.
+
+        Args:
+            project_name (str): Project where the thumbnail will be created
+                and can be used.
+            src_filepath (str): Filepath to thumbnail which should be uploaded.
+            thumbnail_id (Optional[str]): Prepared id of thumbnail.
+
+        Returns:
+            str: Created thumbnail id.
+
+        Raises:
+            ValueError: When thumbnail source cannot be processed.
+        """
+
+        if not os.path.exists(src_filepath):
+            raise ValueError("Entered filepath does not exist.")
+
+        if thumbnail_id:
+            self.update_thumbnail(
+                project_name,
+                thumbnail_id,
+                src_filepath
+            )
+            return thumbnail_id
+
+        mime_type = self._get_thumbnail_mime_type(src_filepath)
+        with open(src_filepath, "rb") as stream:
+            content = stream.read()
+
+        response = self.raw_post(
+            "projects/{}/thumbnails".format(project_name),
+            headers={"Content-Type": mime_type},
+            data=content
+        )
+        response.raise_for_status()
+        return response.data["id"]
+
+    def update_thumbnail(self, project_name, thumbnail_id, src_filepath):
+        """Change thumbnail content by id.
+
+        Update can be also used to create new thumbnail.
+
+        Args:
+            project_name (str): Project where the thumbnail will be created
+                and can be used.
+            thumbnail_id (str): Thumbnail id to update.
+            src_filepath (str): Filepath to thumbnail which should be uploaded.
+
+        Raises:
+            ValueError: When thumbnail source cannot be processed.
+        """
+
+        if not os.path.exists(src_filepath):
+            raise ValueError("Entered filepath does not exist.")
+
+        mime_type = self._get_thumbnail_mime_type(src_filepath)
+        with open(src_filepath, "rb") as stream:
+            content = stream.read()
+
+        response = self.raw_put(
+            "projects/{}/thumbnails/{}".format(project_name, thumbnail_id),
+            headers={"Content-Type": mime_type},
+            data=content
+        )
+        response.raise_for_status()
+
+    def create_project(
+        self,
+        project_name,
+        project_code,
+        library_project=False,
+        preset_name=None
+    ):
+        """Create project using Ayon settings.
+
+        This function does not validate the project entity on creation; the
+        project entity is created blindly with only the minimum required
+        information, which is name and code.
+
+        Entered project name must be unique and project must not exist yet.
+
+        Note:
+            This function is here to be OP v4 ready, but in v3 it has more
+            logic to do. That's why inner imports are in the body.
+
+        Args:
+            project_name (str): New project name. Should be unique.
+            project_code (str): Project's code should be unique too.
+            library_project (Optional[bool]): Project is library project.
+            preset_name (Optional[str]): Name of anatomy preset. Default is
+                used if not passed.
+
+        Raises:
+            ValueError: When project name already exists.
+
+        Returns:
+            dict[str, Any]: Created project entity.
+        """
+
+        if self.get_project(project_name):
+            raise ValueError("Project with name \"{}\" already exists".format(
+                project_name
+            ))
+
+        if not PROJECT_NAME_REGEX.match(project_name):
+            raise ValueError((
+                "Project name \"{}\" contains invalid characters"
+            ).format(project_name))
+
+        preset = self.get_project_anatomy_preset(preset_name)
+
+        result = self.post(
+            "projects",
+            name=project_name,
+            code=project_code,
+            anatomy=preset,
+            library=library_project
+        )
+
+        if result.status != 201:
+            details = "Unknown details ({})".format(result.status)
+            if result.data:
+                details = result.data.get("detail") or details
+            raise ValueError("Failed to create project \"{}\": {}".format(
+                project_name, details
+            ))
+
+        return self.get_project(project_name)
+
+    def update_project(
+        self,
+        project_name,
+        library=None,
+        folder_types=None,
+        task_types=None,
+        link_types=None,
+        statuses=None,
+        tags=None,
+        config=None,
+        attrib=None,
+        data=None,
+        active=None,
+        project_code=None,
+        **changes
+    ):
+        """Update project entity on server.
+ + Args: + project_name (str): Name of project. + library (Optional[bool]): Change library state. + folder_types (Optional[list[dict[str, Any]]]): Folder type + definitions. + task_types (Optional[list[dict[str, Any]]]): Task type + definitions. + link_types (Optional[list[dict[str, Any]]]): Link type + definitions. + statuses (Optional[list[dict[str, Any]]]): Status definitions. + tags (Optional[list[dict[str, Any]]]): List of tags available to + set on entities. + config (Optional[dict[dict[str, Any]]]): Project anatomy config + with templates and roots. + attrib (Optional[dict[str, Any]]): Project attributes to change. + data (Optional[dict[str, Any]]): Custom data of a project. This + value will 100% override project data. + active (Optional[bool]): Change active state of a project. + project_code (Optional[str]): Change project code. Not recommended + during production. + **changes: Other changed keys based on Rest API documentation. + """ + + changes.update({ + key: value + for key, value in ( + ("library", library), + ("folderTypes", folder_types), + ("taskTypes", task_types), + ("linkTypes", link_types), + ("statuses", statuses), + ("tags", tags), + ("config", config), + ("attrib", attrib), + ("data", data), + ("active", active), + ("code", project_code), + ) + if value is not None + }) + response = self.patch( + "projects/{}".format(project_name), + **changes + ) + response.raise_for_status() + + def delete_project(self, project_name): + """Delete project from server. + + This will completely remove project from server without any step back. + + Args: + project_name (str): Project name that will be removed. + """ + + if not self.get_project(project_name): + raise ValueError("Project with name \"{}\" was not found".format( + project_name + )) + + result = self.delete("projects/{}".format(project_name)) + if result.status_code != 204: + raise ValueError( + "Failed to delete project \"{}\". {}".format( + project_name, result.data["detail"] + ) + ) + + # --- Links --- + def get_full_link_type_name(self, link_type_name, input_type, output_type): + """Calculate full link type name used for query from server. + + Args: + link_type_name (str): Type of link. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + + Returns: + str: Full name of link type used for query from server. + """ + + return "|".join([link_type_name, input_type, output_type]) + + def get_link_types(self, project_name): + """All link types available on a project. + + Example output: + [ + { + "name": "reference|folder|folder", + "link_type": "reference", + "input_type": "folder", + "output_type": "folder", + "data": {} + } + ] + + Args: + project_name (str): Name of project where to look for link types. + + Returns: + list[dict[str, Any]]: Link types available on project. + """ + + response = self.get("projects/{}/links/types".format(project_name)) + response.raise_for_status() + return response.data["types"] + + def get_link_type( + self, project_name, link_type_name, input_type, output_type + ): + """Get link type data. + + There is not dedicated REST endpoint to get single link type, + so method 'get_link_types' is used. + + Example output: + { + "name": "reference|folder|folder", + "link_type": "reference", + "input_type": "folder", + "output_type": "folder", + "data": {} + } + + Args: + project_name (str): Project where link type is available. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. 
+ output_type (str): Output entity type of link. + + Returns: + Union[None, dict[str, Any]]: Link type information. + """ + + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + for link_type in self.get_link_types(project_name): + if link_type["name"] == full_type_name: + return link_type + return None + + def create_link_type( + self, project_name, link_type_name, input_type, output_type, data=None + ): + """Create or update link type on server. + + Warning: + Because PUT is used for creation it is also used for update. + + Args: + project_name (str): Project where link type is created. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + data (Optional[dict[str, Any]]): Additional data related to link. + + Raises: + HTTPRequestError: Server error happened. + """ + + if data is None: + data = {} + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + response = self.put( + "projects/{}/links/types/{}".format(project_name, full_type_name), + **data + ) + response.raise_for_status() + + def delete_link_type( + self, project_name, link_type_name, input_type, output_type + ): + """Remove link type from project. + + Args: + project_name (str): Project where link type is created. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + + Raises: + HTTPRequestError: Server error happened. + """ + + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + response = self.delete( + "projects/{}/links/types/{}".format(project_name, full_type_name)) + response.raise_for_status() + + def make_sure_link_type_exists( + self, project_name, link_type_name, input_type, output_type, data=None + ): + """Make sure link type exists on a project. + + Args: + project_name (str): Name of project. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + data (Optional[dict[str, Any]]): Link type related data. + """ + + link_type = self.get_link_type( + project_name, link_type_name, input_type, output_type) + if ( + link_type + and (data is None or data == link_type["data"]) + ): + return + self.create_link_type( + project_name, link_type_name, input_type, output_type, data + ) + + def create_link( + self, + project_name, + link_type_name, + input_id, + input_type, + output_id, + output_type + ): + """Create link between 2 entities. + + Link has a type which must already exists on a project. + + Example output: + { + "id": "59a212c0d2e211eda0e20242ac120002" + } + + Args: + project_name (str): Project where the link is created. + link_type_name (str): Type of link. + input_id (str): Id of input entity. + input_type (str): Entity type of input entity. + output_id (str): Id of output entity. + output_type (str): Entity type of output entity. + + Returns: + dict[str, str]: Information about link. + + Raises: + HTTPRequestError: Server error happened. + """ + + full_link_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type) + response = self.post( + "projects/{}/links".format(project_name), + link=full_link_type_name, + input=input_id, + output=output_id + ) + response.raise_for_status() + return response.data + + def delete_link(self, project_name, link_id): + """Remove link by id. 
+ + Args: + project_name (str): Project where link exists. + link_id (str): Id of link. + + Raises: + HTTPRequestError: Server error happened. + """ + + response = self.delete( + "projects/{}/links/{}".format(project_name, link_id) + ) + response.raise_for_status() + + def _prepare_link_filters(self, filters, link_types, link_direction): + """Add links filters for GraphQl queries. + + Args: + filters (dict[str, Any]): Object where filters will be added. + link_types (Union[Iterable[str], None]): Link types filters. + link_direction (Union[Literal["in", "out"], None]): Direction of + link "in", "out" or 'None' for both. + + Returns: + bool: Links are valid, and query from server can happen. + """ + + if link_types is not None: + link_types = set(link_types) + if not link_types: + return False + filters["linkTypes"] = list(link_types) + + if link_direction is not None: + if link_direction not in ("in", "out"): + return False + filters["linkDirection"] = link_direction + return True + + def get_entities_links( + self, + project_name, + entity_type, + entity_ids=None, + link_types=None, + link_direction=None + ): + """Helper method to get links from server for entity types. + + Example output: + [ + { + "id": "59a212c0d2e211eda0e20242ac120002", + "linkType": "reference", + "description": "reference link between folders", + "projectName": "my_project", + "author": "frantadmin", + "entityId": "b1df109676db11ed8e8c6c9466b19aa8", + "entityType": "folder", + "direction": "out" + }, + ... + ] + + Args: + project_name (str): Project where links are. + entity_type (Literal["folder", "task", "product", + "version", "representations"]): Entity type. + entity_ids (Optional[Iterable[str]]): Ids of entities for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by entity ids. + """ + + if entity_type == "folder": + query_func = folders_graphql_query + id_filter_key = "folderIds" + project_sub_key = "folders" + elif entity_type == "task": + query_func = tasks_graphql_query + id_filter_key = "taskIds" + project_sub_key = "tasks" + elif entity_type == "product": + query_func = products_graphql_query + id_filter_key = "productIds" + project_sub_key = "products" + elif entity_type == "version": + query_func = versions_graphql_query + id_filter_key = "versionIds" + project_sub_key = "versions" + elif entity_type == "representation": + query_func = representations_graphql_query + id_filter_key = "representationIds" + project_sub_key = "representations" + else: + raise ValueError("Unknown type \"{}\". 
Expected {}".format( + entity_type, + ", ".join( + ("folder", "task", "product", "version", "representation") + ) + )) + + output = collections.defaultdict(list) + filters = { + "projectName": project_name + } + if entity_ids is not None: + entity_ids = set(entity_ids) + if not entity_ids: + return output + filters[id_filter_key] = list(entity_ids) + + if not self._prepare_link_filters(filters, link_types, link_direction): + return output + + query = query_func({"id", "links"}) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for entity in parsed_data["project"][project_sub_key]: + entity_id = entity["id"] + output[entity_id].extend(entity["links"]) + return output + + def get_folders_links( + self, + project_name, + folder_ids=None, + link_types=None, + link_direction=None + ): + """Query folders links from server. + + Args: + project_name (str): Project where links are. + folder_ids (Optional[Iterable[str]]): Ids of folders for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by folder ids. + """ + + return self.get_entities_links( + project_name, "folder", folder_ids, link_types, link_direction + ) + + def get_folder_links( + self, + project_name, + folder_id, + link_types=None, + link_direction=None + ): + """Query folder links from server. + + Args: + project_name (str): Project where links are. + folder_id (str): Folder id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of folder. + """ + + return self.get_folders_links( + project_name, [folder_id], link_types, link_direction + )[folder_id] + + def get_tasks_links( + self, + project_name, + task_ids=None, + link_types=None, + link_direction=None + ): + """Query tasks links from server. + + Args: + project_name (str): Project where links are. + task_ids (Optional[Iterable[str]]): Ids of tasks for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by task ids. + """ + + return self.get_entities_links( + project_name, "task", task_ids, link_types, link_direction + ) + + def get_task_links( + self, + project_name, + task_id, + link_types=None, + link_direction=None + ): + """Query task links from server. + + Args: + project_name (str): Project where links are. + task_id (str): Task id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of task. + """ + + return self.get_tasks_links( + project_name, [task_id], link_types, link_direction + )[task_id] + + def get_products_links( + self, + project_name, + product_ids=None, + link_types=None, + link_direction=None + ): + """Query products links from server. + + Args: + project_name (str): Project where links are. + product_ids (Optional[Iterable[str]]): Ids of products for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. 
+ link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by product ids. + """ + + return self.get_entities_links( + project_name, "product", product_ids, link_types, link_direction + ) + + def get_product_links( + self, + project_name, + product_id, + link_types=None, + link_direction=None + ): + """Query product links from server. + + Args: + project_name (str): Project where links are. + product_id (str): Product id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of product. + """ + + return self.get_products_links( + project_name, [product_id], link_types, link_direction + )[product_id] + + def get_versions_links( + self, + project_name, + version_ids=None, + link_types=None, + link_direction=None + ): + """Query versions links from server. + + Args: + project_name (str): Project where links are. + version_ids (Optional[Iterable[str]]): Ids of versions for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by version ids. + """ + + return self.get_entities_links( + project_name, "version", version_ids, link_types, link_direction + ) + + def get_version_links( + self, + project_name, + version_id, + link_types=None, + link_direction=None + ): + """Query version links from server. + + Args: + project_name (str): Project where links are. + version_id (str): Version id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of version. + """ + + return self.get_versions_links( + project_name, [version_id], link_types, link_direction + )[version_id] + + def get_representations_links( + self, + project_name, + representation_ids=None, + link_types=None, + link_direction=None + ): + """Query representations links from server. + + Args: + project_name (str): Project where links are. + representation_ids (Optional[Iterable[str]]): Ids of + representations for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by representation ids. + """ + + return self.get_entities_links( + project_name, + "representation", + representation_ids, + link_types, + link_direction + ) + + def get_representation_links( + self, + project_name, + representation_id, + link_types=None, + link_direction=None + ): + """Query representation links from server. + + Args: + project_name (str): Project where links are. + representation_id (str): Representation id for which links + should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of representation. 
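+
+        Example (a minimal sketch; 'con' stands for an instance of this
+            class and the ids in it are hypothetical):
+            >>> con.get_representation_links(
+            ...     "my_project", "b1df109676db11ed8e8c6c9466b19aa8"
+            ... )
+            [{"id": "...", "linkType": "reference", "direction": "in"}, ...]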
+        """
+
+        return self.get_representations_links(
+            project_name, [representation_id], link_types, link_direction
+        )[representation_id]
+
+    # --- Batch operations processing ---
+    def send_batch_operations(
+        self,
+        project_name,
+        operations,
+        can_fail=False,
+        raise_on_fail=True
+    ):
+        """Post multiple CRUD operations to server.
+
+        When multiple changes should be made on the server side, this is
+        the best way to do it. It is possible to pass multiple operations
+        and have the server process them as changes in a single
+        transaction.
+
+        Args:
+            project_name (str): Project on which the operations should be
+                processed.
+            operations (list[dict[str, Any]]): Operations to be processed.
+            can_fail (Optional[bool]): Server will try to process all
+                operations even if one of them fails.
+            raise_on_fail (Optional[bool]): Raise exception if an operation
+                fails. You can handle failed operations on your own
+                when set to 'False'.
+
+        Raises:
+            ValueError: Operations can't be converted to a json string.
+            FailedOperations: When output does not contain server operations
+                or 'raise_on_fail' is enabled and any operation fails.
+
+        Returns:
+            list[dict[str, Any]]: Operations result with process details.
+        """
+
+        if not operations:
+            return []
+
+        body_by_id = {}
+        operations_body = []
+        for operation in operations:
+            if not operation:
+                continue
+
+            op_id = operation.get("id")
+            if not op_id:
+                op_id = create_entity_id()
+                operation["id"] = op_id
+
+            try:
+                body = json.loads(
+                    json.dumps(operation, default=entity_data_json_default)
+                )
+            except (TypeError, ValueError):
+                raise ValueError("Couldn't parse body as json: {}".format(
+                    json.dumps(
+                        operation, indent=4, default=failed_json_default
+                    )
+                ))
+
+            body_by_id[op_id] = body
+            operations_body.append(body)
+
+        if not operations_body:
+            return []
+
+        result = self.post(
+            "projects/{}/operations".format(project_name),
+            operations=operations_body,
+            canFail=can_fail
+        )
+
+        op_results = result.get("operations")
+        if op_results is None:
+            raise FailedOperations(
+                "Operation failed. Content: {}".format(str(result))
+            )
+
+        if result.get("success") or not raise_on_fail:
+            return op_results
+
+        for op_result in op_results:
+            if not op_result["success"]:
+                operation_id = op_result["id"]
+                raise FailedOperations((
+                    "Operation \"{}\" failed with data:\n{}\nDetail: {}."
+                ).format(
+                    operation_id,
+                    json.dumps(body_by_id[operation_id], indent=4),
+                    op_result["detail"],
+                ))
+        return op_results
diff --git a/openpype/vendor/python/common/ayon_api/utils.py b/openpype/vendor/python/common/ayon_api/utils.py
new file mode 100644
index 0000000000..314d13faec
--- /dev/null
+++ b/openpype/vendor/python/common/ayon_api/utils.py
@@ -0,0 +1,633 @@
+import re
+import datetime
+import uuid
+import string
+import platform
+import collections
+try:
+    # Python 3
+    from urllib.parse import urlparse, urlencode
+except ImportError:
+    # Python 2
+    from urlparse import urlparse
+    from urllib import urlencode
+
+import requests
+import unidecode
+
+from .exceptions import UrlError
+
+REMOVED_VALUE = object()
+SLUGIFY_WHITELIST = string.ascii_letters + string.digits
+SLUGIFY_SEP_WHITELIST = " ,./\\;:!|*^#@~+-_="
+
+RepresentationParents = collections.namedtuple(
+    "RepresentationParents",
+    ("version", "product", "folder", "project")
+)
+
+
+class ThumbnailContent:
+    """Wrapper for thumbnail content.
+
+    Args:
+        project_name (str): Project name.
+        thumbnail_id (Union[str, None]): Thumbnail id.
+        content (Union[bytes, None]): Thumbnail content.
+        content_type (Union[str, None]): Content type e.g. 'image/png'.
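+
+    Example (a minimal sketch; the id and payload are hypothetical):
+        >>> thumbnail = ThumbnailContent(
+        ...     "my_project",
+        ...     "59a212c0d2e211eda0e20242ac120002",
+        ...     b"\x89PNG...",
+        ...     "image/png"
+        ... )
+        >>> thumbnail.is_valid
+        True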
+    """
+
+    def __init__(self, project_name, thumbnail_id, content, content_type):
+        self.project_name = project_name
+        self.thumbnail_id = thumbnail_id
+        self.content_type = content_type
+        self.content = content or b""
+
+    @property
+    def id(self):
+        """Wrapper for thumbnail id.
+
+        Returns:
+            Union[str, None]: Thumbnail id.
+        """
+
+        return self.thumbnail_id
+
+    @property
+    def is_valid(self):
+        """Content of thumbnail is valid.
+
+        Returns:
+            bool: Content is valid and can be used.
+        """
+        return (
+            self.thumbnail_id is not None
+            and self.content_type is not None
+        )
+
+
+def prepare_query_string(key_values):
+    """Prepare data as a query string.
+
+    If there are any values, a query string starting with '?' is returned,
+    otherwise an empty string.
+
+    Args:
+        key_values (dict[str, Any]): Query values.
+
+    Returns:
+        str: Query string.
+    """
+
+    if not key_values:
+        return ""
+    return "?{}".format(urlencode(key_values))
+
+
+def create_entity_id():
+    return uuid.uuid1().hex
+
+
+def convert_entity_id(entity_id):
+    if not entity_id:
+        return None
+
+    if isinstance(entity_id, uuid.UUID):
+        return entity_id.hex
+
+    try:
+        return uuid.UUID(entity_id).hex
+
+    except (TypeError, ValueError, AttributeError):
+        pass
+    return None
+
+
+def convert_or_create_entity_id(entity_id=None):
+    output = convert_entity_id(entity_id)
+    if output is None:
+        output = create_entity_id()
+    return output
+
+
+def entity_data_json_default(value):
+    if isinstance(value, datetime.datetime):
+        return int(value.timestamp())
+
+    raise TypeError(
+        "Object of type {} is not JSON serializable".format(str(type(value)))
+    )
+
+
+def slugify_string(
+    input_string,
+    separator="_",
+    slug_whitelist=SLUGIFY_WHITELIST,
+    split_chars=SLUGIFY_SEP_WHITELIST,
+    min_length=1,
+    lower=False,
+    make_set=False,
+):
+    """Slugify a text string.
+
+    This function transliterates the input string to ASCII, removes
+    special characters and joins the resulting elements using the
+    specified separator.
+
+    Args:
+        input_string (str): Input string to slugify.
+        separator (str): A string used to separate returned elements
+            (default: "_").
+        slug_whitelist (str): Characters allowed in the output
+            (default: ascii letters, digits and the separator).
+        split_chars (str): Set of characters used for word splitting
+            (there is a sane default).
+        lower (bool): Convert to lower-case (default: False).
+        make_set (bool): Return "set" object instead of string.
+        min_length (int): Minimal length of an element (word).
+
+    Returns:
+        Union[str, Set[str]]: Based on 'make_set' value returns slugified
+            string.
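+
+    Example (a minimal sketch of the transliteration and joining behavior):
+        >>> slugify_string("Mötley Crüe!", lower=True)
+        'motley_crue'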
+    """
+
+    tmp_string = unidecode.unidecode(input_string)
+    if lower:
+        tmp_string = tmp_string.lower()
+
+    parts = [
+        # Remove all characters that are not in whitelist
+        re.sub("[^{}]".format(re.escape(slug_whitelist)), "", part)
+        # Split text into parts by split characters
+        for part in re.split("[{}]".format(re.escape(split_chars)), tmp_string)
+    ]
+    # Filter text parts by length
+    filtered_parts = [
+        part
+        for part in parts
+        if len(part) >= min_length
+    ]
+    if make_set:
+        return set(filtered_parts)
+    return separator.join(filtered_parts)
+
+
+def failed_json_default(value):
+    return "< Failed value {} > {}".format(type(value), str(value))
+
+
+def prepare_attribute_changes(old_entity, new_entity, replace=False):
+    attrib_changes = {}
+    new_attrib = new_entity.get("attrib")
+    old_attrib = old_entity.get("attrib")
+    if new_attrib is None:
+        if not replace:
+            return attrib_changes
+        new_attrib = {}
+
+    if old_attrib is None:
+        return new_attrib
+
+    for attr, new_attr_value in new_attrib.items():
+        old_attr_value = old_attrib.get(attr)
+        if old_attr_value != new_attr_value:
+            attrib_changes[attr] = new_attr_value
+
+    if replace:
+        for attr in old_attrib:
+            if attr not in new_attrib:
+                attrib_changes[attr] = REMOVED_VALUE
+
+    return attrib_changes
+
+
+def prepare_entity_changes(old_entity, new_entity, replace=False):
+    """Prepare changes of entities."""
+
+    changes = {}
+    for key, new_value in new_entity.items():
+        if key == "attrib":
+            continue
+
+        old_value = old_entity.get(key)
+        if old_value != new_value:
+            changes[key] = new_value
+
+    if replace:
+        for key in old_entity:
+            if key not in new_entity:
+                changes[key] = REMOVED_VALUE
+
+    attr_changes = prepare_attribute_changes(old_entity, new_entity, replace)
+    if attr_changes:
+        changes["attrib"] = attr_changes
+    return changes
+
+
+def _try_parse_url(url):
+    try:
+        return urlparse(url)
+    except BaseException:
+        return None
+
+
+def _try_connect_to_server(url):
+    try:
+        # TODO add validation that the url leads to Ayon server
+        #   - this won't validate if the url leads to e.g. 'google.com'
+        requests.get(url)
+
+    except BaseException:
+        return False
+    return True
+
+
+def login_to_server(url, username, password):
+    """Log in to the server to receive a token.
+
+    Args:
+        url (str): Server url.
+        username (str): User's username.
+        password (str): User's password.
+
+    Returns:
+        Union[str, None]: User's token if login was successful.
+            Otherwise 'None'.
+    """
+
+    headers = {"Content-Type": "application/json"}
+    response = requests.post(
+        "{}/api/auth/login".format(url),
+        headers=headers,
+        json={
+            "name": username,
+            "password": password
+        }
+    )
+    token = None
+    # 200 - success
+    # 401 - invalid credentials
+    # * - other issues
+    if response.status_code == 200:
+        token = response.json()["token"]
+    return token
+
+
+def logout_from_server(url, token):
+    """Log out from server and throw the token away.
+
+    Args:
+        url (str): Url from which should be logged out.
+        token (str): Token which should be used to log out.
+    """
+
+    headers = {
+        "Content-Type": "application/json",
+        "Authorization": "Bearer {}".format(token)
+    }
+    requests.post(
+        url + "/api/auth/logout",
+        headers=headers
+    )
+
+
+def is_token_valid(url, token):
+    """Check if token is valid.
+
+    Args:
+        url (str): Server url.
+        token (str): User's token.
+
+    Returns:
+        bool: True if token is valid.
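+
+    Example (a minimal sketch; the url and token are hypothetical):
+        >>> is_token_valid("https://ayon.example.com", "<api-token>")
+        True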
+    """
+
+    headers = {
+        "Content-Type": "application/json",
+        "Authorization": "Bearer {}".format(token)
+    }
+    response = requests.get(
+        "{}/api/users/me".format(url),
+        headers=headers
+    )
+    return response.status_code == 200
+
+
+def validate_url(url):
+    """Validate that url is valid and the server is available.
+
+    Validation checks whether the string can be parsed as url and
+    contains a scheme.
+
+    The function will try to autofix the url, so it may return a modified
+    url when connection to the server works.
+
+    ```python
+    my_url = "my.server.url"
+    try:
+        # Store new url
+        validated_url = validate_url(my_url)
+
+    except UrlError:
+        # Handle invalid url
+        ...
+    ```
+
+    Args:
+        url (str): Server url.
+
+    Returns:
+        Url which was used to connect to server.
+
+    Raises:
+        UrlError: Error with short description and hints for user.
+    """
+
+    stripped_url = url.strip()
+    if not stripped_url:
+        raise UrlError(
+            "Invalid url format. Url is empty.",
+            title="Invalid url format",
+            hints=["url seems to be empty"]
+        )
+
+    # Not sure if this is a good idea?
+    modified_url = stripped_url.rstrip("/")
+    parsed_url = _try_parse_url(modified_url)
+    universal_hints = [
+        "does the url work in browser?"
+    ]
+    if parsed_url is None:
+        raise UrlError(
+            "Invalid url format. Url cannot be parsed as url \"{}\".".format(
+                modified_url
+            ),
+            title="Invalid url format",
+            hints=universal_hints
+        )
+
+    # Try adding 'https://' scheme if it is missing
+    #   - this will trigger UrlError if both attempts fail
+    if not parsed_url.scheme:
+        new_url = "https://" + modified_url
+        if _try_connect_to_server(new_url):
+            return new_url
+
+    if _try_connect_to_server(modified_url):
+        return modified_url
+
+    hints = []
+    if "/" in parsed_url.path or not parsed_url.scheme:
+        new_path = parsed_url.path.split("/")[0]
+        if not parsed_url.scheme:
+            new_path = "https://" + new_path
+
+        hints.append(
+            "did you mean \"{}\"?".format(parsed_url.scheme + new_path)
+        )
+
+    raise UrlError(
+        "Couldn't connect to server on \"{}\"".format(url),
+        title="Couldn't connect to server",
+        hints=hints + universal_hints
+    )
+
+
+class TransferProgress:
+    """Object to store progress of download/upload from/to server."""
+
+    def __init__(self):
+        self._started = False
+        self._transfer_done = False
+        self._transferred = 0
+        self._content_size = None
+
+        self._failed = False
+        self._fail_reason = None
+
+        self._source_url = "N/A"
+        self._destination_url = "N/A"
+
+    def get_content_size(self):
+        """Content size in bytes.
+
+        Returns:
+            Union[int, None]: Content size in bytes or None
+                if it is unknown.
+        """
+
+        return self._content_size
+
+    def set_content_size(self, content_size):
+        """Set content size in bytes.
+
+        Args:
+            content_size (int): Content size in bytes.
+
+        Raises:
+            ValueError: If content size was already set.
+        """
+
+        if self._content_size is not None:
+            raise ValueError("Content size was set more than once")
+        self._content_size = content_size
+
+    def get_started(self):
+        """Transfer was started.
+
+        Returns:
+            bool: True if transfer started.
+        """
+
+        return self._started
+
+    def set_started(self):
+        """Mark that transfer started.
+
+        Raises:
+            ValueError: If transfer was already started.
+        """
+
+        if self._started:
+            raise ValueError("Progress already started")
+        self._started = True
+
+    def get_transfer_done(self):
+        """Transfer finished.
+
+        Returns:
+            bool: Transfer finished.
+        """
+
+        return self._transfer_done
+
+    def set_transfer_done(self):
+        """Mark progress as transfer finished.
+
+        Raises:
+            ValueError: If progress was already marked as done
+                or wasn't started yet.
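+
+        Example (a minimal sketch of the expected call order):
+            >>> progress = TransferProgress()
+            >>> progress.set_started()
+            >>> progress.add_transferred_chunk(1024)
+            >>> progress.set_transfer_done()
+            >>> progress.transfer_done
+            True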
+        """
+
+        if self._transfer_done:
+            raise ValueError("Progress was already marked as done")
+        if not self._started:
+            raise ValueError("Progress didn't start yet")
+        self._transfer_done = True
+
+    def get_failed(self):
+        """Transfer failed.
+
+        Returns:
+            bool: True if transfer failed.
+        """
+
+        return self._failed
+
+    def get_fail_reason(self):
+        """Get reason why transfer failed.
+
+        Returns:
+            Union[str, None]: Reason why transfer
+                failed or None.
+        """
+
+        return self._fail_reason
+
+    def set_failed(self, reason):
+        """Mark progress as failed.
+
+        Args:
+            reason (str): Reason why transfer failed.
+        """
+
+        self._fail_reason = reason
+        self._failed = True
+
+    def get_transferred_size(self):
+        """Already transferred size in bytes.
+
+        Returns:
+            int: Already transferred size in bytes.
+        """
+
+        return self._transferred
+
+    def set_transferred_size(self, transferred):
+        """Set already transferred size in bytes.
+
+        Args:
+            transferred (int): Already transferred size in bytes.
+        """
+
+        self._transferred = transferred
+
+    def add_transferred_chunk(self, chunk_size):
+        """Add transferred chunk size in bytes.
+
+        Args:
+            chunk_size (int): Transferred chunk size
+                in bytes.
+        """
+
+        self._transferred += chunk_size
+
+    def get_source_url(self):
+        """Source url from where transfer happens.
+
+        Note:
+            Consider this as a title. Must be set using
+                'set_source_url' or 'N/A' will be returned.
+
+        Returns:
+            str: Source url from where transfer happens.
+        """
+
+        return self._source_url
+
+    def set_source_url(self, url):
+        """Set source url from where transfer happens.
+
+        Args:
+            url (str): Source url from where transfer happens.
+        """
+
+        self._source_url = url
+
+    def get_destination_url(self):
+        """Destination url where transfer happens.
+
+        Note:
+            Consider this as a title. Must be set using
+                'set_destination_url' or 'N/A' will be returned.
+
+        Returns:
+            str: Destination url where transfer happens.
+        """
+
+        return self._destination_url
+
+    def set_destination_url(self, url):
+        """Set destination url where transfer happens.
+
+        Args:
+            url (str): Destination url where transfer happens.
+        """
+
+        self._destination_url = url
+
+    @property
+    def is_running(self):
+        """Check if transfer is running.
+
+        Returns:
+            bool: True if transfer is running.
+        """
+
+        if (
+            not self.started
+            or self.transfer_done
+            or self.failed
+        ):
+            return False
+        return True
+
+    @property
+    def transfer_progress(self):
+        """Get transfer progress in percent.
+
+        Returns:
+            Union[float, None]: Transfer progress in percent or 'None'
+                if content size is unknown.
+        """
+
+        if self._content_size is None:
+            return None
+        return (self._transferred * 100.0) / float(self._content_size)
+
+    content_size = property(get_content_size, set_content_size)
+    started = property(get_started)
+    transfer_done = property(get_transfer_done)
+    failed = property(get_failed)
+    fail_reason = property(get_fail_reason)
+    source_url = property(get_source_url, set_source_url)
+    destination_url = property(get_destination_url, set_destination_url)
+    transferred_size = property(get_transferred_size, set_transferred_size)
+
+
+def create_dependency_package_basename(platform_name=None):
+    """Create basename for dependency package file.
+
+    Args:
+        platform_name (Optional[str]): Name of platform for which the
+            bundle is targeted. Default value is current platform.
+
+    Returns:
+        str: Dependency package name with timestamp and platform.
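+
+    Example (a minimal sketch; the timestamp part depends on the
+        current time):
+        >>> create_dependency_package_basename("windows")
+        'ayon_2310041200_windows'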
+ """ + + if platform_name is None: + platform_name = platform.system().lower() + + now_date = datetime.datetime.now() + time_stamp = now_date.strftime("%y%m%d%H%M") + return "ayon_{}_{}".format(time_stamp, platform_name) diff --git a/openpype/vendor/python/common/ayon_api/version.py b/openpype/vendor/python/common/ayon_api/version.py new file mode 100644 index 0000000000..f3826a6407 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/version.py @@ -0,0 +1,2 @@ +"""Package declaring Python API for Ayon server.""" +__version__ = "0.4.1" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/__init__.py b/openpype/vendor/python/python_2/arrow/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/__init__.py rename to openpype/vendor/python/python_2/arrow/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/_version.py b/openpype/vendor/python/python_2/arrow/_version.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/_version.py rename to openpype/vendor/python/python_2/arrow/_version.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/api.py b/openpype/vendor/python/python_2/arrow/api.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/api.py rename to openpype/vendor/python/python_2/arrow/api.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/arrow.py b/openpype/vendor/python/python_2/arrow/arrow.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/arrow.py rename to openpype/vendor/python/python_2/arrow/arrow.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/constants.py b/openpype/vendor/python/python_2/arrow/constants.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/constants.py rename to openpype/vendor/python/python_2/arrow/constants.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/factory.py b/openpype/vendor/python/python_2/arrow/factory.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/factory.py rename to openpype/vendor/python/python_2/arrow/factory.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/formatter.py b/openpype/vendor/python/python_2/arrow/formatter.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/formatter.py rename to openpype/vendor/python/python_2/arrow/formatter.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/locales.py b/openpype/vendor/python/python_2/arrow/locales.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/locales.py rename to openpype/vendor/python/python_2/arrow/locales.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/parser.py b/openpype/vendor/python/python_2/arrow/parser.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/parser.py rename to openpype/vendor/python/python_2/arrow/parser.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/util.py b/openpype/vendor/python/python_2/arrow/util.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/util.py rename to openpype/vendor/python/python_2/arrow/util.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/__init__.py b/openpype/vendor/python/python_2/backports/__init__.py similarity index 100% rename from 
openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/__init__.py rename to openpype/vendor/python/python_2/backports/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/__init__.py b/openpype/vendor/python/python_2/backports/configparser/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/__init__.py rename to openpype/vendor/python/python_2/backports/configparser/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/helpers.py b/openpype/vendor/python/python_2/backports/configparser/helpers.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/helpers.py rename to openpype/vendor/python/python_2/backports/configparser/helpers.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/functools_lru_cache.py b/openpype/vendor/python/python_2/backports/functools_lru_cache.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/functools_lru_cache.py rename to openpype/vendor/python/python_2/backports/functools_lru_cache.py diff --git a/openpype/modules/ftrack/python2_vendor/builtins/builtins/__init__.py b/openpype/vendor/python/python_2/builtins/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/builtins/builtins/__init__.py rename to openpype/vendor/python/python_2/builtins/__init__.py diff --git a/openpype/vendor/python/python_2/click/__init__.py b/openpype/vendor/python/python_2/click/__init__.py new file mode 100644 index 0000000000..2b6008f2dd --- /dev/null +++ b/openpype/vendor/python/python_2/click/__init__.py @@ -0,0 +1,79 @@ +""" +Click is a simple Python module inspired by the stdlib optparse to make +writing command line scripts fun. Unlike other modules, it's based +around a simple API that does not come with too much magic and is +composable. 
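+
+Example (a minimal sketch of a command with one option):
+
+    import click
+
+    @click.command()
+    @click.option("--count", default=1, help="Number of greetings.")
+    def hello(count):
+        for _ in range(count):
+            click.echo("Hello!")
+
+    if __name__ == "__main__":
+        hello()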
+""" +from .core import Argument +from .core import BaseCommand +from .core import Command +from .core import CommandCollection +from .core import Context +from .core import Group +from .core import MultiCommand +from .core import Option +from .core import Parameter +from .decorators import argument +from .decorators import command +from .decorators import confirmation_option +from .decorators import group +from .decorators import help_option +from .decorators import make_pass_decorator +from .decorators import option +from .decorators import pass_context +from .decorators import pass_obj +from .decorators import password_option +from .decorators import version_option +from .exceptions import Abort +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import FileError +from .exceptions import MissingParameter +from .exceptions import NoSuchOption +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import wrap_text +from .globals import get_current_context +from .parser import OptionParser +from .termui import clear +from .termui import confirm +from .termui import echo_via_pager +from .termui import edit +from .termui import get_terminal_size +from .termui import getchar +from .termui import launch +from .termui import pause +from .termui import progressbar +from .termui import prompt +from .termui import secho +from .termui import style +from .termui import unstyle +from .types import BOOL +from .types import Choice +from .types import DateTime +from .types import File +from .types import FLOAT +from .types import FloatRange +from .types import INT +from .types import IntRange +from .types import ParamType +from .types import Path +from .types import STRING +from .types import Tuple +from .types import UNPROCESSED +from .types import UUID +from .utils import echo +from .utils import format_filename +from .utils import get_app_dir +from .utils import get_binary_stream +from .utils import get_os_args +from .utils import get_text_stream +from .utils import open_file + +# Controls if click should emit the warning about the use of unicode +# literals. +disable_unicode_literals_warning = False + +__version__ = "7.1.2" diff --git a/openpype/vendor/python/python_2/click/_bashcomplete.py b/openpype/vendor/python/python_2/click/_bashcomplete.py new file mode 100644 index 0000000000..8bca24480f --- /dev/null +++ b/openpype/vendor/python/python_2/click/_bashcomplete.py @@ -0,0 +1,375 @@ +import copy +import os +import re + +from .core import Argument +from .core import MultiCommand +from .core import Option +from .parser import split_arg_string +from .types import Choice +from .utils import echo + +try: + from collections import abc +except ImportError: + import collections as abc + +WORDBREAK = "=" + +# Note, only BASH version 4.4 and later have the nosort option. +COMPLETION_SCRIPT_BASH = """ +%(complete_func)s() { + local IFS=$'\n' + COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\ + COMP_CWORD=$COMP_CWORD \\ + %(autocomplete_var)s=complete $1 ) ) + return 0 +} + +%(complete_func)setup() { + local COMPLETION_OPTIONS="" + local BASH_VERSION_ARR=(${BASH_VERSION//./ }) + # Only BASH version 4.4 and later have the nosort option. 
+ if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] \ +&& [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then + COMPLETION_OPTIONS="-o nosort" + fi + + complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s +} + +%(complete_func)setup +""" + +COMPLETION_SCRIPT_ZSH = """ +#compdef %(script_names)s + +%(complete_func)s() { + local -a completions + local -a completions_with_descriptions + local -a response + (( ! $+commands[%(script_names)s] )) && return 1 + + response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\ + COMP_CWORD=$((CURRENT-1)) \\ + %(autocomplete_var)s=\"complete_zsh\" \\ + %(script_names)s )}") + + for key descr in ${(kv)response}; do + if [[ "$descr" == "_" ]]; then + completions+=("$key") + else + completions_with_descriptions+=("$key":"$descr") + fi + done + + if [ -n "$completions_with_descriptions" ]; then + _describe -V unsorted completions_with_descriptions -U + fi + + if [ -n "$completions" ]; then + compadd -U -V unsorted -a completions + fi + compstate[insert]="automenu" +} + +compdef %(complete_func)s %(script_names)s +""" + +COMPLETION_SCRIPT_FISH = ( + "complete --no-files --command %(script_names)s --arguments" + ' "(env %(autocomplete_var)s=complete_fish' + " COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t)" + ' %(script_names)s)"' +) + +_completion_scripts = { + "bash": COMPLETION_SCRIPT_BASH, + "zsh": COMPLETION_SCRIPT_ZSH, + "fish": COMPLETION_SCRIPT_FISH, +} + +_invalid_ident_char_re = re.compile(r"[^a-zA-Z0-9_]") + + +def get_completion_script(prog_name, complete_var, shell): + cf_name = _invalid_ident_char_re.sub("", prog_name.replace("-", "_")) + script = _completion_scripts.get(shell, COMPLETION_SCRIPT_BASH) + return ( + script + % { + "complete_func": "_{}_completion".format(cf_name), + "script_names": prog_name, + "autocomplete_var": complete_var, + } + ).strip() + ";" + + +def resolve_ctx(cli, prog_name, args): + """Parse into a hierarchy of contexts. Contexts are connected + through the parent variable. + + :param cli: command definition + :param prog_name: the program that is running + :param args: full list of args + :return: the final context/command parsed + """ + ctx = cli.make_context(prog_name, args, resilient_parsing=True) + args = ctx.protected_args + ctx.args + while args: + if isinstance(ctx.command, MultiCommand): + if not ctx.command.chain: + cmd_name, cmd, args = ctx.command.resolve_command(ctx, args) + if cmd is None: + return ctx + ctx = cmd.make_context( + cmd_name, args, parent=ctx, resilient_parsing=True + ) + args = ctx.protected_args + ctx.args + else: + # Walk chained subcommand contexts saving the last one. + while args: + cmd_name, cmd, args = ctx.command.resolve_command(ctx, args) + if cmd is None: + return ctx + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + resilient_parsing=True, + ) + args = sub_ctx.args + ctx = sub_ctx + args = sub_ctx.protected_args + sub_ctx.args + else: + break + return ctx + + +def start_of_option(param_str): + """ + :param param_str: param_str to check + :return: whether or not this is the start of an option declaration + (i.e. starts "-" or "--") + """ + return param_str and param_str[:1] == "-" + + +def is_incomplete_option(all_args, cmd_param): + """ + :param all_args: the full original list of args supplied + :param cmd_param: the current command paramter + :return: whether or not the last option declaration (i.e. starts + "-" or "--") is incomplete and corresponds to this cmd_param. 
In + other words whether this cmd_param option can still accept + values + """ + if not isinstance(cmd_param, Option): + return False + if cmd_param.is_flag: + return False + last_option = None + for index, arg_str in enumerate( + reversed([arg for arg in all_args if arg != WORDBREAK]) + ): + if index + 1 > cmd_param.nargs: + break + if start_of_option(arg_str): + last_option = arg_str + + return True if last_option and last_option in cmd_param.opts else False + + +def is_incomplete_argument(current_params, cmd_param): + """ + :param current_params: the current params and values for this + argument as already entered + :param cmd_param: the current command parameter + :return: whether or not the last argument is incomplete and + corresponds to this cmd_param. In other words whether or not the + this cmd_param argument can still accept values + """ + if not isinstance(cmd_param, Argument): + return False + current_param_values = current_params[cmd_param.name] + if current_param_values is None: + return True + if cmd_param.nargs == -1: + return True + if ( + isinstance(current_param_values, abc.Iterable) + and cmd_param.nargs > 1 + and len(current_param_values) < cmd_param.nargs + ): + return True + return False + + +def get_user_autocompletions(ctx, args, incomplete, cmd_param): + """ + :param ctx: context associated with the parsed command + :param args: full list of args + :param incomplete: the incomplete text to autocomplete + :param cmd_param: command definition + :return: all the possible user-specified completions for the param + """ + results = [] + if isinstance(cmd_param.type, Choice): + # Choices don't support descriptions. + results = [ + (c, None) for c in cmd_param.type.choices if str(c).startswith(incomplete) + ] + elif cmd_param.autocompletion is not None: + dynamic_completions = cmd_param.autocompletion( + ctx=ctx, args=args, incomplete=incomplete + ) + results = [ + c if isinstance(c, tuple) else (c, None) for c in dynamic_completions + ] + return results + + +def get_visible_commands_starting_with(ctx, starts_with): + """ + :param ctx: context associated with the parsed command + :starts_with: string that visible commands must start with. + :return: all visible (not hidden) commands that start with starts_with. + """ + for c in ctx.command.list_commands(ctx): + if c.startswith(starts_with): + command = ctx.command.get_command(ctx, c) + if not command.hidden: + yield command + + +def add_subcommand_completions(ctx, incomplete, completions_out): + # Add subcommand completions. 
+ if isinstance(ctx.command, MultiCommand): + completions_out.extend( + [ + (c.name, c.get_short_help_str()) + for c in get_visible_commands_starting_with(ctx, incomplete) + ] + ) + + # Walk up the context list and add any other completion + # possibilities from chained commands + while ctx.parent is not None: + ctx = ctx.parent + if isinstance(ctx.command, MultiCommand) and ctx.command.chain: + remaining_commands = [ + c + for c in get_visible_commands_starting_with(ctx, incomplete) + if c.name not in ctx.protected_args + ] + completions_out.extend( + [(c.name, c.get_short_help_str()) for c in remaining_commands] + ) + + +def get_choices(cli, prog_name, args, incomplete): + """ + :param cli: command definition + :param prog_name: the program that is running + :param args: full list of args + :param incomplete: the incomplete text to autocomplete + :return: all the possible completions for the incomplete + """ + all_args = copy.deepcopy(args) + + ctx = resolve_ctx(cli, prog_name, args) + if ctx is None: + return [] + + has_double_dash = "--" in all_args + + # In newer versions of bash long opts with '='s are partitioned, but + # it's easier to parse without the '=' + if start_of_option(incomplete) and WORDBREAK in incomplete: + partition_incomplete = incomplete.partition(WORDBREAK) + all_args.append(partition_incomplete[0]) + incomplete = partition_incomplete[2] + elif incomplete == WORDBREAK: + incomplete = "" + + completions = [] + if not has_double_dash and start_of_option(incomplete): + # completions for partial options + for param in ctx.command.params: + if isinstance(param, Option) and not param.hidden: + param_opts = [ + param_opt + for param_opt in param.opts + param.secondary_opts + if param_opt not in all_args or param.multiple + ] + completions.extend( + [(o, param.help) for o in param_opts if o.startswith(incomplete)] + ) + return completions + # completion for option values from user supplied values + for param in ctx.command.params: + if is_incomplete_option(all_args, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + # completion for argument values from user supplied values + for param in ctx.command.params: + if is_incomplete_argument(ctx.params, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + + add_subcommand_completions(ctx, incomplete, completions) + # Sort before returning so that proper ordering can be enforced in custom types. + return sorted(completions) + + +def do_complete(cli, prog_name, include_descriptions): + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + for item in get_choices(cli, prog_name, args, incomplete): + echo(item[0]) + if include_descriptions: + # ZSH has trouble dealing with empty array parameters when + # returned from commands, use '_' to indicate no description + # is present. 
+ echo(item[1] if item[1] else "_") + + return True + + +def do_complete_fish(cli, prog_name): + cwords = split_arg_string(os.environ["COMP_WORDS"]) + incomplete = os.environ["COMP_CWORD"] + args = cwords[1:] + + for item in get_choices(cli, prog_name, args, incomplete): + if item[1]: + echo("{arg}\t{desc}".format(arg=item[0], desc=item[1])) + else: + echo(item[0]) + + return True + + +def bashcomplete(cli, prog_name, complete_var, complete_instr): + if "_" in complete_instr: + command, shell = complete_instr.split("_", 1) + else: + command = complete_instr + shell = "bash" + + if command == "source": + echo(get_completion_script(prog_name, complete_var, shell)) + return True + elif command == "complete": + if shell == "fish": + return do_complete_fish(cli, prog_name) + elif shell in {"bash", "zsh"}: + return do_complete(cli, prog_name, shell == "zsh") + + return False diff --git a/openpype/vendor/python/python_2/click/_compat.py b/openpype/vendor/python/python_2/click/_compat.py new file mode 100644 index 0000000000..60cb115bc5 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_compat.py @@ -0,0 +1,786 @@ +# flake8: noqa +import codecs +import io +import os +import re +import sys +from weakref import WeakKeyDictionary + +PY2 = sys.version_info[0] == 2 +CYGWIN = sys.platform.startswith("cygwin") +MSYS2 = sys.platform.startswith("win") and ("GCC" in sys.version) +# Determine local App Engine environment, per Google's own suggestion +APP_ENGINE = "APPENGINE_RUNTIME" in os.environ and "Development/" in os.environ.get( + "SERVER_SOFTWARE", "" +) +WIN = sys.platform.startswith("win") and not APP_ENGINE and not MSYS2 +DEFAULT_COLUMNS = 80 + + +_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]") + + +def get_filesystem_encoding(): + return sys.getfilesystemencoding() or sys.getdefaultencoding() + + +def _make_text_stream( + stream, encoding, errors, force_readable=False, force_writable=False +): + if encoding is None: + encoding = get_best_encoding(stream) + if errors is None: + errors = "replace" + return _NonClosingTextIOWrapper( + stream, + encoding, + errors, + line_buffering=True, + force_readable=force_readable, + force_writable=force_writable, + ) + + +def is_ascii_encoding(encoding): + """Checks if a given encoding is ascii.""" + try: + return codecs.lookup(encoding).name == "ascii" + except LookupError: + return False + + +def get_best_encoding(stream): + """Returns the default stream encoding if not found.""" + rv = getattr(stream, "encoding", None) or sys.getdefaultencoding() + if is_ascii_encoding(rv): + return "utf-8" + return rv + + +class _NonClosingTextIOWrapper(io.TextIOWrapper): + def __init__( + self, + stream, + encoding, + errors, + force_readable=False, + force_writable=False, + **extra + ): + self._stream = stream = _FixupStream(stream, force_readable, force_writable) + io.TextIOWrapper.__init__(self, stream, encoding, errors, **extra) + + # The io module is a place where the Python 3 text behavior + # was forced upon Python 2, so we need to unbreak + # it to look like Python 2. 
+ if PY2: + + def write(self, x): + if isinstance(x, str) or is_bytes(x): + try: + self.flush() + except Exception: + pass + return self.buffer.write(str(x)) + return io.TextIOWrapper.write(self, x) + + def writelines(self, lines): + for line in lines: + self.write(line) + + def __del__(self): + try: + self.detach() + except Exception: + pass + + def isatty(self): + # https://bitbucket.org/pypy/pypy/issue/1803 + return self._stream.isatty() + + +class _FixupStream(object): + """The new io interface needs more from streams than streams + traditionally implement. As such, this fix-up code is necessary in + some circumstances. + + The forcing of readable and writable flags are there because some tools + put badly patched objects on sys (one such offender are certain version + of jupyter notebook). + """ + + def __init__(self, stream, force_readable=False, force_writable=False): + self._stream = stream + self._force_readable = force_readable + self._force_writable = force_writable + + def __getattr__(self, name): + return getattr(self._stream, name) + + def read1(self, size): + f = getattr(self._stream, "read1", None) + if f is not None: + return f(size) + # We only dispatch to readline instead of read in Python 2 as we + # do not want cause problems with the different implementation + # of line buffering. + if PY2: + return self._stream.readline(size) + return self._stream.read(size) + + def readable(self): + if self._force_readable: + return True + x = getattr(self._stream, "readable", None) + if x is not None: + return x() + try: + self._stream.read(0) + except Exception: + return False + return True + + def writable(self): + if self._force_writable: + return True + x = getattr(self._stream, "writable", None) + if x is not None: + return x() + try: + self._stream.write("") + except Exception: + try: + self._stream.write(b"") + except Exception: + return False + return True + + def seekable(self): + x = getattr(self._stream, "seekable", None) + if x is not None: + return x() + try: + self._stream.seek(self._stream.tell()) + except Exception: + return False + return True + + +if PY2: + text_type = unicode + raw_input = raw_input + string_types = (str, unicode) + int_types = (int, long) + iteritems = lambda x: x.iteritems() + range_type = xrange + + def is_bytes(x): + return isinstance(x, (buffer, bytearray)) + + _identifier_re = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$") + + # For Windows, we need to force stdout/stdin/stderr to binary if it's + # fetched for that. This obviously is not the most correct way to do + # it as it changes global state. Unfortunately, there does not seem to + # be a clear better way to do it as just reopening the file in binary + # mode does not change anything. + # + # An option would be to do what Python 3 does and to open the file as + # binary only, patch it back to the system, and then use a wrapper + # stream that converts newlines. It's not quite clear what's the + # correct option here. + # + # This code also lives in _winconsole for the fallback to the console + # emulation stream. + # + # There are also Windows environments where the `msvcrt` module is not + # available (which is why we use try-catch instead of the WIN variable + # here), such as the Google App Engine development server on Windows. In + # those cases there is just nothing we can do. 
+ def set_binary_mode(f): + return f + + try: + import msvcrt + except ImportError: + pass + else: + + def set_binary_mode(f): + try: + fileno = f.fileno() + except Exception: + pass + else: + msvcrt.setmode(fileno, os.O_BINARY) + return f + + try: + import fcntl + except ImportError: + pass + else: + + def set_binary_mode(f): + try: + fileno = f.fileno() + except Exception: + pass + else: + flags = fcntl.fcntl(fileno, fcntl.F_GETFL) + fcntl.fcntl(fileno, fcntl.F_SETFL, flags & ~os.O_NONBLOCK) + return f + + def isidentifier(x): + return _identifier_re.search(x) is not None + + def get_binary_stdin(): + return set_binary_mode(sys.stdin) + + def get_binary_stdout(): + _wrap_std_stream("stdout") + return set_binary_mode(sys.stdout) + + def get_binary_stderr(): + _wrap_std_stream("stderr") + return set_binary_mode(sys.stderr) + + def get_text_stdin(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stdin, encoding, errors, force_readable=True) + + def get_text_stdout(encoding=None, errors=None): + _wrap_std_stream("stdout") + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stdout, encoding, errors, force_writable=True) + + def get_text_stderr(encoding=None, errors=None): + _wrap_std_stream("stderr") + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _make_text_stream(sys.stderr, encoding, errors, force_writable=True) + + def filename_to_ui(value): + if isinstance(value, bytes): + value = value.decode(get_filesystem_encoding(), "replace") + return value + + +else: + import io + + text_type = str + raw_input = input + string_types = (str,) + int_types = (int,) + range_type = range + isidentifier = lambda x: x.isidentifier() + iteritems = lambda x: iter(x.items()) + + def is_bytes(x): + return isinstance(x, (bytes, memoryview, bytearray)) + + def _is_binary_reader(stream, default=False): + try: + return isinstance(stream.read(0), bytes) + except Exception: + return default + # This happens in some cases where the stream was already + # closed. In this case, we assume the default. + + def _is_binary_writer(stream, default=False): + try: + stream.write(b"") + except Exception: + try: + stream.write("") + return False + except Exception: + pass + return default + return True + + def _find_binary_reader(stream): + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detaching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_reader(stream, False): + return stream + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. + if buf is not None and _is_binary_reader(buf, True): + return buf + + def _find_binary_writer(stream): + # We need to figure out if the given stream is already binary. + # This can happen because the official docs recommend detatching + # the streams to get binary streams. Some code might do this, so + # we need to deal with this case explicitly. + if _is_binary_writer(stream, False): + return stream + + buf = getattr(stream, "buffer", None) + + # Same situation here; this time we assume that the buffer is + # actually binary in case it's closed. 
+ if buf is not None and _is_binary_writer(buf, True): + return buf + + def _stream_is_misconfigured(stream): + """A stream is misconfigured if its encoding is ASCII.""" + # If the stream does not have an encoding set, we assume it's set + # to ASCII. This appears to happen in certain unittest + # environments. It's not quite clear what the correct behavior is + # but this at least will force Click to recover somehow. + return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") + + def _is_compat_stream_attr(stream, attr, value): + """A stream attribute is compatible if it is equal to the + desired value or the desired value is unset and the attribute + has a value. + """ + stream_value = getattr(stream, attr, None) + return stream_value == value or (value is None and stream_value is not None) + + def _is_compatible_text_stream(stream, encoding, errors): + """Check if a stream's encoding and errors attributes are + compatible with the desired values. + """ + return _is_compat_stream_attr( + stream, "encoding", encoding + ) and _is_compat_stream_attr(stream, "errors", errors) + + def _force_correct_text_stream( + text_stream, + encoding, + errors, + is_binary, + find_binary, + force_readable=False, + force_writable=False, + ): + if is_binary(text_stream, False): + binary_reader = text_stream + else: + # If the stream looks compatible, and won't default to a + # misconfigured ascii encoding, return it as-is. + if _is_compatible_text_stream(text_stream, encoding, errors) and not ( + encoding is None and _stream_is_misconfigured(text_stream) + ): + return text_stream + + # Otherwise, get the underlying binary reader. + binary_reader = find_binary(text_stream) + + # If that's not possible, silently use the original reader + # and get mojibake instead of exceptions. + if binary_reader is None: + return text_stream + + # Default errors to replace instead of strict in order to get + # something that works. + if errors is None: + errors = "replace" + + # Wrap the binary stream in a text stream with the correct + # encoding parameters. + return _make_text_stream( + binary_reader, + encoding, + errors, + force_readable=force_readable, + force_writable=force_writable, + ) + + def _force_correct_text_reader(text_reader, encoding, errors, force_readable=False): + return _force_correct_text_stream( + text_reader, + encoding, + errors, + _is_binary_reader, + _find_binary_reader, + force_readable=force_readable, + ) + + def _force_correct_text_writer(text_writer, encoding, errors, force_writable=False): + return _force_correct_text_stream( + text_writer, + encoding, + errors, + _is_binary_writer, + _find_binary_writer, + force_writable=force_writable, + ) + + def get_binary_stdin(): + reader = _find_binary_reader(sys.stdin) + if reader is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdin.") + return reader + + def get_binary_stdout(): + writer = _find_binary_writer(sys.stdout) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stdout." + ) + return writer + + def get_binary_stderr(): + writer = _find_binary_writer(sys.stderr) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stderr." 
+ ) + return writer + + def get_text_stdin(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_reader( + sys.stdin, encoding, errors, force_readable=True + ) + + def get_text_stdout(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stdout, encoding, errors, force_writable=True + ) + + def get_text_stderr(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stderr, encoding, errors, force_writable=True + ) + + def filename_to_ui(value): + if isinstance(value, bytes): + value = value.decode(get_filesystem_encoding(), "replace") + else: + value = value.encode("utf-8", "surrogateescape").decode("utf-8", "replace") + return value + + +def get_streerror(e, default=None): + if hasattr(e, "strerror"): + msg = e.strerror + else: + if default is not None: + msg = default + else: + msg = str(e) + if isinstance(msg, bytes): + msg = msg.decode("utf-8", "replace") + return msg + + +def _wrap_io_open(file, mode, encoding, errors): + """On Python 2, :func:`io.open` returns a text file wrapper that + requires passing ``unicode`` to ``write``. Need to open the file in + binary mode then wrap it in a subclass that can write ``str`` and + ``unicode``. + + Also handles not passing ``encoding`` and ``errors`` in binary mode. + """ + binary = "b" in mode + + if binary: + kwargs = {} + else: + kwargs = {"encoding": encoding, "errors": errors} + + if not PY2 or binary: + return io.open(file, mode, **kwargs) + + f = io.open(file, "{}b".format(mode.replace("t", ""))) + return _make_text_stream(f, **kwargs) + + +def open_stream(filename, mode="r", encoding=None, errors="strict", atomic=False): + binary = "b" in mode + + # Standard streams first. These are simple because they don't need + # special handling for the atomic flag. It's entirely ignored. + if filename == "-": + if any(m in mode for m in ["w", "a", "x"]): + if binary: + return get_binary_stdout(), False + return get_text_stdout(encoding=encoding, errors=errors), False + if binary: + return get_binary_stdin(), False + return get_text_stdin(encoding=encoding, errors=errors), False + + # Non-atomic writes directly go out through the regular open functions. + if not atomic: + return _wrap_io_open(filename, mode, encoding, errors), True + + # Some usability stuff for atomic writes + if "a" in mode: + raise ValueError( + "Appending to an existing file is not supported, because that" + " would involve an expensive `copy`-operation to a temporary" + " file. Open the file in normal `w`-mode and copy explicitly" + " if that's what you're after." + ) + if "x" in mode: + raise ValueError("Use the `overwrite`-parameter instead.") + if "w" not in mode: + raise ValueError("Atomic writes only make sense with `w`-mode.") + + # Atomic writes are more complicated. They work by opening a file + # as a proxy in the same folder and then using the fdopen + # functionality to wrap it in a Python file. Then we wrap it in an + # atomic file that moves the file over on close. 
+ import errno + import random + + try: + perm = os.stat(filename).st_mode + except OSError: + perm = None + + flags = os.O_RDWR | os.O_CREAT | os.O_EXCL + + if binary: + flags |= getattr(os, "O_BINARY", 0) + + while True: + tmp_filename = os.path.join( + os.path.dirname(filename), + ".__atomic-write{:08x}".format(random.randrange(1 << 32)), + ) + try: + fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm) + break + except OSError as e: + if e.errno == errno.EEXIST or ( + os.name == "nt" + and e.errno == errno.EACCES + and os.path.isdir(e.filename) + and os.access(e.filename, os.W_OK) + ): + continue + raise + + if perm is not None: + os.chmod(tmp_filename, perm) # in case perm includes bits in umask + + f = _wrap_io_open(fd, mode, encoding, errors) + return _AtomicFile(f, tmp_filename, os.path.realpath(filename)), True + + +# Used in a destructor call, needs extra protection from interpreter cleanup. +if hasattr(os, "replace"): + _replace = os.replace + _can_replace = True +else: + _replace = os.rename + _can_replace = not WIN + + +class _AtomicFile(object): + def __init__(self, f, tmp_filename, real_filename): + self._f = f + self._tmp_filename = tmp_filename + self._real_filename = real_filename + self.closed = False + + @property + def name(self): + return self._real_filename + + def close(self, delete=False): + if self.closed: + return + self._f.close() + if not _can_replace: + try: + os.remove(self._real_filename) + except OSError: + pass + _replace(self._tmp_filename, self._real_filename) + self.closed = True + + def __getattr__(self, name): + return getattr(self._f, name) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + self.close(delete=exc_type is not None) + + def __repr__(self): + return repr(self._f) + + +auto_wrap_for_ansi = None +colorama = None +get_winterm_size = None + + +def strip_ansi(value): + return _ansi_re.sub("", value) + + +def _is_jupyter_kernel_output(stream): + if WIN: + # TODO: Couldn't test on Windows, should't try to support until + # someone tests the details wrt colorama. + return + + while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)): + stream = stream._stream + + return stream.__class__.__module__.startswith("ipykernel.") + + +def should_strip_ansi(stream=None, color=None): + if color is None: + if stream is None: + stream = sys.stdin + return not isatty(stream) and not _is_jupyter_kernel_output(stream) + return not color + + +# If we're on Windows, we provide transparent integration through +# colorama. This will make ANSI colors through the echo function +# work automatically. +if WIN: + # Windows has a smaller terminal + DEFAULT_COLUMNS = 79 + + from ._winconsole import _get_windows_console_stream, _wrap_std_stream + + def _get_argv_encoding(): + import locale + + return locale.getpreferredencoding() + + if PY2: + + def raw_input(prompt=""): + sys.stderr.flush() + if prompt: + stdout = _default_text_stdout() + stdout.write(prompt) + stdin = _default_text_stdin() + return stdin.readline().rstrip("\r\n") + + try: + import colorama + except ImportError: + pass + else: + _ansi_stream_wrappers = WeakKeyDictionary() + + def auto_wrap_for_ansi(stream, color=None): + """This function wraps a stream so that calls through colorama + are issued to the win32 console API to recolor on demand. It + also ensures to reset the colors if a write call is interrupted + to not destroy the console afterwards. 
+ """ + try: + cached = _ansi_stream_wrappers.get(stream) + except Exception: + cached = None + if cached is not None: + return cached + strip = should_strip_ansi(stream, color) + ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) + rv = ansi_wrapper.stream + _write = rv.write + + def _safe_write(s): + try: + return _write(s) + except: + ansi_wrapper.reset_all() + raise + + rv.write = _safe_write + try: + _ansi_stream_wrappers[stream] = rv + except Exception: + pass + return rv + + def get_winterm_size(): + win = colorama.win32.GetConsoleScreenBufferInfo( + colorama.win32.STDOUT + ).srWindow + return win.Right - win.Left, win.Bottom - win.Top + + +else: + + def _get_argv_encoding(): + return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding() + + _get_windows_console_stream = lambda *x: None + _wrap_std_stream = lambda *x: None + + +def term_len(x): + return len(strip_ansi(x)) + + +def isatty(stream): + try: + return stream.isatty() + except Exception: + return False + + +def _make_cached_stream_func(src_func, wrapper_func): + cache = WeakKeyDictionary() + + def func(): + stream = src_func() + try: + rv = cache.get(stream) + except Exception: + rv = None + if rv is not None: + return rv + rv = wrapper_func() + try: + stream = src_func() # In case wrapper_func() modified the stream + cache[stream] = rv + except Exception: + pass + return rv + + return func + + +_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) +_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) +_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) + + +binary_streams = { + "stdin": get_binary_stdin, + "stdout": get_binary_stdout, + "stderr": get_binary_stderr, +} + +text_streams = { + "stdin": get_text_stdin, + "stdout": get_text_stdout, + "stderr": get_text_stderr, +} diff --git a/openpype/vendor/python/python_2/click/_termui_impl.py b/openpype/vendor/python/python_2/click/_termui_impl.py new file mode 100644 index 0000000000..88bec37701 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_termui_impl.py @@ -0,0 +1,657 @@ +# -*- coding: utf-8 -*- +""" +This module contains implementations for the termui module. To keep the +import time of Click down, some infrequently used functionality is +placed in this module and only imported as needed. 
+""" +import contextlib +import math +import os +import sys +import time + +from ._compat import _default_text_stdout +from ._compat import CYGWIN +from ._compat import get_best_encoding +from ._compat import int_types +from ._compat import isatty +from ._compat import open_stream +from ._compat import range_type +from ._compat import strip_ansi +from ._compat import term_len +from ._compat import WIN +from .exceptions import ClickException +from .utils import echo + +if os.name == "nt": + BEFORE_BAR = "\r" + AFTER_BAR = "\n" +else: + BEFORE_BAR = "\r\033[?25l" + AFTER_BAR = "\033[?25h\n" + + +def _length_hint(obj): + """Returns the length hint of an object.""" + try: + return len(obj) + except (AttributeError, TypeError): + try: + get_hint = type(obj).__length_hint__ + except AttributeError: + return None + try: + hint = get_hint(obj) + except TypeError: + return None + if hint is NotImplemented or not isinstance(hint, int_types) or hint < 0: + return None + return hint + + +class ProgressBar(object): + def __init__( + self, + iterable, + length=None, + fill_char="#", + empty_char=" ", + bar_template="%(bar)s", + info_sep=" ", + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + label=None, + file=None, + color=None, + width=30, + ): + self.fill_char = fill_char + self.empty_char = empty_char + self.bar_template = bar_template + self.info_sep = info_sep + self.show_eta = show_eta + self.show_percent = show_percent + self.show_pos = show_pos + self.item_show_func = item_show_func + self.label = label or "" + if file is None: + file = _default_text_stdout() + self.file = file + self.color = color + self.width = width + self.autowidth = width == 0 + + if length is None: + length = _length_hint(iterable) + if iterable is None: + if length is None: + raise TypeError("iterable or length is required") + iterable = range_type(length) + self.iter = iter(iterable) + self.length = length + self.length_known = length is not None + self.pos = 0 + self.avg = [] + self.start = self.last_eta = time.time() + self.eta_known = False + self.finished = False + self.max_width = None + self.entered = False + self.current_item = None + self.is_hidden = not isatty(self.file) + self._last_line = None + self.short_limit = 0.5 + + def __enter__(self): + self.entered = True + self.render_progress() + return self + + def __exit__(self, exc_type, exc_value, tb): + self.render_finish() + + def __iter__(self): + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + self.render_progress() + return self.generator() + + def __next__(self): + # Iteration is defined in terms of a generator function, + # returned by iter(self); use that to define next(). This works + # because `self.iter` is an iterable consumed by that generator, + # so it is re-entry safe. Calling `next(self.generator())` + # twice works and does "what you want". 
+ return next(iter(self)) + + # Python 2 compat + next = __next__ + + def is_fast(self): + return time.time() - self.start <= self.short_limit + + def render_finish(self): + if self.is_hidden or self.is_fast(): + return + self.file.write(AFTER_BAR) + self.file.flush() + + @property + def pct(self): + if self.finished: + return 1.0 + return min(self.pos / (float(self.length) or 1), 1.0) + + @property + def time_per_iteration(self): + if not self.avg: + return 0.0 + return sum(self.avg) / float(len(self.avg)) + + @property + def eta(self): + if self.length_known and not self.finished: + return self.time_per_iteration * (self.length - self.pos) + return 0.0 + + def format_eta(self): + if self.eta_known: + t = int(self.eta) + seconds = t % 60 + t //= 60 + minutes = t % 60 + t //= 60 + hours = t % 24 + t //= 24 + if t > 0: + return "{}d {:02}:{:02}:{:02}".format(t, hours, minutes, seconds) + else: + return "{:02}:{:02}:{:02}".format(hours, minutes, seconds) + return "" + + def format_pos(self): + pos = str(self.pos) + if self.length_known: + pos += "/{}".format(self.length) + return pos + + def format_pct(self): + return "{: 4}%".format(int(self.pct * 100))[1:] + + def format_bar(self): + if self.length_known: + bar_length = int(self.pct * self.width) + bar = self.fill_char * bar_length + bar += self.empty_char * (self.width - bar_length) + elif self.finished: + bar = self.fill_char * self.width + else: + bar = list(self.empty_char * (self.width or 1)) + if self.time_per_iteration != 0: + bar[ + int( + (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) + * self.width + ) + ] = self.fill_char + bar = "".join(bar) + return bar + + def format_progress_line(self): + show_percent = self.show_percent + + info_bits = [] + if self.length_known and show_percent is None: + show_percent = not self.show_pos + + if self.show_pos: + info_bits.append(self.format_pos()) + if show_percent: + info_bits.append(self.format_pct()) + if self.show_eta and self.eta_known and not self.finished: + info_bits.append(self.format_eta()) + if self.item_show_func is not None: + item_info = self.item_show_func(self.current_item) + if item_info is not None: + info_bits.append(item_info) + + return ( + self.bar_template + % { + "label": self.label, + "bar": self.format_bar(), + "info": self.info_sep.join(info_bits), + } + ).rstrip() + + def render_progress(self): + from .termui import get_terminal_size + + if self.is_hidden: + return + + buf = [] + # Update width in case the terminal has been resized + if self.autowidth: + old_width = self.width + self.width = 0 + clutter_length = term_len(self.format_progress_line()) + new_width = max(0, get_terminal_size()[0] - clutter_length) + if new_width < old_width: + buf.append(BEFORE_BAR) + buf.append(" " * self.max_width) + self.max_width = new_width + self.width = new_width + + clear_width = self.width + if self.max_width is not None: + clear_width = self.max_width + + buf.append(BEFORE_BAR) + line = self.format_progress_line() + line_len = term_len(line) + if self.max_width is None or self.max_width < line_len: + self.max_width = line_len + + buf.append(line) + buf.append(" " * (clear_width - line_len)) + line = "".join(buf) + # Render the line only if it changed. 
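+        # (A repaint is additionally skipped while is_fast() is true,
+        # i.e. within the first ``short_limit`` seconds, so bars that
+        # finish almost instantly never draw at all.)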
+ + if line != self._last_line and not self.is_fast(): + self._last_line = line + echo(line, file=self.file, color=self.color, nl=False) + self.file.flush() + + def make_step(self, n_steps): + self.pos += n_steps + if self.length_known and self.pos >= self.length: + self.finished = True + + if (time.time() - self.last_eta) < 1.0: + return + + self.last_eta = time.time() + + # self.avg is a rolling list of length <= 7 of steps where steps are + # defined as time elapsed divided by the total progress through + # self.length. + if self.pos: + step = (time.time() - self.start) / self.pos + else: + step = time.time() - self.start + + self.avg = self.avg[-6:] + [step] + + self.eta_known = self.length_known + + def update(self, n_steps): + self.make_step(n_steps) + self.render_progress() + + def finish(self): + self.eta_known = 0 + self.current_item = None + self.finished = True + + def generator(self): + """Return a generator which yields the items added to the bar + during construction, and updates the progress bar *after* the + yielded block returns. + """ + # WARNING: the iterator interface for `ProgressBar` relies on + # this and only works because this is a simple generator which + # doesn't create or manage additional state. If this function + # changes, the impact should be evaluated both against + # `iter(bar)` and `next(bar)`. `next()` in particular may call + # `self.generator()` repeatedly, and this must remain safe in + # order for that interface to work. + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + + if self.is_hidden: + for rv in self.iter: + yield rv + else: + for rv in self.iter: + self.current_item = rv + yield rv + self.update(1) + self.finish() + self.render_progress() + + +def pager(generator, color=None): + """Decide what method to use for paging through text.""" + stdout = _default_text_stdout() + if not isatty(sys.stdin) or not isatty(stdout): + return _nullpager(stdout, generator, color) + pager_cmd = (os.environ.get("PAGER", None) or "").strip() + if pager_cmd: + if WIN: + return _tempfilepager(generator, pager_cmd, color) + return _pipepager(generator, pager_cmd, color) + if os.environ.get("TERM") in ("dumb", "emacs"): + return _nullpager(stdout, generator, color) + if WIN or sys.platform.startswith("os2"): + return _tempfilepager(generator, "more <", color) + if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0: + return _pipepager(generator, "less", color) + + import tempfile + + fd, filename = tempfile.mkstemp() + os.close(fd) + try: + if hasattr(os, "system") and os.system('more "{}"'.format(filename)) == 0: + return _pipepager(generator, "more", color) + return _nullpager(stdout, generator, color) + finally: + os.unlink(filename) + + +def _pipepager(generator, cmd, color): + """Page through text by feeding it to another program. Invoking a + pager through this might support colors. 
+ """ + import subprocess + + env = dict(os.environ) + + # If we're piping to less we might support colors under the + # condition that + cmd_detail = cmd.rsplit("/", 1)[-1].split() + if color is None and cmd_detail[0] == "less": + less_flags = "{}{}".format(os.environ.get("LESS", ""), " ".join(cmd_detail[1:])) + if not less_flags: + env["LESS"] = "-R" + color = True + elif "r" in less_flags or "R" in less_flags: + color = True + + c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env) + encoding = get_best_encoding(c.stdin) + try: + for text in generator: + if not color: + text = strip_ansi(text) + + c.stdin.write(text.encode(encoding, "replace")) + except (IOError, KeyboardInterrupt): + pass + else: + c.stdin.close() + + # Less doesn't respect ^C, but catches it for its own UI purposes (aborting + # search or other commands inside less). + # + # That means when the user hits ^C, the parent process (click) terminates, + # but less is still alive, paging the output and messing up the terminal. + # + # If the user wants to make the pager exit on ^C, they should set + # `LESS='-K'`. It's not our decision to make. + while True: + try: + c.wait() + except KeyboardInterrupt: + pass + else: + break + + +def _tempfilepager(generator, cmd, color): + """Page through text by invoking a program on a temporary file.""" + import tempfile + + filename = tempfile.mktemp() + # TODO: This never terminates if the passed generator never terminates. + text = "".join(generator) + if not color: + text = strip_ansi(text) + encoding = get_best_encoding(sys.stdout) + with open_stream(filename, "wb")[0] as f: + f.write(text.encode(encoding)) + try: + os.system('{} "{}"'.format(cmd, filename)) + finally: + os.unlink(filename) + + +def _nullpager(stream, generator, color): + """Simply print unformatted text. 
This is the ultimate fallback."""
+    for text in generator:
+        if not color:
+            text = strip_ansi(text)
+        stream.write(text)
+
+
+class Editor(object):
+    def __init__(self, editor=None, env=None, require_save=True, extension=".txt"):
+        self.editor = editor
+        self.env = env
+        self.require_save = require_save
+        self.extension = extension
+
+    def get_editor(self):
+        if self.editor is not None:
+            return self.editor
+        for key in "VISUAL", "EDITOR":
+            rv = os.environ.get(key)
+            if rv:
+                return rv
+        if WIN:
+            return "notepad"
+        for editor in "sensible-editor", "vim", "nano":
+            if os.system("which {} >/dev/null 2>&1".format(editor)) == 0:
+                return editor
+        return "vi"
+
+    def edit_file(self, filename):
+        import subprocess
+
+        editor = self.get_editor()
+        if self.env:
+            environ = os.environ.copy()
+            environ.update(self.env)
+        else:
+            environ = None
+        try:
+            c = subprocess.Popen(
+                '{} "{}"'.format(editor, filename), env=environ, shell=True,
+            )
+            exit_code = c.wait()
+            if exit_code != 0:
+                raise ClickException("{}: Editing failed!".format(editor))
+        except OSError as e:
+            raise ClickException("{}: Editing failed: {}".format(editor, e))
+
+    def edit(self, text):
+        import tempfile
+
+        text = text or ""
+        if text and not text.endswith("\n"):
+            text += "\n"
+
+        fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension)
+        try:
+            if WIN:
+                encoding = "utf-8-sig"
+                text = text.replace("\n", "\r\n")
+            else:
+                encoding = "utf-8"
+            text = text.encode(encoding)
+
+            f = os.fdopen(fd, "wb")
+            f.write(text)
+            f.close()
+            timestamp = os.path.getmtime(name)
+
+            self.edit_file(name)
+
+            if self.require_save and os.path.getmtime(name) == timestamp:
+                return None
+
+            f = open(name, "rb")
+            try:
+                rv = f.read()
+            finally:
+                f.close()
+            return rv.decode("utf-8-sig").replace("\r\n", "\n")
+        finally:
+            os.unlink(name)
+
+
+def open_url(url, wait=False, locate=False):
+    import subprocess
+
+    def _unquote_file(url):
+        # On Python 2 (the only target of this vendored copy), unquote
+        # lives directly in urllib, so no fallback import is needed.
+        import urllib
+
+        if url.startswith("file://"):
+            url = urllib.unquote(url[7:])
+        return url
+
+    if sys.platform == "darwin":
+        args = ["open"]
+        if wait:
+            args.append("-W")
+        if locate:
+            args.append("-R")
+        args.append(_unquote_file(url))
+        null = open("/dev/null", "w")
+        try:
+            return subprocess.Popen(args, stderr=null).wait()
+        finally:
+            null.close()
+    elif WIN:
+        if locate:
+            url = _unquote_file(url)
+            args = 'explorer /select,"{}"'.format(_unquote_file(url.replace('"', "")))
+        else:
+            args = 'start {} "" "{}"'.format(
+                "/WAIT" if wait else "", url.replace('"', "")
+            )
+        return os.system(args)
+    elif CYGWIN:
+        if locate:
+            url = _unquote_file(url)
+            args = 'cygstart "{}"'.format(os.path.dirname(url).replace('"', ""))
+        else:
+            args = 'cygstart {} "{}"'.format("-w" if wait else "", url.replace('"', ""))
+        return os.system(args)
+
+    try:
+        if locate:
+            url = os.path.dirname(_unquote_file(url)) or "."
+ else: + url = _unquote_file(url) + c = subprocess.Popen(["xdg-open", url]) + if wait: + return c.wait() + return 0 + except OSError: + if url.startswith(("http://", "https://")) and not locate and not wait: + import webbrowser + + webbrowser.open(url) + return 0 + return 1 + + +def _translate_ch_to_exc(ch): + if ch == u"\x03": + raise KeyboardInterrupt() + if ch == u"\x04" and not WIN: # Unix-like, Ctrl+D + raise EOFError() + if ch == u"\x1a" and WIN: # Windows, Ctrl+Z + raise EOFError() + + +if WIN: + import msvcrt + + @contextlib.contextmanager + def raw_terminal(): + yield + + def getchar(echo): + # The function `getch` will return a bytes object corresponding to + # the pressed character. Since Windows 10 build 1803, it will also + # return \x00 when called a second time after pressing a regular key. + # + # `getwch` does not share this probably-bugged behavior. Moreover, it + # returns a Unicode object by default, which is what we want. + # + # Either of these functions will return \x00 or \xe0 to indicate + # a special key, and you need to call the same function again to get + # the "rest" of the code. The fun part is that \u00e0 is + # "latin small letter a with grave", so if you type that on a French + # keyboard, you _also_ get a \xe0. + # E.g., consider the Up arrow. This returns \xe0 and then \x48. The + # resulting Unicode string reads as "a with grave" + "capital H". + # This is indistinguishable from when the user actually types + # "a with grave" and then "capital H". + # + # When \xe0 is returned, we assume it's part of a special-key sequence + # and call `getwch` again, but that means that when the user types + # the \u00e0 character, `getchar` doesn't return until a second + # character is typed. + # The alternative is returning immediately, but that would mess up + # cross-platform handling of arrow keys and others that start with + # \xe0. Another option is using `getch`, but then we can't reliably + # read non-ASCII characters, because return values of `getch` are + # limited to the current 8-bit codepage. + # + # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` + # is doing the right thing in more situations than with `getch`. + if echo: + func = msvcrt.getwche + else: + func = msvcrt.getwch + + rv = func() + if rv in (u"\x00", u"\xe0"): + # \x00 and \xe0 are control characters that indicate special key, + # see above. 
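+            # For example, pressing the Up arrow yields u"\xe0" followed
+            # by u"\x48", so the caller receives the two-character
+            # string u"\xe0\x48".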
+ rv += func() + _translate_ch_to_exc(rv) + return rv + + +else: + import tty + import termios + + @contextlib.contextmanager + def raw_terminal(): + if not isatty(sys.stdin): + f = open("/dev/tty") + fd = f.fileno() + else: + fd = sys.stdin.fileno() + f = None + try: + old_settings = termios.tcgetattr(fd) + try: + tty.setraw(fd) + yield fd + finally: + termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) + sys.stdout.flush() + if f is not None: + f.close() + except termios.error: + pass + + def getchar(echo): + with raw_terminal() as fd: + ch = os.read(fd, 32) + ch = ch.decode(get_best_encoding(sys.stdin), "replace") + if echo and isatty(sys.stdout): + sys.stdout.write(ch) + _translate_ch_to_exc(ch) + return ch diff --git a/openpype/vendor/python/python_2/click/_textwrap.py b/openpype/vendor/python/python_2/click/_textwrap.py new file mode 100644 index 0000000000..6959087b7f --- /dev/null +++ b/openpype/vendor/python/python_2/click/_textwrap.py @@ -0,0 +1,37 @@ +import textwrap +from contextlib import contextmanager + + +class TextWrapper(textwrap.TextWrapper): + def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width): + space_left = max(width - cur_len, 1) + + if self.break_long_words: + last = reversed_chunks[-1] + cut = last[:space_left] + res = last[space_left:] + cur_line.append(cut) + reversed_chunks[-1] = res + elif not cur_line: + cur_line.append(reversed_chunks.pop()) + + @contextmanager + def extra_indent(self, indent): + old_initial_indent = self.initial_indent + old_subsequent_indent = self.subsequent_indent + self.initial_indent += indent + self.subsequent_indent += indent + try: + yield + finally: + self.initial_indent = old_initial_indent + self.subsequent_indent = old_subsequent_indent + + def indent_only(self, text): + rv = [] + for idx, line in enumerate(text.splitlines()): + indent = self.initial_indent + if idx > 0: + indent = self.subsequent_indent + rv.append(indent + line) + return "\n".join(rv) diff --git a/openpype/vendor/python/python_2/click/_unicodefun.py b/openpype/vendor/python/python_2/click/_unicodefun.py new file mode 100644 index 0000000000..781c365227 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_unicodefun.py @@ -0,0 +1,131 @@ +import codecs +import os +import sys + +from ._compat import PY2 + + +def _find_unicode_literals_frame(): + import __future__ + + if not hasattr(sys, "_getframe"): # not all Python implementations have it + return 0 + frm = sys._getframe(1) + idx = 1 + while frm is not None: + if frm.f_globals.get("__name__", "").startswith("click."): + frm = frm.f_back + idx += 1 + elif frm.f_code.co_flags & __future__.unicode_literals.compiler_flag: + return idx + else: + break + return 0 + + +def _check_for_unicode_literals(): + if not __debug__: + return + + from . import disable_unicode_literals_warning + + if not PY2 or disable_unicode_literals_warning: + return + bad_frame = _find_unicode_literals_frame() + if bad_frame <= 0: + return + from warnings import warn + + warn( + Warning( + "Click detected the use of the unicode_literals __future__" + " import. This is heavily discouraged because it can" + " introduce subtle bugs in your code. You should instead" + ' use explicit u"" literals for your unicode strings. 
For'
+            " more information see"
+            " https://click.palletsprojects.com/python3/"
+        ),
+        stacklevel=bad_frame,
+    )
+
+
+def _verify_python3_env():
+    """Ensures that the environment is good for unicode on Python 3."""
+    if PY2:
+        return
+    try:
+        import locale
+
+        fs_enc = codecs.lookup(locale.getpreferredencoding()).name
+    except Exception:
+        fs_enc = "ascii"
+    if fs_enc != "ascii":
+        return
+
+    extra = ""
+    if os.name == "posix":
+        import subprocess
+
+        try:
+            rv = subprocess.Popen(
+                ["locale", "-a"], stdout=subprocess.PIPE, stderr=subprocess.PIPE
+            ).communicate()[0]
+        except OSError:
+            rv = b""
+        good_locales = set()
+        has_c_utf8 = False
+
+        # Make sure we're operating on text here.
+        if isinstance(rv, bytes):
+            rv = rv.decode("ascii", "replace")
+
+        for line in rv.splitlines():
+            locale = line.strip()
+            if locale.lower().endswith((".utf-8", ".utf8")):
+                good_locales.add(locale)
+                if locale.lower() in ("c.utf8", "c.utf-8"):
+                    has_c_utf8 = True
+
+        extra += "\n\n"
+        if not good_locales:
+            extra += (
+                "Additional information: on this system no suitable"
+                " UTF-8 locales were discovered. This most likely"
+                " requires resolving by reconfiguring the locale"
+                " system."
+            )
+        elif has_c_utf8:
+            extra += (
+                "This system supports the C.UTF-8 locale which is"
+                " recommended. You might be able to resolve your issue"
+                " by exporting the following environment variables:\n\n"
+                " export LC_ALL=C.UTF-8\n"
+                " export LANG=C.UTF-8"
+            )
+        else:
+            extra += (
+                "This system lists a couple of UTF-8 supporting locales"
+                " that you can pick from. The following suitable"
+                " locales were discovered: {}".format(", ".join(sorted(good_locales)))
+            )
+
+        bad_locale = None
+        for locale in os.environ.get("LC_ALL"), os.environ.get("LANG"):
+            if locale and locale.lower().endswith((".utf-8", ".utf8")):
+                bad_locale = locale
+            if locale is not None:
+                break
+        if bad_locale is not None:
+            extra += (
+                "\n\nClick discovered that you exported a UTF-8 locale"
+                " but the locale system could not pick it up because"
+                " it does not exist. The exported locale is"
+                " '{}' but it is not supported.".format(bad_locale)
+            )
+
+    raise RuntimeError(
+        "Click will abort further execution because Python 3 was"
+        " configured to use ASCII as encoding for the environment."
+        " Consult https://click.palletsprojects.com/python3/ for"
+        " mitigation steps.{}".format(extra)
+    )
diff --git a/openpype/vendor/python/python_2/click/_winconsole.py b/openpype/vendor/python/python_2/click/_winconsole.py
new file mode 100644
index 0000000000..b6c4274af0
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/_winconsole.py
@@ -0,0 +1,370 @@
+# -*- coding: utf-8 -*-
+# This module is based on the excellent work by Adam Bartoš who
+# provided a lot of what went into the implementation here in
+# the discussion to issue1602 in the Python bug tracker.
+#
+# There are some general differences in regards to how this works
+# compared to the original patches as we do not need to patch
+# the entire interpreter but just work in our little world of
+# echo and prompt.
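+#
+# In short: when a standard stream is attached to a real console, the
+# _compat module swaps in a wrapper that talks to the console through
+# ReadConsoleW/WriteConsoleW, roughly like this (illustrative):
+#
+#     stream = _get_windows_console_stream(sys.stdout, None, None)
+#     if stream is not None:
+#         stream.write(u"caf\xe9")  # reaches the console as UTF-16-LE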
+import ctypes
+import io
+import os
+import sys
+import time
+import zlib
+from ctypes import byref
+from ctypes import c_char
+from ctypes import c_char_p
+from ctypes import c_int
+from ctypes import c_ssize_t
+from ctypes import c_ulong
+from ctypes import c_void_p
+from ctypes import POINTER
+from ctypes import py_object
+from ctypes import windll
+from ctypes import WinError
+from ctypes import WINFUNCTYPE
+from ctypes.wintypes import DWORD
+from ctypes.wintypes import HANDLE
+from ctypes.wintypes import LPCWSTR
+from ctypes.wintypes import LPWSTR
+
+import msvcrt
+
+from ._compat import _NonClosingTextIOWrapper
+from ._compat import PY2
+from ._compat import text_type
+
+try:
+    from ctypes import pythonapi
+
+    PyObject_GetBuffer = pythonapi.PyObject_GetBuffer
+    PyBuffer_Release = pythonapi.PyBuffer_Release
+except ImportError:
+    pythonapi = None
+
+
+c_ssize_p = POINTER(c_ssize_t)
+
+kernel32 = windll.kernel32
+GetStdHandle = kernel32.GetStdHandle
+ReadConsoleW = kernel32.ReadConsoleW
+WriteConsoleW = kernel32.WriteConsoleW
+GetConsoleMode = kernel32.GetConsoleMode
+GetLastError = kernel32.GetLastError
+GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
+CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(
+    ("CommandLineToArgvW", windll.shell32)
+)
+LocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)(
+    ("LocalFree", windll.kernel32)
+)
+
+
+STDIN_HANDLE = GetStdHandle(-10)
+STDOUT_HANDLE = GetStdHandle(-11)
+STDERR_HANDLE = GetStdHandle(-12)
+
+
+PyBUF_SIMPLE = 0
+PyBUF_WRITABLE = 1
+
+ERROR_SUCCESS = 0
+ERROR_NOT_ENOUGH_MEMORY = 8
+ERROR_OPERATION_ABORTED = 995
+
+STDIN_FILENO = 0
+STDOUT_FILENO = 1
+STDERR_FILENO = 2
+
+EOF = b"\x1a"
+MAX_BYTES_WRITTEN = 32767
+
+
+class Py_buffer(ctypes.Structure):
+    _fields_ = [
+        ("buf", c_void_p),
+        ("obj", py_object),
+        ("len", c_ssize_t),
+        ("itemsize", c_ssize_t),
+        ("readonly", c_int),
+        ("ndim", c_int),
+        ("format", c_char_p),
+        ("shape", c_ssize_p),
+        ("strides", c_ssize_p),
+        ("suboffsets", c_ssize_p),
+        ("internal", c_void_p),
+    ]
+
+    if PY2:
+        _fields_.insert(-1, ("smalltable", c_ssize_t * 2))
+
+
+# On PyPy we cannot get buffers so our ability to operate here is
+# severely limited.
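+# Where it is available, get_buffer() below returns a ctypes char-array
+# view over any object supporting the buffer protocol, e.g.
+# (illustrative):
+#
+#     data = bytearray(4)
+#     view = get_buffer(data, writable=True)  # aliases data's memory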
+if pythonapi is None:
+    get_buffer = None
+else:
+
+    def get_buffer(obj, writable=False):
+        buf = Py_buffer()
+        flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
+        PyObject_GetBuffer(py_object(obj), byref(buf), flags)
+        try:
+            buffer_type = c_char * buf.len
+            return buffer_type.from_address(buf.buf)
+        finally:
+            PyBuffer_Release(byref(buf))
+
+
+class _WindowsConsoleRawIOBase(io.RawIOBase):
+    def __init__(self, handle):
+        self.handle = handle
+
+    def isatty(self):
+        io.RawIOBase.isatty(self)
+        return True
+
+
+class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
+    def readable(self):
+        return True
+
+    def readinto(self, b):
+        bytes_to_be_read = len(b)
+        if not bytes_to_be_read:
+            return 0
+        elif bytes_to_be_read % 2:
+            raise ValueError(
+                "cannot read odd number of bytes from UTF-16-LE encoded console"
+            )
+
+        buffer = get_buffer(b, writable=True)
+        code_units_to_be_read = bytes_to_be_read // 2
+        code_units_read = c_ulong()
+
+        rv = ReadConsoleW(
+            HANDLE(self.handle),
+            buffer,
+            code_units_to_be_read,
+            byref(code_units_read),
+            None,
+        )
+        if GetLastError() == ERROR_OPERATION_ABORTED:
+            # wait for KeyboardInterrupt
+            time.sleep(0.1)
+        if not rv:
+            raise OSError("Windows error: {}".format(GetLastError()))
+
+        if buffer[0] == EOF:
+            return 0
+        return 2 * code_units_read.value
+
+
+class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
+    def writable(self):
+        return True
+
+    @staticmethod
+    def _get_error_message(errno):
+        if errno == ERROR_SUCCESS:
+            return "ERROR_SUCCESS"
+        elif errno == ERROR_NOT_ENOUGH_MEMORY:
+            return "ERROR_NOT_ENOUGH_MEMORY"
+        return "Windows error {}".format(errno)
+
+    def write(self, b):
+        bytes_to_be_written = len(b)
+        buf = get_buffer(b)
+        code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
+        code_units_written = c_ulong()
+
+        WriteConsoleW(
+            HANDLE(self.handle),
+            buf,
+            code_units_to_be_written,
+            byref(code_units_written),
+            None,
+        )
+        bytes_written = 2 * code_units_written.value
+
+        if bytes_written == 0 and bytes_to_be_written > 0:
+            raise OSError(self._get_error_message(GetLastError()))
+        return bytes_written
+
+
+class ConsoleStream(object):
+    def __init__(self, text_stream, byte_stream):
+        self._text_stream = text_stream
+        self.buffer = byte_stream
+
+    @property
+    def name(self):
+        return self.buffer.name
+
+    def write(self, x):
+        if isinstance(x, text_type):
+            return self._text_stream.write(x)
+        try:
+            self.flush()
+        except Exception:
+            pass
+        return self.buffer.write(x)
+
+    def writelines(self, lines):
+        for line in lines:
+            self.write(line)
+
+    def __getattr__(self, name):
+        return getattr(self._text_stream, name)
+
+    def isatty(self):
+        return self.buffer.isatty()
+
+    def __repr__(self):
+        return "<ConsoleStream name={} encoding={}>".format(
+            self.name, self.encoding
+        )
+
+
+class WindowsChunkedWriter(object):
+    """
+    Wraps a stream (such as stdout), acting as a transparent proxy for all
+    attribute access apart from method 'write()' which we wrap to write in
+    limited chunks due to a Windows limitation on binary console streams.
+    """
+
+    def __init__(self, wrapped):
+        # double-underscore everything to prevent clashes with names of
+        # attributes on the wrapped stream object.
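+        # (Name mangling stores this as
+        # self._WindowsChunkedWriter__wrapped, keeping the proxy's own
+        # state out of the namespace that __getattr__ below delegates.)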
+        self.__wrapped = wrapped
+
+    def __getattr__(self, name):
+        return getattr(self.__wrapped, name)
+
+    def write(self, text):
+        total_to_write = len(text)
+        written = 0
+
+        while written < total_to_write:
+            to_write = min(total_to_write - written, MAX_BYTES_WRITTEN)
+            self.__wrapped.write(text[written : written + to_write])
+            written += to_write
+
+
+_wrapped_std_streams = set()
+
+
+def _wrap_std_stream(name):
+    # Python 2 & Windows 7 and below
+    if (
+        PY2
+        and sys.getwindowsversion()[:2] <= (6, 1)
+        and name not in _wrapped_std_streams
+    ):
+        setattr(sys, name, WindowsChunkedWriter(getattr(sys, name)))
+        _wrapped_std_streams.add(name)
+
+
+def _get_text_stdin(buffer_stream):
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return ConsoleStream(text_stream, buffer_stream)
+
+
+def _get_text_stdout(buffer_stream):
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return ConsoleStream(text_stream, buffer_stream)
+
+
+def _get_text_stderr(buffer_stream):
+    text_stream = _NonClosingTextIOWrapper(
+        io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
+        "utf-16-le",
+        "strict",
+        line_buffering=True,
+    )
+    return ConsoleStream(text_stream, buffer_stream)
+
+
+if PY2:
+
+    def _hash_py_argv():
+        return zlib.crc32("\x00".join(sys.argv[1:]))
+
+    _initial_argv_hash = _hash_py_argv()
+
+    def _get_windows_argv():
+        argc = c_int(0)
+        argv_unicode = CommandLineToArgvW(GetCommandLineW(), byref(argc))
+        if not argv_unicode:
+            raise WinError()
+        try:
+            argv = [argv_unicode[i] for i in range(0, argc.value)]
+        finally:
+            LocalFree(argv_unicode)
+            del argv_unicode
+
+        if not hasattr(sys, "frozen"):
+            argv = argv[1:]
+            while len(argv) > 0:
+                arg = argv[0]
+                if not arg.startswith("-") or arg == "-":
+                    break
+                argv = argv[1:]
+                if arg.startswith(("-c", "-m")):
+                    break
+
+        return argv[1:]
+
+
+_stream_factories = {
+    0: _get_text_stdin,
+    1: _get_text_stdout,
+    2: _get_text_stderr,
+}
+
+
+def _is_console(f):
+    if not hasattr(f, "fileno"):
+        return False
+
+    try:
+        fileno = f.fileno()
+    except OSError:
+        return False
+
+    handle = msvcrt.get_osfhandle(fileno)
+    return bool(GetConsoleMode(handle, byref(DWORD())))
+
+
+def _get_windows_console_stream(f, encoding, errors):
+    if (
+        get_buffer is not None
+        and encoding in ("utf-16-le", None)
+        and errors in ("strict", None)
+        and _is_console(f)
+    ):
+        func = _stream_factories.get(f.fileno())
+        if func is not None:
+            if not PY2:
+                f = getattr(f, "buffer", None)
+                if f is None:
+                    return None
+            else:
+                # If we are on Python 2 we need to set the stream that we
+                # deal with to binary mode as otherwise the exercise is a
+                # bit moot. The same problems apply as for
+                # get_binary_stdin and friends from _compat.
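+                # (os.O_BINARY turns off msvcrt's newline translation,
+                # so e.g. b"a\nb" is written as exactly three bytes.)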
+ msvcrt.setmode(f.fileno(), os.O_BINARY) + return func(f) diff --git a/openpype/vendor/python/python_2/click/core.py b/openpype/vendor/python/python_2/click/core.py new file mode 100644 index 0000000000..f58bf26d2f --- /dev/null +++ b/openpype/vendor/python/python_2/click/core.py @@ -0,0 +1,2030 @@ +import errno +import inspect +import os +import sys +from contextlib import contextmanager +from functools import update_wrapper +from itertools import repeat + +from ._compat import isidentifier +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types +from ._unicodefun import _check_for_unicode_literals +from ._unicodefun import _verify_python3_env +from .exceptions import Abort +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import Exit +from .exceptions import MissingParameter +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import join_options +from .globals import pop_context +from .globals import push_context +from .parser import OptionParser +from .parser import split_opt +from .termui import confirm +from .termui import prompt +from .termui import style +from .types import BOOL +from .types import convert_type +from .types import IntRange +from .utils import echo +from .utils import get_os_args +from .utils import make_default_short_help +from .utils import make_str +from .utils import PacifyFlushWrapper + +_missing = object() + +SUBCOMMAND_METAVAR = "COMMAND [ARGS]..." +SUBCOMMANDS_METAVAR = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." + +DEPRECATED_HELP_NOTICE = " (DEPRECATED)" +DEPRECATED_INVOKE_NOTICE = "DeprecationWarning: The command %(name)s is deprecated." + + +def _maybe_show_deprecated_notice(cmd): + if cmd.deprecated: + echo(style(DEPRECATED_INVOKE_NOTICE % {"name": cmd.name}, fg="red"), err=True) + + +def fast_exit(code): + """Exit without garbage collection, this speeds up exit by about 10ms for + things like bash completion. + """ + sys.stdout.flush() + sys.stderr.flush() + os._exit(code) + + +def _bashcomplete(cmd, prog_name, complete_var=None): + """Internal handler for the bash completion support.""" + if complete_var is None: + complete_var = "_{}_COMPLETE".format(prog_name.replace("-", "_").upper()) + complete_instr = os.environ.get(complete_var) + if not complete_instr: + return + + from ._bashcomplete import bashcomplete + + if bashcomplete(cmd, prog_name, complete_var, complete_instr): + fast_exit(1) + + +def _check_multicommand(base_command, cmd_name, cmd, register=False): + if not base_command.chain or not isinstance(cmd, MultiCommand): + return + if register: + hint = ( + "It is not possible to add multi commands as children to" + " another multi command that is in chain mode." + ) + else: + hint = ( + "Found a multi command as subcommand to a multi command" + " that is in chain mode. This is not supported." + ) + raise RuntimeError( + "{}. Command '{}' is set to chain and '{}' was added as" + " subcommand but it in itself is a multi command. 
('{}' is a {}" + " within a chained {} named '{}').".format( + hint, + base_command.name, + cmd_name, + cmd_name, + cmd.__class__.__name__, + base_command.__class__.__name__, + base_command.name, + ) + ) + + +def batch(iterable, batch_size): + return list(zip(*repeat(iter(iterable), batch_size))) + + +def invoke_param_callback(callback, ctx, param, value): + code = getattr(callback, "__code__", None) + args = getattr(code, "co_argcount", 3) + + if args < 3: + from warnings import warn + + warn( + "Parameter callbacks take 3 args, (ctx, param, value). The" + " 2-arg style is deprecated and will be removed in 8.0.".format(callback), + DeprecationWarning, + stacklevel=3, + ) + return callback(ctx, value) + + return callback(ctx, param, value) + + +@contextmanager +def augment_usage_errors(ctx, param=None): + """Context manager that attaches extra information to exceptions.""" + try: + yield + except BadParameter as e: + if e.ctx is None: + e.ctx = ctx + if param is not None and e.param is None: + e.param = param + raise + except UsageError as e: + if e.ctx is None: + e.ctx = ctx + raise + + +def iter_params_for_processing(invocation_order, declaration_order): + """Given a sequence of parameters in the order as should be considered + for processing and an iterable of parameters that exist, this returns + a list in the correct order as they should be processed. + """ + + def sort_key(item): + try: + idx = invocation_order.index(item) + except ValueError: + idx = float("inf") + return (not item.is_eager, idx) + + return sorted(declaration_order, key=sort_key) + + +class Context(object): + """The context is a special internal object that holds state relevant + for the script execution at every single level. It's normally invisible + to commands unless they opt-in to getting access to it. + + The context is useful as it can pass internal objects around and can + control special execution features such as reading data from + environment variables. + + A context can be used as context manager in which case it will call + :meth:`close` on teardown. + + .. versionadded:: 2.0 + Added the `resilient_parsing`, `help_option_names`, + `token_normalize_func` parameters. + + .. versionadded:: 3.0 + Added the `allow_extra_args` and `allow_interspersed_args` + parameters. + + .. versionadded:: 4.0 + Added the `color`, `ignore_unknown_options`, and + `max_content_width` parameters. + + .. versionadded:: 7.1 + Added the `show_default` parameter. + + :param command: the command class for this context. + :param parent: the parent context. + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it is usually + the name of the script, for commands below it it's + the name of the script. + :param obj: an arbitrary object of user data. + :param auto_envvar_prefix: the prefix to use for automatic environment + variables. If this is `None` then reading + from environment variables is disabled. This + does not affect manually set environment + variables which are always read. + :param default_map: a dictionary (like object) with default values + for parameters. + :param terminal_width: the width of the terminal. The default is + inherit from parent context. If no context + defines the terminal width then auto + detection will be applied. + :param max_content_width: the maximum width for content rendered by + Click (this currently only affects help + pages). This defaults to 80 characters if + not overridden. 
In other words: even if the + terminal is larger than that, Click will not + format things wider than 80 characters by + default. In addition to that, formatters might + add some safety mapping on the right. + :param resilient_parsing: if this flag is enabled then Click will + parse without any interactivity or callback + invocation. Default values will also be + ignored. This is useful for implementing + things such as completion support. + :param allow_extra_args: if this is set to `True` then extra arguments + at the end will not raise an error and will be + kept on the context. The default is to inherit + from the command. + :param allow_interspersed_args: if this is set to `False` then options + and arguments cannot be mixed. The + default is to inherit from the command. + :param ignore_unknown_options: instructs click to ignore options it does + not know and keeps them for later + processing. + :param help_option_names: optionally a list of strings that define how + the default help parameter is named. The + default is ``['--help']``. + :param token_normalize_func: an optional function that is used to + normalize tokens (options, choices, + etc.). This for instance can be used to + implement case insensitive behavior. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are used in texts that Click prints which is by + default not the case. This for instance would affect + help output. + :param show_default: if True, shows defaults for all options. + Even if an option is later created with show_default=False, + this command-level setting overrides it. + """ + + def __init__( + self, + command, + parent=None, + info_name=None, + obj=None, + auto_envvar_prefix=None, + default_map=None, + terminal_width=None, + max_content_width=None, + resilient_parsing=False, + allow_extra_args=None, + allow_interspersed_args=None, + ignore_unknown_options=None, + help_option_names=None, + token_normalize_func=None, + color=None, + show_default=None, + ): + #: the parent context or `None` if none exists. + self.parent = parent + #: the :class:`Command` for this context. + self.command = command + #: the descriptive information name + self.info_name = info_name + #: the parsed parameters except if the value is hidden in which + #: case it's not remembered. + self.params = {} + #: the leftover arguments. + self.args = [] + #: protected arguments. These are arguments that are prepended + #: to `args` when certain parsing scenarios are encountered but + #: must be never propagated to another arguments. This is used + #: to implement nested parsing. + self.protected_args = [] + if obj is None and parent is not None: + obj = parent.obj + #: the user object stored. + self.obj = obj + self._meta = getattr(parent, "meta", {}) + + #: A dictionary (-like object) with defaults for parameters. + if ( + default_map is None + and parent is not None + and parent.default_map is not None + ): + default_map = parent.default_map.get(info_name) + self.default_map = default_map + + #: This flag indicates if a subcommand is going to be executed. A + #: group callback can use this information to figure out if it's + #: being executed directly or because the execution flow passes + #: onwards to a subcommand. By default it's None, but it can be + #: the name of the subcommand to execute. + #: + #: If chaining is enabled this will be set to ``'*'`` in case + #: any commands are executed. 
It is however not possible to + #: figure out which ones. If you require this knowledge you + #: should use a :func:`resultcallback`. + self.invoked_subcommand = None + + if terminal_width is None and parent is not None: + terminal_width = parent.terminal_width + #: The width of the terminal (None is autodetection). + self.terminal_width = terminal_width + + if max_content_width is None and parent is not None: + max_content_width = parent.max_content_width + #: The maximum width of formatted content (None implies a sensible + #: default which is 80 for most things). + self.max_content_width = max_content_width + + if allow_extra_args is None: + allow_extra_args = command.allow_extra_args + #: Indicates if the context allows extra args or if it should + #: fail on parsing. + #: + #: .. versionadded:: 3.0 + self.allow_extra_args = allow_extra_args + + if allow_interspersed_args is None: + allow_interspersed_args = command.allow_interspersed_args + #: Indicates if the context allows mixing of arguments and + #: options or not. + #: + #: .. versionadded:: 3.0 + self.allow_interspersed_args = allow_interspersed_args + + if ignore_unknown_options is None: + ignore_unknown_options = command.ignore_unknown_options + #: Instructs click to ignore options that a command does not + #: understand and will store it on the context for later + #: processing. This is primarily useful for situations where you + #: want to call into external programs. Generally this pattern is + #: strongly discouraged because it's not possibly to losslessly + #: forward all arguments. + #: + #: .. versionadded:: 4.0 + self.ignore_unknown_options = ignore_unknown_options + + if help_option_names is None: + if parent is not None: + help_option_names = parent.help_option_names + else: + help_option_names = ["--help"] + + #: The names for the help options. + self.help_option_names = help_option_names + + if token_normalize_func is None and parent is not None: + token_normalize_func = parent.token_normalize_func + + #: An optional normalization function for tokens. This is + #: options, choices, commands etc. + self.token_normalize_func = token_normalize_func + + #: Indicates if resilient parsing is enabled. In that case Click + #: will do its best to not cause any failures and default values + #: will be ignored. Useful for completion. + self.resilient_parsing = resilient_parsing + + # If there is no envvar prefix yet, but the parent has one and + # the command on this level has a name, we can expand the envvar + # prefix automatically. + if auto_envvar_prefix is None: + if ( + parent is not None + and parent.auto_envvar_prefix is not None + and self.info_name is not None + ): + auto_envvar_prefix = "{}_{}".format( + parent.auto_envvar_prefix, self.info_name.upper() + ) + else: + auto_envvar_prefix = auto_envvar_prefix.upper() + if auto_envvar_prefix is not None: + auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") + self.auto_envvar_prefix = auto_envvar_prefix + + if color is None and parent is not None: + color = parent.color + + #: Controls if styling output is wanted or not. 
+ self.color = color + + self.show_default = show_default + + self._close_callbacks = [] + self._depth = 0 + + def __enter__(self): + self._depth += 1 + push_context(self) + return self + + def __exit__(self, exc_type, exc_value, tb): + self._depth -= 1 + if self._depth == 0: + self.close() + pop_context() + + @contextmanager + def scope(self, cleanup=True): + """This helper method can be used with the context object to promote + it to the current thread local (see :func:`get_current_context`). + The default behavior of this is to invoke the cleanup functions which + can be disabled by setting `cleanup` to `False`. The cleanup + functions are typically used for things such as closing file handles. + + If the cleanup is intended the context object can also be directly + used as a context manager. + + Example usage:: + + with ctx.scope(): + assert get_current_context() is ctx + + This is equivalent:: + + with ctx: + assert get_current_context() is ctx + + .. versionadded:: 5.0 + + :param cleanup: controls if the cleanup functions should be run or + not. The default is to run these functions. In + some situations the context only wants to be + temporarily pushed in which case this can be disabled. + Nested pushes automatically defer the cleanup. + """ + if not cleanup: + self._depth += 1 + try: + with self as rv: + yield rv + finally: + if not cleanup: + self._depth -= 1 + + @property + def meta(self): + """This is a dictionary which is shared with all the contexts + that are nested. It exists so that click utilities can store some + state here if they need to. It is however the responsibility of + that code to manage this dictionary well. + + The keys are supposed to be unique dotted strings. For instance + module paths are a good choice for it. What is stored in there is + irrelevant for the operation of click. However what is important is + that code that places data here adheres to the general semantics of + the system. + + Example usage:: + + LANG_KEY = f'{__name__}.lang' + + def set_language(value): + ctx = get_current_context() + ctx.meta[LANG_KEY] = value + + def get_language(): + return get_current_context().meta.get(LANG_KEY, 'en_US') + + .. versionadded:: 5.0 + """ + return self._meta + + def make_formatter(self): + """Creates the formatter for the help and usage output.""" + return HelpFormatter( + width=self.terminal_width, max_width=self.max_content_width + ) + + def call_on_close(self, f): + """This decorator remembers a function as callback that should be + executed when the context tears down. This is most useful to bind + resource handling to the script execution. For instance, file objects + opened by the :class:`File` type will register their close callbacks + here. + + :param f: the function to execute on teardown. + """ + self._close_callbacks.append(f) + return f + + def close(self): + """Invokes all close callbacks.""" + for cb in self._close_callbacks: + cb() + self._close_callbacks = [] + + @property + def command_path(self): + """The computed command path. This is used for the ``usage`` + information on the help page. It's automatically created by + combining the info names of the chain of contexts to the root. 
+ """ + rv = "" + if self.info_name is not None: + rv = self.info_name + if self.parent is not None: + rv = "{} {}".format(self.parent.command_path, rv) + return rv.lstrip() + + def find_root(self): + """Finds the outermost context.""" + node = self + while node.parent is not None: + node = node.parent + return node + + def find_object(self, object_type): + """Finds the closest object of a given type.""" + node = self + while node is not None: + if isinstance(node.obj, object_type): + return node.obj + node = node.parent + + def ensure_object(self, object_type): + """Like :meth:`find_object` but sets the innermost object to a + new instance of `object_type` if it does not exist. + """ + rv = self.find_object(object_type) + if rv is None: + self.obj = rv = object_type() + return rv + + def lookup_default(self, name): + """Looks up the default for a parameter name. This by default + looks into the :attr:`default_map` if available. + """ + if self.default_map is not None: + rv = self.default_map.get(name) + if callable(rv): + rv = rv() + return rv + + def fail(self, message): + """Aborts the execution of the program with a specific error + message. + + :param message: the error message to fail with. + """ + raise UsageError(message, self) + + def abort(self): + """Aborts the script.""" + raise Abort() + + def exit(self, code=0): + """Exits the application with a given exit code.""" + raise Exit(code) + + def get_usage(self): + """Helper method to get formatted usage string for the current + context and command. + """ + return self.command.get_usage(self) + + def get_help(self): + """Helper method to get formatted help page for the current + context and command. + """ + return self.command.get_help(self) + + def invoke(*args, **kwargs): # noqa: B902 + """Invokes a command callback in exactly the way it expects. There + are two ways to invoke this method: + + 1. the first argument can be a callback and all other arguments and + keyword arguments are forwarded directly to the function. + 2. the first argument is a click command object. In that case all + arguments are forwarded as well but proper click parameters + (options and click arguments) must be keyword arguments and Click + will fill in defaults. + + Note that before Click 3.2 keyword arguments were not properly filled + in against the intention of this code and no context was created. For + more information about this change and why it was done in a bugfix + release see :ref:`upgrade-to-3.2`. + """ + self, callback = args[:2] + ctx = self + + # It's also possible to invoke another command which might or + # might not have a callback. In that case we also fill + # in defaults and make a new context for this command. + if isinstance(callback, Command): + other_cmd = callback + callback = other_cmd.callback + ctx = Context(other_cmd, info_name=other_cmd.name, parent=self) + if callback is None: + raise TypeError( + "The given command does not have a callback that can be invoked." + ) + + for param in other_cmd.params: + if param.name not in kwargs and param.expose_value: + kwargs[param.name] = param.get_default(ctx) + + args = args[2:] + with augment_usage_errors(self): + with ctx: + return callback(*args, **kwargs) + + def forward(*args, **kwargs): # noqa: B902 + """Similar to :meth:`invoke` but fills in default keyword + arguments from the current context if the other command expects + it. This cannot invoke callbacks directly, only other commands. 
+ """ + self, cmd = args[:2] + + # It's also possible to invoke another command which might or + # might not have a callback. + if not isinstance(cmd, Command): + raise TypeError("Callback is not a command.") + + for param in self.params: + if param not in kwargs: + kwargs[param] = self.params[param] + + return self.invoke(cmd, **kwargs) + + +class BaseCommand(object): + """The base command implements the minimal API contract of commands. + Most code will never use this as it does not implement a lot of useful + functionality but it can act as the direct subclass of alternative + parsing methods that do not depend on the Click parser. + + For instance, this can be used to bridge Click and other systems like + argparse or docopt. + + Because base commands do not implement a lot of the API that other + parts of Click take for granted, they are not supported for all + operations. For instance, they cannot be used with the decorators + usually and they have no built-in callback system. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + """ + + #: the default for the :attr:`Context.allow_extra_args` flag. + allow_extra_args = False + #: the default for the :attr:`Context.allow_interspersed_args` flag. + allow_interspersed_args = True + #: the default for the :attr:`Context.ignore_unknown_options` flag. + ignore_unknown_options = False + + def __init__(self, name, context_settings=None): + #: the name the command thinks it has. Upon registering a command + #: on a :class:`Group` the group will default the command name + #: with this information. You should instead use the + #: :class:`Context`\'s :attr:`~Context.info_name` attribute. + self.name = name + if context_settings is None: + context_settings = {} + #: an optional dictionary with defaults passed to the context. + self.context_settings = context_settings + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + def get_usage(self, ctx): + raise NotImplementedError("Base commands cannot get usage") + + def get_help(self, ctx): + raise NotImplementedError("Base commands cannot get help") + + def make_context(self, info_name, args, parent=None, **extra): + """This function when given an info name and arguments will kick + off the parsing and create a new :class:`Context`. It does not + invoke the actual command callback though. + + :param info_name: the info name for this invokation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it's usually + the name of the script, for commands below it it's + the name of the script. + :param args: the arguments to parse as list of strings. + :param parent: the parent context if available. + :param extra: extra keyword arguments forwarded to the context + constructor. + """ + for key, value in iteritems(self.context_settings): + if key not in extra: + extra[key] = value + ctx = Context(self, info_name=info_name, parent=parent, **extra) + with ctx.scope(cleanup=False): + self.parse_args(ctx, args) + return ctx + + def parse_args(self, ctx, args): + """Given a context and a list of arguments this creates the parser + and parses the arguments, then modifies the context as necessary. + This is automatically invoked by :meth:`make_context`. 
+ """ + raise NotImplementedError("Base commands do not know how to parse arguments.") + + def invoke(self, ctx): + """Given a context, this invokes the command. The default + implementation is raising a not implemented error. + """ + raise NotImplementedError("Base commands are not invokable by default") + + def main( + self, + args=None, + prog_name=None, + complete_var=None, + standalone_mode=True, + **extra + ): + """This is the way to invoke a script with all the bells and + whistles as a command line application. This will always terminate + the application after a call. If this is not wanted, ``SystemExit`` + needs to be caught. + + This method is also available by directly calling the instance of + a :class:`Command`. + + .. versionadded:: 3.0 + Added the `standalone_mode` flag to control the standalone mode. + + :param args: the arguments that should be used for parsing. If not + provided, ``sys.argv[1:]`` is used. + :param prog_name: the program name that should be used. By default + the program name is constructed by taking the file + name from ``sys.argv[0]``. + :param complete_var: the environment variable that controls the + bash completion support. The default is + ``"__COMPLETE"`` with prog_name in + uppercase. + :param standalone_mode: the default behavior is to invoke the script + in standalone mode. Click will then + handle exceptions and convert them into + error messages and the function will never + return but shut down the interpreter. If + this is set to `False` they will be + propagated to the caller and the return + value of this function is the return value + of :meth:`invoke`. + :param extra: extra keyword arguments are forwarded to the context + constructor. See :class:`Context` for more information. + """ + # If we are in Python 3, we will verify that the environment is + # sane at this point or reject further execution to avoid a + # broken script. + if not PY2: + _verify_python3_env() + else: + _check_for_unicode_literals() + + if args is None: + args = get_os_args() + else: + args = list(args) + + if prog_name is None: + prog_name = make_str( + os.path.basename(sys.argv[0] if sys.argv else __file__) + ) + + # Hook for the Bash completion. This only activates if the Bash + # completion is actually enabled, otherwise this is quite a fast + # noop. + _bashcomplete(self, prog_name, complete_var) + + try: + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! 
+ # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt): + echo(file=sys.stderr) + raise Abort() + except ClickException as e: + if not standalone_mode: + raise + e.show() + sys.exit(e.exit_code) + except IOError as e: + if e.errno == errno.EPIPE: + sys.stdout = PacifyFlushWrapper(sys.stdout) + sys.stderr = PacifyFlushWrapper(sys.stderr) + sys.exit(1) + else: + raise + except Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + # in non-standalone mode, return the exit code + # note that this is only reached if `self.invoke` above raises + # an Exit explicitly -- thus bypassing the check there which + # would return its result + # the results of non-standalone execution may therefore be + # somewhat ambiguous: if there are codepaths which lead to + # `ctx.exit(1)` and to `return 1`, the caller won't be able to + # tell the difference between the two + return e.exit_code + except Abort: + if not standalone_mode: + raise + echo("Aborted!", file=sys.stderr) + sys.exit(1) + + def __call__(self, *args, **kwargs): + """Alias for :meth:`main`.""" + return self.main(*args, **kwargs) + + +class Command(BaseCommand): + """Commands are the basic building block of command line interfaces in + Click. A basic command handles command line parsing and might dispatch + more parsing to commands nested below it. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + .. versionchanged:: 7.1 + Added the `no_args_is_help` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + :param callback: the callback to invoke. This is optional. + :param params: the parameters to register with this command. This can + be either :class:`Option` or :class:`Argument` objects. + :param help: the help string to use for this command. + :param epilog: like the help string but it's printed at the end of the + help page after everything else. + :param short_help: the short help to use for this command. This is + shown on the command listing of the parent command. + :param add_help_option: by default each command registers a ``--help`` + option. This can be disabled by this parameter. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is disabled by default. + If enabled this will add ``--help`` as argument + if no arguments are passed + :param hidden: hide this command from help outputs. + + :param deprecated: issues a message indicating that + the command is deprecated. + """ + + def __init__( + self, + name, + context_settings=None, + callback=None, + params=None, + help=None, + epilog=None, + short_help=None, + options_metavar="[OPTIONS]", + add_help_option=True, + no_args_is_help=False, + hidden=False, + deprecated=False, + ): + BaseCommand.__init__(self, name, context_settings) + #: the callback to execute when the command fires. This might be + #: `None` in which case nothing happens. + self.callback = callback + #: the list of parameters for this command in the order they + #: should show up in the help page and execute. Eager parameters + #: will automatically be handled before non eager ones. 
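The `standalone_mode` notes above matter mostly when embedding Click. A minimal sketch, assuming a hypothetical `ArgparseCommand` bridge of the kind the `BaseCommand` docstring alludes to (argparse stands in for any foreign parser; none of these names are part of Click):

```python
import argparse

import click


class ArgparseCommand(click.BaseCommand):
    """Hypothetical bridge that drives a Click context with argparse."""

    def __init__(self, name, parser, callback):
        click.BaseCommand.__init__(self, name)
        self.parser = parser
        self.callback = callback

    def parse_args(self, ctx, args):
        # Stash the parsed namespace where Click keeps parameter values.
        ctx.params = vars(self.parser.parse_args(args))

    def invoke(self, ctx):
        return self.callback(**ctx.params)


parser = argparse.ArgumentParser(prog="greet")
parser.add_argument("--count", type=int, default=1)
greet = ArgparseCommand("greet", parser, lambda count: "hi " * count)

# standalone_mode=False propagates the return value instead of exiting.
print(greet.main(["--count", "3"], standalone_mode=False))
```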
+ self.params = params or [] + # if a form feed (page break) is found in the help text, truncate help + # text to the content preceding the first form feed + if help and "\f" in help: + help = help.split("\f", 1)[0] + self.help = help + self.epilog = epilog + self.options_metavar = options_metavar + self.short_help = short_help + self.add_help_option = add_help_option + self.no_args_is_help = no_args_is_help + self.hidden = hidden + self.deprecated = deprecated + + def get_usage(self, ctx): + """Formats the usage line into a string and returns it. + + Calls :meth:`format_usage` internally. + """ + formatter = ctx.make_formatter() + self.format_usage(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_params(self, ctx): + rv = self.params + help_option = self.get_help_option(ctx) + if help_option is not None: + rv = rv + [help_option] + return rv + + def format_usage(self, ctx, formatter): + """Writes the usage line into the formatter. + + This is a low-level method called by :meth:`get_usage`. + """ + pieces = self.collect_usage_pieces(ctx) + formatter.write_usage(ctx.command_path, " ".join(pieces)) + + def collect_usage_pieces(self, ctx): + """Returns all the pieces that go into the usage line and returns + it as a list of strings. + """ + rv = [self.options_metavar] + for param in self.get_params(ctx): + rv.extend(param.get_usage_pieces(ctx)) + return rv + + def get_help_option_names(self, ctx): + """Returns the names for the help option.""" + all_names = set(ctx.help_option_names) + for param in self.params: + all_names.difference_update(param.opts) + all_names.difference_update(param.secondary_opts) + return all_names + + def get_help_option(self, ctx): + """Returns the help option object.""" + help_options = self.get_help_option_names(ctx) + if not help_options or not self.add_help_option: + return + + def show_help(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + return Option( + help_options, + is_flag=True, + is_eager=True, + expose_value=False, + callback=show_help, + help="Show this message and exit.", + ) + + def make_parser(self, ctx): + """Creates the underlying option parser for this command.""" + parser = OptionParser(ctx) + for param in self.get_params(ctx): + param.add_to_parser(parser, ctx) + return parser + + def get_help(self, ctx): + """Formats the help into a string and returns it. + + Calls :meth:`format_help` internally. + """ + formatter = ctx.make_formatter() + self.format_help(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_short_help_str(self, limit=45): + """Gets short help for the command or makes it by shortening the + long help string. + """ + return ( + self.short_help + or self.help + and make_default_short_help(self.help, limit) + or "" + ) + + def format_help(self, ctx, formatter): + """Writes the help into the formatter if it exists. + + This is a low-level method called by :meth:`get_help`. 
+ + This calls the following methods: + + - :meth:`format_usage` + - :meth:`format_help_text` + - :meth:`format_options` + - :meth:`format_epilog` + """ + self.format_usage(ctx, formatter) + self.format_help_text(ctx, formatter) + self.format_options(ctx, formatter) + self.format_epilog(ctx, formatter) + + def format_help_text(self, ctx, formatter): + """Writes the help text to the formatter if it exists.""" + if self.help: + formatter.write_paragraph() + with formatter.indentation(): + help_text = self.help + if self.deprecated: + help_text += DEPRECATED_HELP_NOTICE + formatter.write_text(help_text) + elif self.deprecated: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(DEPRECATED_HELP_NOTICE) + + def format_options(self, ctx, formatter): + """Writes all the options into the formatter if they exist.""" + opts = [] + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + opts.append(rv) + + if opts: + with formatter.section("Options"): + formatter.write_dl(opts) + + def format_epilog(self, ctx, formatter): + """Writes the epilog into the formatter if it exists.""" + if self.epilog: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(self.epilog) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + parser = self.make_parser(ctx) + opts, args, param_order = parser.parse_args(args=args) + + for param in iter_params_for_processing(param_order, self.get_params(ctx)): + value, args = param.handle_parse_result(ctx, opts, args) + + if args and not ctx.allow_extra_args and not ctx.resilient_parsing: + ctx.fail( + "Got unexpected extra argument{} ({})".format( + "s" if len(args) != 1 else "", " ".join(map(make_str, args)) + ) + ) + + ctx.args = args + return args + + def invoke(self, ctx): + """Given a context, this invokes the attached callback (if it exists) + in the right way. + """ + _maybe_show_deprecated_notice(self) + if self.callback is not None: + return ctx.invoke(self.callback, **ctx.params) + + +class MultiCommand(Command): + """A multi command is the basic implementation of a command that + dispatches to subcommands. The most common version is the + :class:`Group`. + + :param invoke_without_command: this controls how the multi command itself + is invoked. By default it's only invoked + if a subcommand is provided. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is enabled by default if + `invoke_without_command` is disabled or disabled + if it's enabled. If enabled this will add + ``--help`` as argument if no arguments are + passed. + :param subcommand_metavar: the string that is used in the documentation + to indicate the subcommand place. + :param chain: if this is set to `True` chaining of multiple subcommands + is enabled. This restricts the form of commands in that + they cannot have optional arguments but it allows + multiple commands to be chained together. + :param result_callback: the result callback to attach to this multi + command. 
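As the `\f` truncation and `format_epilog` above suggest, part of a docstring can be kept out of `--help` while the full text stays importable. A small usage sketch (command and option names are illustrative):

```python
import click


@click.command(epilog="Check the project docs for more examples.")
@click.option("--name", default="world", show_default=True, help="Who to greet.")
def hello(name):
    """Greet someone by name.

    \f

    Everything below the form feed is dropped from ``--help`` by the
    truncation above, but remains on ``hello.__doc__`` for API
    documentation tools.
    """
    click.echo("Hello, {}!".format(name))


if __name__ == "__main__":
    hello()
```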
+ """ + + allow_extra_args = True + allow_interspersed_args = False + + def __init__( + self, + name=None, + invoke_without_command=False, + no_args_is_help=None, + subcommand_metavar=None, + chain=False, + result_callback=None, + **attrs + ): + Command.__init__(self, name, **attrs) + if no_args_is_help is None: + no_args_is_help = not invoke_without_command + self.no_args_is_help = no_args_is_help + self.invoke_without_command = invoke_without_command + if subcommand_metavar is None: + if chain: + subcommand_metavar = SUBCOMMANDS_METAVAR + else: + subcommand_metavar = SUBCOMMAND_METAVAR + self.subcommand_metavar = subcommand_metavar + self.chain = chain + #: The result callback that is stored. This can be set or + #: overridden with the :func:`resultcallback` decorator. + self.result_callback = result_callback + + if self.chain: + for param in self.params: + if isinstance(param, Argument) and not param.required: + raise RuntimeError( + "Multi commands in chain mode cannot have" + " optional arguments." + ) + + def collect_usage_pieces(self, ctx): + rv = Command.collect_usage_pieces(self, ctx) + rv.append(self.subcommand_metavar) + return rv + + def format_options(self, ctx, formatter): + Command.format_options(self, ctx, formatter) + self.format_commands(ctx, formatter) + + def resultcallback(self, replace=False): + """Adds a result callback to the chain command. By default if a + result callback is already registered this will chain them but + this can be disabled with the `replace` parameter. The result + callback is invoked with the return value of the subcommand + (or the list of return values from all subcommands if chaining + is enabled) as well as the parameters as they would be passed + to the main callback. + + Example:: + + @click.group() + @click.option('-i', '--input', default=23) + def cli(input): + return 42 + + @cli.resultcallback() + def process_result(result, input): + return result + input + + .. versionadded:: 3.0 + + :param replace: if set to `True` an already existing result + callback will be removed. + """ + + def decorator(f): + old_callback = self.result_callback + if old_callback is None or replace: + self.result_callback = f + return f + + def function(__value, *args, **kwargs): + return f(old_callback(__value, *args, **kwargs), *args, **kwargs) + + self.result_callback = rv = update_wrapper(function, f) + return rv + + return decorator + + def format_commands(self, ctx, formatter): + """Extra format methods for multi methods that adds all the commands + after the options. + """ + commands = [] + for subcommand in self.list_commands(ctx): + cmd = self.get_command(ctx, subcommand) + # What is this, the tool lied about a command. 
Ignore it + if cmd is None: + continue + if cmd.hidden: + continue + + commands.append((subcommand, cmd)) + + # allow for 3 times the default spacing + if len(commands): + limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) + + rows = [] + for subcommand, cmd in commands: + help = cmd.get_short_help_str(limit) + rows.append((subcommand, help)) + + if rows: + with formatter.section("Commands"): + formatter.write_dl(rows) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + rest = Command.parse_args(self, ctx, args) + if self.chain: + ctx.protected_args = rest + ctx.args = [] + elif rest: + ctx.protected_args, ctx.args = rest[:1], rest[1:] + + return ctx.args + + def invoke(self, ctx): + def _process_result(value): + if self.result_callback is not None: + value = ctx.invoke(self.result_callback, value, **ctx.params) + return value + + if not ctx.protected_args: + # If we are invoked without command the chain flag controls + # how this happens. If we are not in chain mode, the return + # value here is the return value of the command. + # If however we are in chain mode, the return value is the + # return value of the result processor invoked with an empty + # list (which means that no subcommand actually was executed). + if self.invoke_without_command: + if not self.chain: + return Command.invoke(self, ctx) + with ctx: + Command.invoke(self, ctx) + return _process_result([]) + ctx.fail("Missing command.") + + # Fetch args back out + args = ctx.protected_args + ctx.args + ctx.args = [] + ctx.protected_args = [] + + # If we're not in chain mode, we only allow the invocation of a + # single command but we also inform the current context about the + # name of the command to invoke. + if not self.chain: + # Make sure the context is entered so we do not clean up + # resources until the result processor has worked. + with ctx: + cmd_name, cmd, args = self.resolve_command(ctx, args) + ctx.invoked_subcommand = cmd_name + Command.invoke(self, ctx) + sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) + with sub_ctx: + return _process_result(sub_ctx.command.invoke(sub_ctx)) + + # In chain mode we create the contexts step by step, but after the + # base command has been invoked. Because at that point we do not + # know the subcommands yet, the invoked subcommand attribute is + # set to ``*`` to inform the command that subcommands are executed + # but nothing else. + with ctx: + ctx.invoked_subcommand = "*" if args else None + Command.invoke(self, ctx) + + # Otherwise we make every single context and invoke them in a + # chain. In that case the return value to the result processor + # is the list of all invoked subcommand's results. + contexts = [] + while args: + cmd_name, cmd, args = self.resolve_command(ctx, args) + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + ) + contexts.append(sub_ctx) + args, sub_ctx.args = sub_ctx.args, [] + + rv = [] + for sub_ctx in contexts: + with sub_ctx: + rv.append(sub_ctx.command.invoke(sub_ctx)) + return _process_result(rv) + + def resolve_command(self, ctx, args): + cmd_name = make_str(args[0]) + original_cmd_name = cmd_name + + # Get the command + cmd = self.get_command(ctx, cmd_name) + + # If we can't find the command but there is a normalization + # function available, we try with that one. 
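The chain-mode bookkeeping above is easiest to see from the calling side. A sketch of a chained group, with illustrative subcommand names, whose result callback receives one return value per invoked subcommand:

```python
import click


@click.group(chain=True)
def process():
    """Run one or more processing steps in a single invocation."""


@process.command("upper")
def upper():
    return str.upper


@process.command("strip")
def strip():
    return str.strip


@process.resultcallback()
def run_pipeline(processors):
    text = "  hello  "
    for fn in processors:
        text = fn(text)
    click.echo(repr(text))


if __name__ == "__main__":
    # e.g. ``python pipeline.py strip upper`` prints 'HELLO'
    process()
```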
+ if cmd is None and ctx.token_normalize_func is not None: + cmd_name = ctx.token_normalize_func(cmd_name) + cmd = self.get_command(ctx, cmd_name) + + # If we don't find the command we want to show an error message + # to the user that it was not provided. However, there is + # something else we should do: if the first argument looks like + # an option we want to kick off parsing again for arguments to + # resolve things like --help which now should go to the main + # place. + if cmd is None and not ctx.resilient_parsing: + if split_opt(cmd_name)[0]: + self.parse_args(ctx, ctx.args) + ctx.fail("No such command '{}'.".format(original_cmd_name)) + + return cmd_name, cmd, args[1:] + + def get_command(self, ctx, cmd_name): + """Given a context and a command name, this returns a + :class:`Command` object if it exists or returns `None`. + """ + raise NotImplementedError() + + def list_commands(self, ctx): + """Returns a list of subcommand names in the order they should + appear. + """ + return [] + + +class Group(MultiCommand): + """A group allows a command to have subcommands attached. This is the + most common way to implement nesting in Click. + + :param commands: a dictionary of commands. + """ + + def __init__(self, name=None, commands=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: the registered subcommands by their exported names. + self.commands = commands or {} + + def add_command(self, cmd, name=None): + """Registers another :class:`Command` with this group. If the name + is not provided, the name of the command is used. + """ + name = name or cmd.name + if name is None: + raise TypeError("Command has no name.") + _check_multicommand(self, name, cmd, register=True) + self.commands[name] = cmd + + def command(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a command to + the group. This takes the same arguments as :func:`command` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import command + + def decorator(f): + cmd = command(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def group(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a group to + the group. This takes the same arguments as :func:`group` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import group + + def decorator(f): + cmd = group(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def get_command(self, ctx, cmd_name): + return self.commands.get(cmd_name) + + def list_commands(self, ctx): + return sorted(self.commands) + + +class CommandCollection(MultiCommand): + """A command collection is a multi command that merges multiple multi + commands together into one. This is a straightforward implementation + that accepts a list of different multi commands as sources and + provides all the commands for each of them. + """ + + def __init__(self, name=None, sources=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: The list of registered multi commands. 
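Overriding `get_command` and `list_commands` is the usual route to lazily loaded CLIs. A sketch assuming a `commands/` folder next to the script in which each `cmd_<name>.py` file defines a command object named `cli` (folder layout and naming are assumptions, not a Click requirement):

```python
import os

import click

PLUGIN_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "commands")


class PluginCLI(click.MultiCommand):
    """Discovers each subcommand from a ``cmd_<name>.py`` file on demand."""

    def list_commands(self, ctx):
        names = []
        for fname in os.listdir(PLUGIN_DIR):
            if fname.startswith("cmd_") and fname.endswith(".py"):
                names.append(fname[4:-3])
        return sorted(names)

    def get_command(self, ctx, name):
        path = os.path.join(PLUGIN_DIR, "cmd_{}.py".format(name))
        if not os.path.exists(path):
            return None  # triggers the "No such command" error above
        ns = {}
        with open(path) as f:
            eval(compile(f.read(), path, "exec"), ns, ns)
        return ns.get("cli")


cli = PluginCLI(help="Commands are discovered in the commands/ folder.")

if __name__ == "__main__":
    cli()
```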
+ self.sources = sources or [] + + def add_source(self, multi_cmd): + """Adds a new multi command to the chain dispatcher.""" + self.sources.append(multi_cmd) + + def get_command(self, ctx, cmd_name): + for source in self.sources: + rv = source.get_command(ctx, cmd_name) + if rv is not None: + if self.chain: + _check_multicommand(self, cmd_name, rv) + return rv + + def list_commands(self, ctx): + rv = set() + for source in self.sources: + rv.update(source.list_commands(ctx)) + return sorted(rv) + + +class Parameter(object): + r"""A parameter to a command comes in two versions: they are either + :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently + not supported by design as some of the internals for parsing are + intentionally not finalized. + + Some settings are supported by both options and arguments. + + :param param_decls: the parameter declarations for this option or + argument. This is a list of flags or argument + names. + :param type: the type that should be used. Either a :class:`ParamType` + or a Python type. The later is converted into the former + automatically if supported. + :param required: controls if this is optional or not. + :param default: the default value if omitted. This can also be a callable, + in which case it's invoked when the default is needed + without any arguments. + :param callback: a callback that should be executed after the parameter + was matched. This is called as ``fn(ctx, param, + value)`` and needs to return the value. + :param nargs: the number of arguments to match. If not ``1`` the return + value is a tuple instead of single value. The default for + nargs is ``1`` (except if the type is a tuple, then it's + the arity of the tuple). + :param metavar: how the value is represented in the help page. + :param expose_value: if this is `True` then the value is passed onwards + to the command callback and stored on the context, + otherwise it's skipped. + :param is_eager: eager values are processed before non eager ones. This + should not be set for arguments or it will inverse the + order of processing. + :param envvar: a string or list of strings that are environment variables + that should be checked. + + .. versionchanged:: 7.1 + Empty environment variables are ignored rather than taking the + empty string value. This makes it possible for scripts to clear + variables if they can't unset them. + + .. versionchanged:: 2.0 + Changed signature for parameter callback to also be passed the + parameter. The old callback format will still work, but it will + raise a warning to give you a chance to migrate the code easier. + """ + param_type_name = "parameter" + + def __init__( + self, + param_decls=None, + type=None, + required=False, + default=None, + callback=None, + nargs=None, + metavar=None, + expose_value=True, + is_eager=False, + envvar=None, + autocompletion=None, + ): + self.name, self.opts, self.secondary_opts = self._parse_decls( + param_decls or (), expose_value + ) + + self.type = convert_type(type, default) + + # Default nargs to what the type tells us if we have that + # information available. 
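A short sketch of `CommandCollection` merging two groups behind one entry point (group and command names are illustrative):

```python
import click


@click.group()
def base():
    pass


@base.command()
def init():
    click.echo("initialised")


@click.group()
def extra():
    pass


@extra.command()
def status():
    click.echo("ok")


# The collection exposes the union of both groups' commands, so
# ``cli init`` and ``cli status`` both work from one entry point.
cli = click.CommandCollection(sources=[base, extra])

if __name__ == "__main__":
    cli()
```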
+ if nargs is None: + if self.type.is_composite: + nargs = self.type.arity + else: + nargs = 1 + + self.required = required + self.callback = callback + self.nargs = nargs + self.multiple = False + self.expose_value = expose_value + self.default = default + self.is_eager = is_eager + self.metavar = metavar + self.envvar = envvar + self.autocompletion = autocompletion + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + @property + def human_readable_name(self): + """Returns the human readable name of this parameter. This is the + same as the name for options, but the metavar for arguments. + """ + return self.name + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + metavar = self.type.get_metavar(self) + if metavar is None: + metavar = self.type.name.upper() + if self.nargs != 1: + metavar += "..." + return metavar + + def get_default(self, ctx): + """Given a context variable this calculates the default value.""" + # Otherwise go with the regular default. + if callable(self.default): + rv = self.default() + else: + rv = self.default + return self.type_cast_value(ctx, rv) + + def add_to_parser(self, parser, ctx): + pass + + def consume_value(self, ctx, opts): + value = opts.get(self.name) + if value is None: + value = self.value_from_envvar(ctx) + if value is None: + value = ctx.lookup_default(self.name) + return value + + def type_cast_value(self, ctx, value): + """Given a value this runs it properly through the type system. + This automatically handles things like `nargs` and `multiple` as + well as composite types. + """ + if self.type.is_composite: + if self.nargs <= 1: + raise TypeError( + "Attempted to invoke composite type but nargs has" + " been set to {}. This is not supported; nargs" + " needs to be set to a fixed value > 1.".format(self.nargs) + ) + if self.multiple: + return tuple(self.type(x or (), self, ctx) for x in value or ()) + return self.type(value or (), self, ctx) + + def _convert(value, level): + if level == 0: + return self.type(value, self, ctx) + return tuple(_convert(x, level - 1) for x in value or ()) + + return _convert(value, (self.nargs != 1) + bool(self.multiple)) + + def process_value(self, ctx, value): + """Given a value and context this runs the logic to convert the + value as necessary. + """ + # If the value we were given is None we do nothing. This way + # code that calls this can easily figure out if something was + # not provided. Otherwise it would be converted into an empty + # tuple for multiple invocations which is inconvenient. 
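`get_default` above accepts callables, which are only evaluated when the default is actually needed, and `nargs` greater than one yields tuples. A small sketch with illustrative names:

```python
import os

import click


@click.command()
@click.option("--user", default=lambda: os.environ.get("USER", "nobody"),
              show_default="current user",
              help="Callable defaults are resolved lazily.")
@click.argument("pos", nargs=2, type=float)
def place(user, pos):
    # nargs=2 turns the argument into an (x, y) tuple of floats.
    click.echo("{} placed a marker at {!r}".format(user, pos))


if __name__ == "__main__":
    place()
```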
+ if value is not None: + return self.type_cast_value(ctx, value) + + def value_is_missing(self, value): + if value is None: + return True + if (self.nargs != 1 or self.multiple) and value == (): + return True + return False + + def full_process_value(self, ctx, value): + value = self.process_value(ctx, value) + + if value is None and not ctx.resilient_parsing: + value = self.get_default(ctx) + + if self.required and self.value_is_missing(value): + raise MissingParameter(ctx=ctx, param=self) + + return value + + def resolve_envvar_value(self, ctx): + if self.envvar is None: + return + if isinstance(self.envvar, (tuple, list)): + for envvar in self.envvar: + rv = os.environ.get(envvar) + if rv is not None: + return rv + else: + rv = os.environ.get(self.envvar) + + if rv != "": + return rv + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is not None and self.nargs != 1: + rv = self.type.split_envvar_value(rv) + return rv + + def handle_parse_result(self, ctx, opts, args): + with augment_usage_errors(ctx, param=self): + value = self.consume_value(ctx, opts) + try: + value = self.full_process_value(ctx, value) + except Exception: + if not ctx.resilient_parsing: + raise + value = None + if self.callback is not None: + try: + value = invoke_param_callback(self.callback, ctx, self, value) + except Exception: + if not ctx.resilient_parsing: + raise + + if self.expose_value: + ctx.params[self.name] = value + return value, args + + def get_help_record(self, ctx): + pass + + def get_usage_pieces(self, ctx): + return [] + + def get_error_hint(self, ctx): + """Get a stringified version of the param for use in error messages to + indicate which param caused the error. + """ + hint_list = self.opts or [self.human_readable_name] + return " / ".join(repr(x) for x in hint_list) + + +class Option(Parameter): + """Options are usually optional values on the command line and + have some extra features that arguments don't have. + + All other parameters are passed onwards to the parameter constructor. + + :param show_default: controls if the default value should be shown on the + help page. Normally, defaults are not shown. If this + value is a string, it shows the string instead of the + value. This is particularly useful for dynamic options. + :param show_envvar: controls if an environment variable should be shown on + the help page. Normally, environment variables + are not shown. + :param prompt: if set to `True` or a non empty string then the user will be + prompted for input. If set to `True` the prompt will be the + option name capitalized. + :param confirmation_prompt: if set then the value will need to be confirmed + if it was prompted for. + :param hide_input: if this is `True` then the input on the prompt will be + hidden from the user. This is useful for password + input. + :param is_flag: forces this option to act as a flag. The default is + auto detection. + :param flag_value: which value should be used for this flag if it's + enabled. This is set to a boolean automatically if + the option string contains a slash to mark two options. + :param multiple: if this is set to `True` then the argument is accepted + multiple times and recorded. This is similar to ``nargs`` + in how it works but supports arbitrary number of + arguments. + :param count: this flag makes an option increment an integer. + :param allow_from_autoenv: if this is enabled then the value of this + parameter will be pulled from an environment + variable in case a prefix is defined on the + context. 
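The environment-variable resolution above can be exercised like this; `MYAPP_TOKEN` and `MYAPP_PLUGINS` are made-up variable names:

```python
import click


@click.command()
@click.option("--token", envvar="MYAPP_TOKEN",
              help="Falls back to $MYAPP_TOKEN when not given.")
@click.option("--plugin-dir", "plugin_dirs", envvar="MYAPP_PLUGINS",
              multiple=True, type=click.Path(),
              help="Path types split env values on os.pathsep.")
def run(token, plugin_dirs):
    click.echo("token={!r} plugins={!r}".format(token, plugin_dirs))


if __name__ == "__main__":
    # e.g. MYAPP_TOKEN=s3cret MYAPP_PLUGINS=/opt/a:/opt/b python app.py
    run()
```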
+ :param help: the help string. + :param hidden: hide this option from help outputs. + """ + + param_type_name = "option" + + def __init__( + self, + param_decls=None, + show_default=False, + prompt=False, + confirmation_prompt=False, + hide_input=False, + is_flag=None, + flag_value=None, + multiple=False, + count=False, + allow_from_autoenv=True, + type=None, + help=None, + hidden=False, + show_choices=True, + show_envvar=False, + **attrs + ): + default_is_missing = attrs.get("default", _missing) is _missing + Parameter.__init__(self, param_decls, type=type, **attrs) + + if prompt is True: + prompt_text = self.name.replace("_", " ").capitalize() + elif prompt is False: + prompt_text = None + else: + prompt_text = prompt + self.prompt = prompt_text + self.confirmation_prompt = confirmation_prompt + self.hide_input = hide_input + self.hidden = hidden + + # Flags + if is_flag is None: + if flag_value is not None: + is_flag = True + else: + is_flag = bool(self.secondary_opts) + if is_flag and default_is_missing: + self.default = False + if flag_value is None: + flag_value = not self.default + self.is_flag = is_flag + self.flag_value = flag_value + if self.is_flag and isinstance(self.flag_value, bool) and type in [None, bool]: + self.type = BOOL + self.is_bool_flag = True + else: + self.is_bool_flag = False + + # Counting + self.count = count + if count: + if type is None: + self.type = IntRange(min=0) + if default_is_missing: + self.default = 0 + + self.multiple = multiple + self.allow_from_autoenv = allow_from_autoenv + self.help = help + self.show_default = show_default + self.show_choices = show_choices + self.show_envvar = show_envvar + + # Sanity check for stuff we don't support + if __debug__: + if self.nargs < 0: + raise TypeError("Options cannot have nargs < 0") + if self.prompt and self.is_flag and not self.is_bool_flag: + raise TypeError("Cannot prompt for flags that are not bools.") + if not self.is_bool_flag and self.secondary_opts: + raise TypeError("Got secondary option for non boolean flag.") + if self.is_bool_flag and self.hide_input and self.prompt is not None: + raise TypeError("Hidden input does not work with boolean flag prompts.") + if self.count: + if self.multiple: + raise TypeError( + "Options cannot be multiple and count at the same time." + ) + elif self.is_flag: + raise TypeError( + "Options cannot be count and flags at the same time." + ) + + def _parse_decls(self, decls, expose_value): + opts = [] + secondary_opts = [] + name = None + possible_names = [] + + for decl in decls: + if isidentifier(decl): + if name is not None: + raise TypeError("Name defined twice") + name = decl + else: + split_char = ";" if decl[:1] == "/" else "/" + if split_char in decl: + first, second = decl.split(split_char, 1) + first = first.rstrip() + if first: + possible_names.append(split_opt(first)) + opts.append(first) + second = second.lstrip() + if second: + secondary_opts.append(second.lstrip()) + else: + possible_names.append(split_opt(decl)) + opts.append(decl) + + if name is None and possible_names: + possible_names.sort(key=lambda x: -len(x[0])) # group long options first + name = possible_names[0][1].replace("-", "_").lower() + if not isidentifier(name): + name = None + + if name is None: + if not expose_value: + return None, opts, secondary_opts + raise TypeError("Could not determine name for option") + + if not opts and not secondary_opts: + raise TypeError( + "No options defined but a name was passed ({}). 
Did you" + " mean to declare an argument instead of an option?".format(name) + ) + + return name, opts, secondary_opts + + def add_to_parser(self, parser, ctx): + kwargs = { + "dest": self.name, + "nargs": self.nargs, + "obj": self, + } + + if self.multiple: + action = "append" + elif self.count: + action = "count" + else: + action = "store" + + if self.is_flag: + kwargs.pop("nargs", None) + action_const = "{}_const".format(action) + if self.is_bool_flag and self.secondary_opts: + parser.add_option(self.opts, action=action_const, const=True, **kwargs) + parser.add_option( + self.secondary_opts, action=action_const, const=False, **kwargs + ) + else: + parser.add_option( + self.opts, action=action_const, const=self.flag_value, **kwargs + ) + else: + kwargs["action"] = action + parser.add_option(self.opts, **kwargs) + + def get_help_record(self, ctx): + if self.hidden: + return + any_prefix_is_slash = [] + + def _write_opts(opts): + rv, any_slashes = join_options(opts) + if any_slashes: + any_prefix_is_slash[:] = [True] + if not self.is_flag and not self.count: + rv += " {}".format(self.make_metavar()) + return rv + + rv = [_write_opts(self.opts)] + if self.secondary_opts: + rv.append(_write_opts(self.secondary_opts)) + + help = self.help or "" + extra = [] + if self.show_envvar: + envvar = self.envvar + if envvar is None: + if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None: + envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper()) + if envvar is not None: + extra.append( + "env var: {}".format( + ", ".join(str(d) for d in envvar) + if isinstance(envvar, (list, tuple)) + else envvar + ) + ) + if self.default is not None and (self.show_default or ctx.show_default): + if isinstance(self.show_default, string_types): + default_string = "({})".format(self.show_default) + elif isinstance(self.default, (list, tuple)): + default_string = ", ".join(str(d) for d in self.default) + elif inspect.isfunction(self.default): + default_string = "(dynamic)" + else: + default_string = self.default + extra.append("default: {}".format(default_string)) + + if self.required: + extra.append("required") + if extra: + help = "{}[{}]".format( + "{} ".format(help) if help else "", "; ".join(extra) + ) + + return ("; " if any_prefix_is_slash else " / ").join(rv), help + + def get_default(self, ctx): + # If we're a non boolean flag our default is more complex because + # we need to look at all flags in the same group to figure out + # if we're the the default one in which case we return the flag + # value as default. + if self.is_flag and not self.is_bool_flag: + for param in ctx.command.params: + if param.name == self.name and param.default: + return param.flag_value + return None + return Parameter.get_default(self, ctx) + + def prompt_for_value(self, ctx): + """This is an alternative flow that can be activated in the full + value processing if a value does not exist. It will prompt the + user until a valid value exists and then returns the processed + value as result. + """ + # Calculate the default before prompting anything to be stable. + default = self.get_default(ctx) + + # If this is a prompt for a flag we need to handle this + # differently. 
+ if self.is_bool_flag: + return confirm(self.prompt, default) + + return prompt( + self.prompt, + default=default, + type=self.type, + hide_input=self.hide_input, + show_choices=self.show_choices, + confirmation_prompt=self.confirmation_prompt, + value_proc=lambda x: self.process_value(ctx, x), + ) + + def resolve_envvar_value(self, ctx): + rv = Parameter.resolve_envvar_value(self, ctx) + if rv is not None: + return rv + if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None: + envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper()) + return os.environ.get(envvar) + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is None: + return None + value_depth = (self.nargs != 1) + bool(self.multiple) + if value_depth > 0 and rv is not None: + rv = self.type.split_envvar_value(rv) + if self.multiple and self.nargs != 1: + rv = batch(rv, self.nargs) + return rv + + def full_process_value(self, ctx, value): + if value is None and self.prompt is not None and not ctx.resilient_parsing: + return self.prompt_for_value(ctx) + return Parameter.full_process_value(self, ctx, value) + + +class Argument(Parameter): + """Arguments are positional parameters to a command. They generally + provide fewer features than options but can have infinite ``nargs`` + and are required by default. + + All parameters are passed onwards to the parameter constructor. + """ + + param_type_name = "argument" + + def __init__(self, param_decls, required=None, **attrs): + if required is None: + if attrs.get("default") is not None: + required = False + else: + required = attrs.get("nargs", 1) > 0 + Parameter.__init__(self, param_decls, required=required, **attrs) + if self.default is not None and self.nargs < 0: + raise TypeError( + "nargs=-1 in combination with a default value is not supported." + ) + + @property + def human_readable_name(self): + if self.metavar is not None: + return self.metavar + return self.name.upper() + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + var = self.type.get_metavar(self) + if not var: + var = self.name.upper() + if not self.required: + var = "[{}]".format(var) + if self.nargs != 1: + var += "..." 
+ return var + + def _parse_decls(self, decls, expose_value): + if not decls: + if not expose_value: + return None, [], [] + raise TypeError("Could not determine name for argument") + if len(decls) == 1: + name = arg = decls[0] + name = name.replace("-", "_").lower() + else: + raise TypeError( + "Arguments take exactly one parameter declaration, got" + " {}".format(len(decls)) + ) + return name, [arg], [] + + def get_usage_pieces(self, ctx): + return [self.make_metavar()] + + def get_error_hint(self, ctx): + return repr(self.make_metavar()) + + def add_to_parser(self, parser, ctx): + parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/openpype/vendor/python/python_2/click/decorators.py b/openpype/vendor/python/python_2/click/decorators.py new file mode 100644 index 0000000000..c7b5af6cc5 --- /dev/null +++ b/openpype/vendor/python/python_2/click/decorators.py @@ -0,0 +1,333 @@ +import inspect +import sys +from functools import update_wrapper + +from ._compat import iteritems +from ._unicodefun import _check_for_unicode_literals +from .core import Argument +from .core import Command +from .core import Group +from .core import Option +from .globals import get_current_context +from .utils import echo + + +def pass_context(f): + """Marks a callback as wanting to receive the current context + object as first argument. + """ + + def new_func(*args, **kwargs): + return f(get_current_context(), *args, **kwargs) + + return update_wrapper(new_func, f) + + +def pass_obj(f): + """Similar to :func:`pass_context`, but only pass the object on the + context onwards (:attr:`Context.obj`). This is useful if that object + represents the state of a nested system. + """ + + def new_func(*args, **kwargs): + return f(get_current_context().obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + +def make_pass_decorator(object_type, ensure=False): + """Given an object type this creates a decorator that will work + similar to :func:`pass_obj` but instead of passing the object of the + current context, it will find the innermost context of type + :func:`object_type`. + + This generates a decorator that works roughly like this:: + + from functools import update_wrapper + + def decorator(f): + @pass_context + def new_func(ctx, *args, **kwargs): + obj = ctx.find_object(object_type) + return ctx.invoke(f, obj, *args, **kwargs) + return update_wrapper(new_func, f) + return decorator + + :param object_type: the type of the object to pass. + :param ensure: if set to `True`, a new object will be created and + remembered on the context if it's not there yet. 
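For completeness, a sketch of `make_pass_decorator` in use, with a made-up `Repo` state object:

```python
import click


class Repo(object):
    def __init__(self, home="."):
        self.home = home


pass_repo = click.make_pass_decorator(Repo, ensure=True)


@click.group()
@click.option("--home", default=".")
@click.pass_context
def cli(ctx, home):
    ctx.obj = Repo(home)


@cli.command()
@pass_repo
def status(repo):
    # pass_repo found the Repo instance placed on the context by cli().
    click.echo("repo at {}".format(repo.home))


if __name__ == "__main__":
    cli()
```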
+ """ + + def decorator(f): + def new_func(*args, **kwargs): + ctx = get_current_context() + if ensure: + obj = ctx.ensure_object(object_type) + else: + obj = ctx.find_object(object_type) + if obj is None: + raise RuntimeError( + "Managed to invoke callback without a context" + " object of type '{}' existing".format(object_type.__name__) + ) + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + return decorator + + +def _make_command(f, name, attrs, cls): + if isinstance(f, Command): + raise TypeError("Attempted to convert a callback into a command twice.") + try: + params = f.__click_params__ + params.reverse() + del f.__click_params__ + except AttributeError: + params = [] + help = attrs.get("help") + if help is None: + help = inspect.getdoc(f) + if isinstance(help, bytes): + help = help.decode("utf-8") + else: + help = inspect.cleandoc(help) + attrs["help"] = help + _check_for_unicode_literals() + return cls( + name=name or f.__name__.lower().replace("_", "-"), + callback=f, + params=params, + **attrs + ) + + +def command(name=None, cls=None, **attrs): + r"""Creates a new :class:`Command` and uses the decorated function as + callback. This will also automatically attach all decorated + :func:`option`\s and :func:`argument`\s as parameters to the command. + + The name of the command defaults to the name of the function with + underscores replaced by dashes. If you want to change that, you can + pass the intended name as the first argument. + + All keyword arguments are forwarded to the underlying command class. + + Once decorated the function turns into a :class:`Command` instance + that can be invoked as a command line utility or be attached to a + command :class:`Group`. + + :param name: the name of the command. This defaults to the function + name with underscores replaced by dashes. + :param cls: the command class to instantiate. This defaults to + :class:`Command`. + """ + if cls is None: + cls = Command + + def decorator(f): + cmd = _make_command(f, name, attrs, cls) + cmd.__doc__ = f.__doc__ + return cmd + + return decorator + + +def group(name=None, **attrs): + """Creates a new :class:`Group` with a function as callback. This + works otherwise the same as :func:`command` just that the `cls` + parameter is set to :class:`Group`. + """ + attrs.setdefault("cls", Group) + return command(name, **attrs) + + +def _param_memo(f, param): + if isinstance(f, Command): + f.params.append(param) + else: + if not hasattr(f, "__click_params__"): + f.__click_params__ = [] + f.__click_params__.append(param) + + +def argument(*param_decls, **attrs): + """Attaches an argument to the command. All positional arguments are + passed as parameter declarations to :class:`Argument`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Argument` instance manually + and attaching it to the :attr:`Command.params` list. + + :param cls: the argument class to instantiate. This defaults to + :class:`Argument`. + """ + + def decorator(f): + ArgumentClass = attrs.pop("cls", Argument) + _param_memo(f, ArgumentClass(param_decls, **attrs)) + return f + + return decorator + + +def option(*param_decls, **attrs): + """Attaches an option to the command. All positional arguments are + passed as parameter declarations to :class:`Option`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Option` instance manually + and attaching it to the :attr:`Command.params` list. 
+ + :param cls: the option class to instantiate. This defaults to + :class:`Option`. + """ + + def decorator(f): + # Issue 926, copy attrs, so pre-defined options can re-use the same cls= + option_attrs = attrs.copy() + + if "help" in option_attrs: + option_attrs["help"] = inspect.cleandoc(option_attrs["help"]) + OptionClass = option_attrs.pop("cls", Option) + _param_memo(f, OptionClass(param_decls, **option_attrs)) + return f + + return decorator + + +def confirmation_option(*param_decls, **attrs): + """Shortcut for confirmation prompts that can be ignored by passing + ``--yes`` as parameter. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + def callback(ctx, param, value): + if not value: + ctx.abort() + + @click.command() + @click.option('--yes', is_flag=True, callback=callback, + expose_value=False, prompt='Do you want to continue?') + def dropdb(): + pass + """ + + def decorator(f): + def callback(ctx, param, value): + if not value: + ctx.abort() + + attrs.setdefault("is_flag", True) + attrs.setdefault("callback", callback) + attrs.setdefault("expose_value", False) + attrs.setdefault("prompt", "Do you want to continue?") + attrs.setdefault("help", "Confirm the action without prompting.") + return option(*(param_decls or ("--yes",)), **attrs)(f) + + return decorator + + +def password_option(*param_decls, **attrs): + """Shortcut for password prompts. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + @click.command() + @click.option('--password', prompt=True, confirmation_prompt=True, + hide_input=True) + def changeadmin(password): + pass + """ + + def decorator(f): + attrs.setdefault("prompt", True) + attrs.setdefault("confirmation_prompt", True) + attrs.setdefault("hide_input", True) + return option(*(param_decls or ("--password",)), **attrs)(f) + + return decorator + + +def version_option(version=None, *param_decls, **attrs): + """Adds a ``--version`` option which immediately ends the program + printing out the version number. This is implemented as an eager + option that prints the version and exits the program in the callback. + + :param version: the version number to show. If not provided Click + attempts an auto discovery via setuptools. + :param prog_name: the name of the program (defaults to autodetection) + :param message: custom message to show instead of the default + (``'%(prog)s, version %(version)s'``) + :param others: everything else is forwarded to :func:`option`. 
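A short sketch combining `version_option` and `confirmation_option`; the version string, program name, and prompt text are illustrative:

```python
import click


@click.group()
@click.version_option("1.2.3", prog_name="mytool",
                      message="%(prog)s %(version)s")
def cli():
    pass


@cli.command()
@click.confirmation_option(prompt="Drop the production database?")
def dropdb():
    click.echo("dropped")


if __name__ == "__main__":
    cli()
```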
+ """ + if version is None: + if hasattr(sys, "_getframe"): + module = sys._getframe(1).f_globals.get("__name__") + else: + module = "" + + def decorator(f): + prog_name = attrs.pop("prog_name", None) + message = attrs.pop("message", "%(prog)s, version %(version)s") + + def callback(ctx, param, value): + if not value or ctx.resilient_parsing: + return + prog = prog_name + if prog is None: + prog = ctx.find_root().info_name + ver = version + if ver is None: + try: + import pkg_resources + except ImportError: + pass + else: + for dist in pkg_resources.working_set: + scripts = dist.get_entry_map().get("console_scripts") or {} + for _, entry_point in iteritems(scripts): + if entry_point.module_name == module: + ver = dist.version + break + if ver is None: + raise RuntimeError("Could not determine version") + echo(message % {"prog": prog, "version": ver}, color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("is_eager", True) + attrs.setdefault("help", "Show the version and exit.") + attrs["callback"] = callback + return option(*(param_decls or ("--version",)), **attrs)(f) + + return decorator + + +def help_option(*param_decls, **attrs): + """Adds a ``--help`` option which immediately ends the program + printing out the help page. This is usually unnecessary to add as + this is added by default to all commands unless suppressed. + + Like :func:`version_option`, this is implemented as eager option that + prints in the callback and exits. + + All arguments are forwarded to :func:`option`. + """ + + def decorator(f): + def callback(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("help", "Show this message and exit.") + attrs.setdefault("is_eager", True) + attrs["callback"] = callback + return option(*(param_decls or ("--help",)), **attrs)(f) + + return decorator diff --git a/openpype/vendor/python/python_2/click/exceptions.py b/openpype/vendor/python/python_2/click/exceptions.py new file mode 100644 index 0000000000..592ee38f0d --- /dev/null +++ b/openpype/vendor/python/python_2/click/exceptions.py @@ -0,0 +1,253 @@ +from ._compat import filename_to_ui +from ._compat import get_text_stderr +from ._compat import PY2 +from .utils import echo + + +def _join_param_hints(param_hint): + if isinstance(param_hint, (tuple, list)): + return " / ".join(repr(x) for x in param_hint) + return param_hint + + +class ClickException(Exception): + """An exception that Click can handle and show to the user.""" + + #: The exit code for this exception + exit_code = 1 + + def __init__(self, message): + ctor_msg = message + if PY2: + if ctor_msg is not None: + ctor_msg = ctor_msg.encode("utf-8") + Exception.__init__(self, ctor_msg) + self.message = message + + def format_message(self): + return self.message + + def __str__(self): + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.message.encode("utf-8") + + def show(self, file=None): + if file is None: + file = get_text_stderr() + echo("Error: {}".format(self.format_message()), file=file) + + +class UsageError(ClickException): + """An internal exception that signals a usage error. This typically + aborts any further handling. + + :param message: the error message to display. + :param ctx: optionally the context that caused this error. 
Click will + fill in the context automatically in some situations. + """ + + exit_code = 2 + + def __init__(self, message, ctx=None): + ClickException.__init__(self, message) + self.ctx = ctx + self.cmd = self.ctx.command if self.ctx else None + + def show(self, file=None): + if file is None: + file = get_text_stderr() + color = None + hint = "" + if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None: + hint = "Try '{} {}' for help.\n".format( + self.ctx.command_path, self.ctx.help_option_names[0] + ) + if self.ctx is not None: + color = self.ctx.color + echo("{}\n{}".format(self.ctx.get_usage(), hint), file=file, color=color) + echo("Error: {}".format(self.format_message()), file=file, color=color) + + +class BadParameter(UsageError): + """An exception that formats out a standardized error message for a + bad parameter. This is useful when thrown from a callback or type as + Click will attach contextual information to it (for instance, which + parameter it is). + + .. versionadded:: 2.0 + + :param param: the parameter object that caused this error. This can + be left out, and Click will attach this info itself + if possible. + :param param_hint: a string that shows up as parameter name. This + can be used as alternative to `param` in cases + where custom validation should happen. If it is + a string it's used as such, if it's a list then + each item is quoted and separated. + """ + + def __init__(self, message, ctx=None, param=None, param_hint=None): + UsageError.__init__(self, message, ctx) + self.param = param + self.param_hint = param_hint + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + return "Invalid value: {}".format(self.message) + param_hint = _join_param_hints(param_hint) + + return "Invalid value for {}: {}".format(param_hint, self.message) + + +class MissingParameter(BadParameter): + """Raised if click required an option or argument but it was not + provided when invoking the script. + + .. versionadded:: 4.0 + + :param param_type: a string that indicates the type of the parameter. + The default is to inherit the parameter type from + the given `param`. Valid values are ``'parameter'``, + ``'option'`` or ``'argument'``. + """ + + def __init__( + self, message=None, ctx=None, param=None, param_hint=None, param_type=None + ): + BadParameter.__init__(self, message, ctx, param, param_hint) + self.param_type = param_type + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + param_hint = None + param_hint = _join_param_hints(param_hint) + + param_type = self.param_type + if param_type is None and self.param is not None: + param_type = self.param.param_type_name + + msg = self.message + if self.param is not None: + msg_extra = self.param.type.get_missing_message(self.param) + if msg_extra: + if msg: + msg += ". {}".format(msg_extra) + else: + msg = msg_extra + + return "Missing {}{}{}{}".format( + param_type, + " {}".format(param_hint) if param_hint else "", + ". 
" if msg else ".", + msg or "", + ) + + def __str__(self): + if self.message is None: + param_name = self.param.name if self.param else None + return "missing parameter: {}".format(param_name) + else: + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.__unicode__().encode("utf-8") + + +class NoSuchOption(UsageError): + """Raised if click attempted to handle an option that does not + exist. + + .. versionadded:: 4.0 + """ + + def __init__(self, option_name, message=None, possibilities=None, ctx=None): + if message is None: + message = "no such option: {}".format(option_name) + UsageError.__init__(self, message, ctx) + self.option_name = option_name + self.possibilities = possibilities + + def format_message(self): + bits = [self.message] + if self.possibilities: + if len(self.possibilities) == 1: + bits.append("Did you mean {}?".format(self.possibilities[0])) + else: + possibilities = sorted(self.possibilities) + bits.append("(Possible options: {})".format(", ".join(possibilities))) + return " ".join(bits) + + +class BadOptionUsage(UsageError): + """Raised if an option is generally supplied but the use of the option + was incorrect. This is for instance raised if the number of arguments + for an option is not correct. + + .. versionadded:: 4.0 + + :param option_name: the name of the option being used incorrectly. + """ + + def __init__(self, option_name, message, ctx=None): + UsageError.__init__(self, message, ctx) + self.option_name = option_name + + +class BadArgumentUsage(UsageError): + """Raised if an argument is generally supplied but the use of the argument + was incorrect. This is for instance raised if the number of values + for an argument is not correct. + + .. versionadded:: 6.0 + """ + + def __init__(self, message, ctx=None): + UsageError.__init__(self, message, ctx) + + +class FileError(ClickException): + """Raised if a file cannot be opened.""" + + def __init__(self, filename, hint=None): + ui_filename = filename_to_ui(filename) + if hint is None: + hint = "unknown error" + ClickException.__init__(self, hint) + self.ui_filename = ui_filename + self.filename = filename + + def format_message(self): + return "Could not open file {}: {}".format(self.ui_filename, self.message) + + +class Abort(RuntimeError): + """An internal signalling exception that signals Click to abort.""" + + +class Exit(RuntimeError): + """An exception that indicates that the application should exit with some + status code. + + :param code: the status code to exit with. + """ + + __slots__ = ("exit_code",) + + def __init__(self, code=0): + self.exit_code = code diff --git a/openpype/vendor/python/python_2/click/formatting.py b/openpype/vendor/python/python_2/click/formatting.py new file mode 100644 index 0000000000..319c7f6163 --- /dev/null +++ b/openpype/vendor/python/python_2/click/formatting.py @@ -0,0 +1,283 @@ +from contextlib import contextmanager + +from ._compat import term_len +from .parser import split_opt +from .termui import get_terminal_size + +# Can force a width. 
This is used by the test system +FORCED_WIDTH = None + + +def measure_table(rows): + widths = {} + for row in rows: + for idx, col in enumerate(row): + widths[idx] = max(widths.get(idx, 0), term_len(col)) + return tuple(y for x, y in sorted(widths.items())) + + +def iter_rows(rows, col_count): + for row in rows: + row = tuple(row) + yield row + ("",) * (col_count - len(row)) + + +def wrap_text( + text, width=78, initial_indent="", subsequent_indent="", preserve_paragraphs=False +): + """A helper function that intelligently wraps text. By default, it + assumes that it operates on a single paragraph of text but if the + `preserve_paragraphs` parameter is provided it will intelligently + handle paragraphs (defined by two empty lines). + + If paragraphs are handled, a paragraph can be prefixed with an empty + line containing the ``\\b`` character (``\\x08``) to indicate that + no rewrapping should happen in that block. + + :param text: the text that should be rewrapped. + :param width: the maximum width for the text. + :param initial_indent: the initial indent that should be placed on the + first line as a string. + :param subsequent_indent: the indent string that should be placed on + each consecutive line. + :param preserve_paragraphs: if this flag is set then the wrapping will + intelligently handle paragraphs. + """ + from ._textwrap import TextWrapper + + text = text.expandtabs() + wrapper = TextWrapper( + width, + initial_indent=initial_indent, + subsequent_indent=subsequent_indent, + replace_whitespace=False, + ) + if not preserve_paragraphs: + return wrapper.fill(text) + + p = [] + buf = [] + indent = None + + def _flush_par(): + if not buf: + return + if buf[0].strip() == "\b": + p.append((indent or 0, True, "\n".join(buf[1:]))) + else: + p.append((indent or 0, False, " ".join(buf))) + del buf[:] + + for line in text.splitlines(): + if not line: + _flush_par() + indent = None + else: + if indent is None: + orig_len = term_len(line) + line = line.lstrip() + indent = orig_len - term_len(line) + buf.append(line) + _flush_par() + + rv = [] + for indent, raw, text in p: + with wrapper.extra_indent(" " * indent): + if raw: + rv.append(wrapper.indent_only(text)) + else: + rv.append(wrapper.fill(text)) + + return "\n\n".join(rv) + + +class HelpFormatter(object): + """This class helps with formatting text-based help pages. It's + usually just needed for very special internal cases, but it's also + exposed so that developers can write their own fancy outputs. + + At present, it always writes into memory. + + :param indent_increment: the additional increment for each level. + :param width: the width for the text. This defaults to the terminal + width clamped to a maximum of 78. + """ + + def __init__(self, indent_increment=2, width=None, max_width=None): + self.indent_increment = indent_increment + if max_width is None: + max_width = 80 + if width is None: + width = FORCED_WIDTH + if width is None: + width = max(min(get_terminal_size()[0], max_width) - 2, 50) + self.width = width + self.current_indent = 0 + self.buffer = [] + + def write(self, string): + """Writes a unicode string into the internal buffer.""" + self.buffer.append(string) + + def indent(self): + """Increases the indentation.""" + self.current_indent += self.indent_increment + + def dedent(self): + """Decreases the indentation.""" + self.current_indent -= self.indent_increment + + def write_usage(self, prog, args="", prefix="Usage: "): + """Writes a usage line into the buffer. + + :param prog: the program name. 
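The `\b` escape handled by `wrap_text` above keeps hand-formatted blocks intact in help output. A usage sketch:

```python
import click


@click.command()
def copy():
    """Copy files between folders.

    \b
    examples:
      copy a.txt backup/
      copy *.txt backup/

    The block above is preceded by a ``\\b`` line, so the wrapping
    logic keeps its manual line breaks instead of reflowing them.
    """


if __name__ == "__main__":
    copy()
```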
+ :param args: whitespace separated list of arguments. + :param prefix: the prefix for the first line. + """ + usage_prefix = "{:>{w}}{} ".format(prefix, prog, w=self.current_indent) + text_width = self.width - self.current_indent + + if text_width >= (term_len(usage_prefix) + 20): + # The arguments will fit to the right of the prefix. + indent = " " * term_len(usage_prefix) + self.write( + wrap_text( + args, + text_width, + initial_indent=usage_prefix, + subsequent_indent=indent, + ) + ) + else: + # The prefix is too long, put the arguments on the next line. + self.write(usage_prefix) + self.write("\n") + indent = " " * (max(self.current_indent, term_len(prefix)) + 4) + self.write( + wrap_text( + args, text_width, initial_indent=indent, subsequent_indent=indent + ) + ) + + self.write("\n") + + def write_heading(self, heading): + """Writes a heading into the buffer.""" + self.write("{:>{w}}{}:\n".format("", heading, w=self.current_indent)) + + def write_paragraph(self): + """Writes a paragraph into the buffer.""" + if self.buffer: + self.write("\n") + + def write_text(self, text): + """Writes re-indented text into the buffer. This rewraps and + preserves paragraphs. + """ + text_width = max(self.width - self.current_indent, 11) + indent = " " * self.current_indent + self.write( + wrap_text( + text, + text_width, + initial_indent=indent, + subsequent_indent=indent, + preserve_paragraphs=True, + ) + ) + self.write("\n") + + def write_dl(self, rows, col_max=30, col_spacing=2): + """Writes a definition list into the buffer. This is how options + and commands are usually formatted. + + :param rows: a list of two item tuples for the terms and values. + :param col_max: the maximum width of the first column. + :param col_spacing: the number of spaces between the first and + second column. + """ + rows = list(rows) + widths = measure_table(rows) + if len(widths) != 2: + raise TypeError("Expected two columns for definition list") + + first_col = min(widths[0], col_max) + col_spacing + + for first, second in iter_rows(rows, len(widths)): + self.write("{:>{w}}{}".format("", first, w=self.current_indent)) + if not second: + self.write("\n") + continue + if term_len(first) <= first_col - col_spacing: + self.write(" " * (first_col - term_len(first))) + else: + self.write("\n") + self.write(" " * (first_col + self.current_indent)) + + text_width = max(self.width - first_col - 2, 10) + wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True) + lines = wrapped_text.splitlines() + + if lines: + self.write("{}\n".format(lines[0])) + + for line in lines[1:]: + self.write( + "{:>{w}}{}\n".format( + "", line, w=first_col + self.current_indent + ) + ) + + if len(lines) > 1: + # separate long help from next option + self.write("\n") + else: + self.write("\n") + + @contextmanager + def section(self, name): + """Helpful context manager that writes a paragraph, a heading, + and the indents. + + :param name: the section name that is written as heading. 
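+
+        A short usage sketch (the section name is illustrative)::
+
+            formatter = HelpFormatter()
+            with formatter.section("Options"):
+                formatter.write_dl([("--verbose", "Enable verbose output.")])
+            print(formatter.getvalue())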
+        """
+        self.write_paragraph()
+        self.write_heading(name)
+        self.indent()
+        try:
+            yield
+        finally:
+            self.dedent()
+
+    @contextmanager
+    def indentation(self):
+        """A context manager that increases the indentation."""
+        self.indent()
+        try:
+            yield
+        finally:
+            self.dedent()
+
+    def getvalue(self):
+        """Returns the buffer contents."""
+        return "".join(self.buffer)
+
+
+def join_options(options):
+    """Given a list of option strings this joins them in the most appropriate
+    way and returns them in the form ``(formatted_string,
+    any_prefix_is_slash)`` where the second item in the tuple is a flag that
+    indicates if any of the option prefixes was a slash.
+    """
+    rv = []
+    any_prefix_is_slash = False
+    for opt in options:
+        prefix = split_opt(opt)[0]
+        if prefix == "/":
+            any_prefix_is_slash = True
+        rv.append((len(prefix), opt))
+
+    rv.sort(key=lambda x: x[0])
+
+    rv = ", ".join(x[1] for x in rv)
+    return rv, any_prefix_is_slash
diff --git a/openpype/vendor/python/python_2/click/globals.py b/openpype/vendor/python/python_2/click/globals.py
new file mode 100644
index 0000000000..1649f9a0bf
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/globals.py
@@ -0,0 +1,47 @@
+from threading import local
+
+_local = local()
+
+
+def get_current_context(silent=False):
+    """Returns the current click context. This can be used as a way to
+    access the current context object from anywhere. This is a more implicit
+    alternative to the :func:`pass_context` decorator. This function is
+    primarily useful for helpers such as :func:`echo` which might be
+    interested in changing its behavior based on the current context.
+
+    To push the current context, :meth:`Context.scope` can be used.
+
+    .. versionadded:: 5.0
+
+    :param silent: if set to `True` the return value is `None` if no context
+                   is available. The default behavior is to raise a
+                   :exc:`RuntimeError`.
+    """
+    try:
+        return _local.stack[-1]
+    except (AttributeError, IndexError):
+        if not silent:
+            raise RuntimeError("There is no active click context.")
+
+
+def push_context(ctx):
+    """Pushes a new context to the current stack."""
+    _local.__dict__.setdefault("stack", []).append(ctx)
+
+
+def pop_context():
+    """Removes the top level from the stack."""
+    _local.stack.pop()
+
+
+def resolve_color_default(color=None):
+    """Internal helper to get the default value of the color flag. If a
+    value is passed it's returned unchanged, otherwise it's looked up from
+    the current context.
+    """
+    if color is not None:
+        return color
+    ctx = get_current_context(silent=True)
+    if ctx is not None:
+        return ctx.color
diff --git a/openpype/vendor/python/python_2/click/parser.py b/openpype/vendor/python/python_2/click/parser.py
new file mode 100644
index 0000000000..f43ebfe9fc
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/parser.py
@@ -0,0 +1,428 @@
+# -*- coding: utf-8 -*-
+"""
+This module started out as largely a copy paste from the stdlib's
+optparse module with the features removed that we do not need from
+optparse because we implement them in Click on a higher level (for
+instance type handling, help formatting and a lot more).
+
+The plan is to remove more and more from here over time.
+
+The reason this is a different module and not optparse from the stdlib
+is that there are differences in 2.x and 3.x about the error messages
+generated and optparse in the stdlib uses gettext for no good reason
+and might cause us issues.
+
+Click uses parts of optparse written by Gregory P. Ward and maintained
+by the Python Software Foundation. 
This is limited to code in parser.py. + +Copyright 2001-2006 Gregory P. Ward. All rights reserved. +Copyright 2002-2006 Python Software Foundation. All rights reserved. +""" +import re +from collections import deque + +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import NoSuchOption +from .exceptions import UsageError + + +def _unpack_args(args, nargs_spec): + """Given an iterable of arguments and an iterable of nargs specifications, + it returns a tuple with all the unpacked arguments at the first index + and all remaining arguments as the second. + + The nargs specification is the number of arguments that should be consumed + or `-1` to indicate that this position should eat up all the remainders. + + Missing items are filled with `None`. + """ + args = deque(args) + nargs_spec = deque(nargs_spec) + rv = [] + spos = None + + def _fetch(c): + try: + if spos is None: + return c.popleft() + else: + return c.pop() + except IndexError: + return None + + while nargs_spec: + nargs = _fetch(nargs_spec) + if nargs == 1: + rv.append(_fetch(args)) + elif nargs > 1: + x = [_fetch(args) for _ in range(nargs)] + # If we're reversed, we're pulling in the arguments in reverse, + # so we need to turn them around. + if spos is not None: + x.reverse() + rv.append(tuple(x)) + elif nargs < 0: + if spos is not None: + raise TypeError("Cannot have two nargs < 0") + spos = len(rv) + rv.append(None) + + # spos is the position of the wildcard (star). If it's not `None`, + # we fill it with the remainder. + if spos is not None: + rv[spos] = tuple(args) + args = [] + rv[spos + 1 :] = reversed(rv[spos + 1 :]) + + return tuple(rv), list(args) + + +def _error_opt_args(nargs, opt): + if nargs == 1: + raise BadOptionUsage(opt, "{} option requires an argument".format(opt)) + raise BadOptionUsage(opt, "{} option requires {} arguments".format(opt, nargs)) + + +def split_opt(opt): + first = opt[:1] + if first.isalnum(): + return "", opt + if opt[1:2] == first: + return opt[:2], opt[2:] + return first, opt[1:] + + +def normalize_opt(opt, ctx): + if ctx is None or ctx.token_normalize_func is None: + return opt + prefix, opt = split_opt(opt) + return prefix + ctx.token_normalize_func(opt) + + +def split_arg_string(string): + """Given an argument string this attempts to split it into small parts.""" + rv = [] + for match in re.finditer( + r"('([^'\\]*(?:\\.[^'\\]*)*)'|\"([^\"\\]*(?:\\.[^\"\\]*)*)\"|\S+)\s*", + string, + re.S, + ): + arg = match.group().strip() + if arg[:1] == arg[-1:] and arg[:1] in "\"'": + arg = arg[1:-1].encode("ascii", "backslashreplace").decode("unicode-escape") + try: + arg = type(string)(arg) + except UnicodeError: + pass + rv.append(arg) + return rv + + +class Option(object): + def __init__(self, opts, dest, action=None, nargs=1, const=None, obj=None): + self._short_opts = [] + self._long_opts = [] + self.prefixes = set() + + for opt in opts: + prefix, value = split_opt(opt) + if not prefix: + raise ValueError("Invalid start character for option ({})".format(opt)) + self.prefixes.add(prefix[0]) + if len(prefix) == 1 and len(value) == 1: + self._short_opts.append(opt) + else: + self._long_opts.append(opt) + self.prefixes.add(prefix) + + if action is None: + action = "store" + + self.dest = dest + self.action = action + self.nargs = nargs + self.const = const + self.obj = obj + + @property + def takes_value(self): + return self.action in ("store", "append") + + def process(self, value, state): + if self.action == "store": + state.opts[self.dest] = 
value
+        elif self.action == "store_const":
+            state.opts[self.dest] = self.const
+        elif self.action == "append":
+            state.opts.setdefault(self.dest, []).append(value)
+        elif self.action == "append_const":
+            state.opts.setdefault(self.dest, []).append(self.const)
+        elif self.action == "count":
+            state.opts[self.dest] = state.opts.get(self.dest, 0) + 1
+        else:
+            raise ValueError("unknown action '{}'".format(self.action))
+        state.order.append(self.obj)
+
+
+class Argument(object):
+    def __init__(self, dest, nargs=1, obj=None):
+        self.dest = dest
+        self.nargs = nargs
+        self.obj = obj
+
+    def process(self, value, state):
+        if self.nargs > 1:
+            holes = sum(1 for x in value if x is None)
+            if holes == len(value):
+                value = None
+            elif holes != 0:
+                raise BadArgumentUsage(
+                    "argument {} takes {} values".format(self.dest, self.nargs)
+                )
+        state.opts[self.dest] = value
+        state.order.append(self.obj)
+
+
+class ParsingState(object):
+    def __init__(self, rargs):
+        self.opts = {}
+        self.largs = []
+        self.rargs = rargs
+        self.order = []
+
+
+class OptionParser(object):
+    """The option parser is an internal class that is ultimately used to
+    parse options and arguments. It's modelled after optparse and brings
+    a similar but vastly simplified API. It should generally not be used
+    directly as the high level Click classes wrap it for you.
+
+    It's not nearly as extensible as optparse or argparse as it does not
+    implement features that are implemented on a higher level (such as
+    types or defaults).
+
+    :param ctx: optionally the :class:`~click.Context` that this parser
+                should be tied to.
+    """
+
+    def __init__(self, ctx=None):
+        #: The :class:`~click.Context` for this parser. This might be
+        #: `None` for some advanced use cases.
+        self.ctx = ctx
+        #: This controls how the parser deals with interspersed arguments.
+        #: If this is set to `False`, the parser will stop on the first
+        #: non-option. Click uses this to implement nested subcommands
+        #: safely.
+        self.allow_interspersed_args = True
+        #: This tells the parser how to deal with unknown options. By
+        #: default it will error out (which is sensible), but there is a
+        #: second mode where it will ignore it and continue processing
+        #: after shifting all the unknown options into the resulting args.
+        self.ignore_unknown_options = False
+        if ctx is not None:
+            self.allow_interspersed_args = ctx.allow_interspersed_args
+            self.ignore_unknown_options = ctx.ignore_unknown_options
+        self._short_opt = {}
+        self._long_opt = {}
+        self._opt_prefixes = {"-", "--"}
+        self._args = []
+
+    def add_option(self, opts, dest, action=None, nargs=1, const=None, obj=None):
+        """Adds a new option named `dest` to the parser. The destination
+        is not inferred (unlike with optparse) and needs to be explicitly
+        provided. Action can be any of ``store``, ``store_const``,
+        ``append``, ``append_const`` or ``count``.
+
+        The `obj` can be used to identify the option in the order list
+        that is returned from the parser.
+        """
+        if obj is None:
+            obj = dest
+        opts = [normalize_opt(opt, self.ctx) for opt in opts]
+        option = Option(opts, dest, action=action, nargs=nargs, const=const, obj=obj)
+        self._opt_prefixes.update(option.prefixes)
+        for opt in option._short_opts:
+            self._short_opt[opt] = option
+        for opt in option._long_opts:
+            self._long_opt[opt] = option
+
+    def add_argument(self, dest, nargs=1, obj=None):
+        """Adds a positional argument named `dest` to the parser.
+
+        The `obj` can be used to identify the option in the order list
+        that is returned from the parser. 
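+
+        A rough usage sketch (the parser is normally driven by the
+        higher level Click classes, so this is illustrative only)::
+
+            parser = OptionParser()
+            parser.add_option(["-v", "--verbose"], dest="verbose",
+                              action="count")
+            parser.add_argument("src", nargs=1)
+            opts, largs, order = parser.parse_args(["-vv", "input.txt"])
+            # opts == {"verbose": 2, "src": "input.txt"}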
+ """ + if obj is None: + obj = dest + self._args.append(Argument(dest=dest, nargs=nargs, obj=obj)) + + def parse_args(self, args): + """Parses positional arguments and returns ``(values, args, order)`` + for the parsed options and arguments as well as the leftover + arguments if there are any. The order is a list of objects as they + appear on the command line. If arguments appear multiple times they + will be memorized multiple times as well. + """ + state = ParsingState(args) + try: + self._process_args_for_options(state) + self._process_args_for_args(state) + except UsageError: + if self.ctx is None or not self.ctx.resilient_parsing: + raise + return state.opts, state.largs, state.order + + def _process_args_for_args(self, state): + pargs, args = _unpack_args( + state.largs + state.rargs, [x.nargs for x in self._args] + ) + + for idx, arg in enumerate(self._args): + arg.process(pargs[idx], state) + + state.largs = args + state.rargs = [] + + def _process_args_for_options(self, state): + while state.rargs: + arg = state.rargs.pop(0) + arglen = len(arg) + # Double dashes always handled explicitly regardless of what + # prefixes are valid. + if arg == "--": + return + elif arg[:1] in self._opt_prefixes and arglen > 1: + self._process_opts(arg, state) + elif self.allow_interspersed_args: + state.largs.append(arg) + else: + state.rargs.insert(0, arg) + return + + # Say this is the original argument list: + # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] + # ^ + # (we are about to process arg(i)). + # + # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of + # [arg0, ..., arg(i-1)] (any options and their arguments will have + # been removed from largs). + # + # The while loop will usually consume 1 or more arguments per pass. + # If it consumes 1 (eg. arg is an option that takes no arguments), + # then after _process_arg() is done the situation is: + # + # largs = subset of [arg0, ..., arg(i)] + # rargs = [arg(i+1), ..., arg(N-1)] + # + # If allow_interspersed_args is false, largs will always be + # *empty* -- still a subset of [arg0, ..., arg(i-1)], but + # not a very interesting subset! + + def _match_long_opt(self, opt, explicit_value, state): + if opt not in self._long_opt: + possibilities = [word for word in self._long_opt if word.startswith(opt)] + raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) + + option = self._long_opt[opt] + if option.takes_value: + # At this point it's safe to modify rargs by injecting the + # explicit value, because no exception is raised in this + # branch. This means that the inserted value will be fully + # consumed. + if explicit_value is not None: + state.rargs.insert(0, explicit_value) + + nargs = option.nargs + if len(state.rargs) < nargs: + _error_opt_args(nargs, opt) + elif nargs == 1: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + elif explicit_value is not None: + raise BadOptionUsage(opt, "{} option does not take a value".format(opt)) + + else: + value = None + + option.process(value, state) + + def _match_short_opt(self, arg, state): + stop = False + i = 1 + prefix = arg[0] + unknown_options = [] + + for ch in arg[1:]: + opt = normalize_opt(prefix + ch, self.ctx) + option = self._short_opt.get(opt) + i += 1 + + if not option: + if self.ignore_unknown_options: + unknown_options.append(ch) + continue + raise NoSuchOption(opt, ctx=self.ctx) + if option.takes_value: + # Any characters left in arg? 
Pretend they're the + # next arg, and stop consuming characters of arg. + if i < len(arg): + state.rargs.insert(0, arg[i:]) + stop = True + + nargs = option.nargs + if len(state.rargs) < nargs: + _error_opt_args(nargs, opt) + elif nargs == 1: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + else: + value = None + + option.process(value, state) + + if stop: + break + + # If we got any unknown options we re-combinate the string of the + # remaining options and re-attach the prefix, then report that + # to the state as new larg. This way there is basic combinatorics + # that can be achieved while still ignoring unknown arguments. + if self.ignore_unknown_options and unknown_options: + state.largs.append("{}{}".format(prefix, "".join(unknown_options))) + + def _process_opts(self, arg, state): + explicit_value = None + # Long option handling happens in two parts. The first part is + # supporting explicitly attached values. In any case, we will try + # to long match the option first. + if "=" in arg: + long_opt, explicit_value = arg.split("=", 1) + else: + long_opt = arg + norm_long_opt = normalize_opt(long_opt, self.ctx) + + # At this point we will match the (assumed) long option through + # the long option matching code. Note that this allows options + # like "-foo" to be matched as long options. + try: + self._match_long_opt(norm_long_opt, explicit_value, state) + except NoSuchOption: + # At this point the long option matching failed, and we need + # to try with short options. However there is a special rule + # which says, that if we have a two character options prefix + # (applies to "--foo" for instance), we do not dispatch to the + # short option code and will instead raise the no option + # error. + if arg[:2] not in self._opt_prefixes: + return self._match_short_opt(arg, state) + if not self.ignore_unknown_options: + raise + state.largs.append(arg) diff --git a/openpype/vendor/python/python_2/click/termui.py b/openpype/vendor/python/python_2/click/termui.py new file mode 100644 index 0000000000..02ef9e9f04 --- /dev/null +++ b/openpype/vendor/python/python_2/click/termui.py @@ -0,0 +1,681 @@ +import inspect +import io +import itertools +import os +import struct +import sys + +from ._compat import DEFAULT_COLUMNS +from ._compat import get_winterm_size +from ._compat import isatty +from ._compat import raw_input +from ._compat import string_types +from ._compat import strip_ansi +from ._compat import text_type +from ._compat import WIN +from .exceptions import Abort +from .exceptions import UsageError +from .globals import resolve_color_default +from .types import Choice +from .types import convert_type +from .types import Path +from .utils import echo +from .utils import LazyFile + +# The prompt functions to use. The doc tools currently override these +# functions to customize how they work. 
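+# ``visible_prompt_func`` echoes what the user types (it is plain
+# ``raw_input``), while ``hidden_prompt_func`` below routes through
+# ``getpass`` so the typed value stays off-screen.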
+visible_prompt_func = raw_input + +_ansi_colors = { + "black": 30, + "red": 31, + "green": 32, + "yellow": 33, + "blue": 34, + "magenta": 35, + "cyan": 36, + "white": 37, + "reset": 39, + "bright_black": 90, + "bright_red": 91, + "bright_green": 92, + "bright_yellow": 93, + "bright_blue": 94, + "bright_magenta": 95, + "bright_cyan": 96, + "bright_white": 97, +} +_ansi_reset_all = "\033[0m" + + +def hidden_prompt_func(prompt): + import getpass + + return getpass.getpass(prompt) + + +def _build_prompt( + text, suffix, show_default=False, default=None, show_choices=True, type=None +): + prompt = text + if type is not None and show_choices and isinstance(type, Choice): + prompt += " ({})".format(", ".join(map(str, type.choices))) + if default is not None and show_default: + prompt = "{} [{}]".format(prompt, _format_default(default)) + return prompt + suffix + + +def _format_default(default): + if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): + return default.name + + return default + + +def prompt( + text, + default=None, + hide_input=False, + confirmation_prompt=False, + type=None, + value_proc=None, + prompt_suffix=": ", + show_default=True, + err=False, + show_choices=True, +): + """Prompts a user for input. This is a convenience function that can + be used to prompt a user for input later. + + If the user aborts the input by sending a interrupt signal, this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 7.0 + Added the show_choices parameter. + + .. versionadded:: 6.0 + Added unicode support for cmd.exe on Windows. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the text to show for the prompt. + :param default: the default value to use if no input happens. If this + is not given it will prompt until it's aborted. + :param hide_input: if this is set to true then the input value will + be hidden. + :param confirmation_prompt: asks for confirmation for the value. + :param type: the type to use to check the value against. + :param value_proc: if this parameter is provided it's a function that + is invoked instead of the type conversion to + convert a value. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + :param show_choices: Show or hide choices if the passed type is a Choice. + For example if type is a Choice of either day or week, + show_choices is true and text is "Group by" then the + prompt will be "Group by (day, week): ". + """ + result = None + + def prompt_func(text): + f = hidden_prompt_func if hide_input else visible_prompt_func + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(text, nl=False, err=err) + return f("") + except (KeyboardInterrupt, EOFError): + # getpass doesn't print a newline if the user aborts input with ^C. + # Allegedly this behavior is inherited from getpass(3). 
+ # A doc bug has been filed at https://bugs.python.org/issue24711 + if hide_input: + echo(None, err=err) + raise Abort() + + if value_proc is None: + value_proc = convert_type(type, default) + + prompt = _build_prompt( + text, prompt_suffix, show_default, default, show_choices, type + ) + + while 1: + while 1: + value = prompt_func(prompt) + if value: + break + elif default is not None: + if isinstance(value_proc, Path): + # validate Path default value(exists, dir_okay etc.) + value = default + break + return default + try: + result = value_proc(value) + except UsageError as e: + echo("Error: {}".format(e.message), err=err) # noqa: B306 + continue + if not confirmation_prompt: + return result + while 1: + value2 = prompt_func("Repeat for confirmation: ") + if value2: + break + if value == value2: + return result + echo("Error: the two entered values do not match", err=err) + + +def confirm( + text, default=False, abort=False, prompt_suffix=": ", show_default=True, err=False +): + """Prompts for confirmation (yes/no question). + + If the user aborts the input by sending a interrupt signal this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the question to ask. + :param default: the default for the prompt. + :param abort: if this is set to `True` a negative answer aborts the + exception by raising :exc:`Abort`. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + prompt = _build_prompt( + text, prompt_suffix, show_default, "Y/n" if default else "y/N" + ) + while 1: + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(prompt, nl=False, err=err) + value = visible_prompt_func("").lower().strip() + except (KeyboardInterrupt, EOFError): + raise Abort() + if value in ("y", "yes"): + rv = True + elif value in ("n", "no"): + rv = False + elif value == "": + rv = default + else: + echo("Error: invalid input", err=err) + continue + break + if abort and not rv: + raise Abort() + return rv + + +def get_terminal_size(): + """Returns the current size of the terminal as tuple in the form + ``(width, height)`` in columns and rows. + """ + # If shutil has get_terminal_size() (Python 3.3 and later) use that + if sys.version_info >= (3, 3): + import shutil + + shutil_get_terminal_size = getattr(shutil, "get_terminal_size", None) + if shutil_get_terminal_size: + sz = shutil_get_terminal_size() + return sz.columns, sz.lines + + # We provide a sensible default for get_winterm_size() when being invoked + # inside a subprocess. Without this, it would not provide a useful input. 
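+    # A (0, 0) answer means no real console is attached, so fall back to a
+    # conservative default instead of reporting a zero-sized terminal.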
+ if get_winterm_size is not None: + size = get_winterm_size() + if size == (0, 0): + return (79, 24) + else: + return size + + def ioctl_gwinsz(fd): + try: + import fcntl + import termios + + cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234")) + except Exception: + return + return cr + + cr = ioctl_gwinsz(0) or ioctl_gwinsz(1) or ioctl_gwinsz(2) + if not cr: + try: + fd = os.open(os.ctermid(), os.O_RDONLY) + try: + cr = ioctl_gwinsz(fd) + finally: + os.close(fd) + except Exception: + pass + if not cr or not cr[0] or not cr[1]: + cr = (os.environ.get("LINES", 25), os.environ.get("COLUMNS", DEFAULT_COLUMNS)) + return int(cr[1]), int(cr[0]) + + +def echo_via_pager(text_or_generator, color=None): + """This function takes a text and shows it via an environment specific + pager on stdout. + + .. versionchanged:: 3.0 + Added the `color` flag. + + :param text_or_generator: the text to page, or alternatively, a + generator emitting the text to page. + :param color: controls if the pager supports ANSI colors or not. The + default is autodetection. + """ + color = resolve_color_default(color) + + if inspect.isgeneratorfunction(text_or_generator): + i = text_or_generator() + elif isinstance(text_or_generator, string_types): + i = [text_or_generator] + else: + i = iter(text_or_generator) + + # convert every element of i to a text type if necessary + text_generator = (el if isinstance(el, string_types) else text_type(el) for el in i) + + from ._termui_impl import pager + + return pager(itertools.chain(text_generator, "\n"), color) + + +def progressbar( + iterable=None, + length=None, + label=None, + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + fill_char="#", + empty_char="-", + bar_template="%(label)s [%(bar)s] %(info)s", + info_sep=" ", + width=36, + file=None, + color=None, +): + """This function creates an iterable context manager that can be used + to iterate over something while showing a progress bar. It will + either iterate over the `iterable` or `length` items (that are counted + up). While iteration happens, this function will print a rendered + progress bar to the given `file` (defaults to stdout) and will attempt + to calculate remaining time and more. By default, this progress bar + will not be rendered if the file is not a terminal. + + The context manager creates the progress bar. When the context + manager is entered the progress bar is already created. With every + iteration over the progress bar, the iterable passed to the bar is + advanced and the bar is updated. When the context manager exits, + a newline is printed and the progress bar is finalized on screen. + + Note: The progress bar is currently designed for use cases where the + total progress can be expected to take at least several seconds. + Because of this, the ProgressBar class object won't display + progress that is considered too fast, and progress where the time + between steps is less than a second. + + No printing must happen or the progress bar will be unintentionally + destroyed. + + Example usage:: + + with progressbar(items) as bar: + for item in bar: + do_something_with(item) + + Alternatively, if no iterable is specified, one can manually update the + progress bar through the `update()` method instead of directly + iterating over the progress bar. The update method accepts the number + of steps to increment the bar with:: + + with progressbar(length=chunks.total_bytes) as bar: + for chunk in chunks: + process_chunk(chunk) + bar.update(chunks.bytes) + + .. 
versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `color` parameter. Added a `update` method to the + progressbar object. + + :param iterable: an iterable to iterate over. If not provided the length + is required. + :param length: the number of items to iterate over. By default the + progressbar will attempt to ask the iterator about its + length, which might or might not work. If an iterable is + also provided this parameter can be used to override the + length. If an iterable is not provided the progress bar + will iterate over a range of that length. + :param label: the label to show next to the progress bar. + :param show_eta: enables or disables the estimated time display. This is + automatically disabled if the length cannot be + determined. + :param show_percent: enables or disables the percentage display. The + default is `True` if the iterable has a length or + `False` if not. + :param show_pos: enables or disables the absolute position display. The + default is `False`. + :param item_show_func: a function called with the current item which + can return a string to show the current item + next to the progress bar. Note that the current + item can be `None`! + :param fill_char: the character to use to show the filled part of the + progress bar. + :param empty_char: the character to use to show the non-filled part of + the progress bar. + :param bar_template: the format string to use as template for the bar. + The parameters in it are ``label`` for the label, + ``bar`` for the progress bar and ``info`` for the + info section. + :param info_sep: the separator between multiple info items (eta etc.) + :param width: the width of the progress bar in characters, 0 means full + terminal width + :param file: the file to write to. If this is not a terminal then + only the label is printed. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are included anywhere in the progress bar output + which is not the case by default. + """ + from ._termui_impl import ProgressBar + + color = resolve_color_default(color) + return ProgressBar( + iterable=iterable, + length=length, + show_eta=show_eta, + show_percent=show_percent, + show_pos=show_pos, + item_show_func=item_show_func, + fill_char=fill_char, + empty_char=empty_char, + bar_template=bar_template, + info_sep=info_sep, + file=file, + label=label, + width=width, + color=color, + ) + + +def clear(): + """Clears the terminal screen. This will have the effect of clearing + the whole visible space of the terminal and moving the cursor to the + top left. This does not do anything if not connected to a terminal. + + .. versionadded:: 2.0 + """ + if not isatty(sys.stdout): + return + # If we're on Windows and we don't have colorama available, then we + # clear the screen by shelling out. Otherwise we can use an escape + # sequence. + if WIN: + os.system("cls") + else: + sys.stdout.write("\033[2J\033[1;1H") + + +def style( + text, + fg=None, + bg=None, + bold=None, + dim=None, + underline=None, + blink=None, + reverse=None, + reset=True, +): + """Styles a text with ANSI styles and returns the new string. By + default the styling is self contained which means that at the end + of the string a reset code is issued. This can be prevented by + passing ``reset=False``. 
+ + Examples:: + + click.echo(click.style('Hello World!', fg='green')) + click.echo(click.style('ATTENTION!', blink=True)) + click.echo(click.style('Some things', reverse=True, fg='cyan')) + + Supported color names: + + * ``black`` (might be a gray) + * ``red`` + * ``green`` + * ``yellow`` (might be an orange) + * ``blue`` + * ``magenta`` + * ``cyan`` + * ``white`` (might be light gray) + * ``bright_black`` + * ``bright_red`` + * ``bright_green`` + * ``bright_yellow`` + * ``bright_blue`` + * ``bright_magenta`` + * ``bright_cyan`` + * ``bright_white`` + * ``reset`` (reset the color code only) + + .. versionadded:: 2.0 + + .. versionadded:: 7.0 + Added support for bright colors. + + :param text: the string to style with ansi codes. + :param fg: if provided this will become the foreground color. + :param bg: if provided this will become the background color. + :param bold: if provided this will enable or disable bold mode. + :param dim: if provided this will enable or disable dim mode. This is + badly supported. + :param underline: if provided this will enable or disable underline. + :param blink: if provided this will enable or disable blinking. + :param reverse: if provided this will enable or disable inverse + rendering (foreground becomes background and the + other way round). + :param reset: by default a reset-all code is added at the end of the + string which means that styles do not carry over. This + can be disabled to compose styles. + """ + bits = [] + if fg: + try: + bits.append("\033[{}m".format(_ansi_colors[fg])) + except KeyError: + raise TypeError("Unknown color '{}'".format(fg)) + if bg: + try: + bits.append("\033[{}m".format(_ansi_colors[bg] + 10)) + except KeyError: + raise TypeError("Unknown color '{}'".format(bg)) + if bold is not None: + bits.append("\033[{}m".format(1 if bold else 22)) + if dim is not None: + bits.append("\033[{}m".format(2 if dim else 22)) + if underline is not None: + bits.append("\033[{}m".format(4 if underline else 24)) + if blink is not None: + bits.append("\033[{}m".format(5 if blink else 25)) + if reverse is not None: + bits.append("\033[{}m".format(7 if reverse else 27)) + bits.append(text) + if reset: + bits.append(_ansi_reset_all) + return "".join(bits) + + +def unstyle(text): + """Removes ANSI styling information from a string. Usually it's not + necessary to use this function as Click's echo function will + automatically remove styling if necessary. + + .. versionadded:: 2.0 + + :param text: the text to remove style information from. + """ + return strip_ansi(text) + + +def secho(message=None, file=None, nl=True, err=False, color=None, **styles): + """This function combines :func:`echo` and :func:`style` into one + call. As such the following two calls are the same:: + + click.secho('Hello World!', fg='green') + click.echo(click.style('Hello World!', fg='green')) + + All keyword arguments are forwarded to the underlying functions + depending on which one they go with. + + .. versionadded:: 2.0 + """ + if message is not None: + message = style(message, **styles) + return echo(message, file=file, nl=nl, err=err, color=color) + + +def edit( + text=None, editor=None, env=None, require_save=True, extension=".txt", filename=None +): + r"""Edits the given text in the defined editor. If an editor is given + (should be the full path to the executable but the regular operating + system search path is used for finding the executable) it overrides + the detected editor. Optionally, some environment variables can be + used. 
If the editor is closed without changes, `None` is returned. In + case a file is edited directly the return value is always `None` and + `require_save` and `extension` are ignored. + + If the editor cannot be opened a :exc:`UsageError` is raised. + + Note for Windows: to simplify cross-platform usage, the newlines are + automatically converted from POSIX to Windows and vice versa. As such, + the message here will have ``\n`` as newline markers. + + :param text: the text to edit. + :param editor: optionally the editor to use. Defaults to automatic + detection. + :param env: environment variables to forward to the editor. + :param require_save: if this is true, then not saving in the editor + will make the return value become `None`. + :param extension: the extension to tell the editor about. This defaults + to `.txt` but changing this might change syntax + highlighting. + :param filename: if provided it will edit this file instead of the + provided text contents. It will not use a temporary + file as an indirection in that case. + """ + from ._termui_impl import Editor + + editor = Editor( + editor=editor, env=env, require_save=require_save, extension=extension + ) + if filename is None: + return editor.edit(text) + editor.edit_file(filename) + + +def launch(url, wait=False, locate=False): + """This function launches the given URL (or filename) in the default + viewer application for this file type. If this is an executable, it + might launch the executable in a new session. The return value is + the exit code of the launched application. Usually, ``0`` indicates + success. + + Examples:: + + click.launch('https://click.palletsprojects.com/') + click.launch('/my/downloaded/file', locate=True) + + .. versionadded:: 2.0 + + :param url: URL or filename of the thing to launch. + :param wait: waits for the program to stop. + :param locate: if this is set to `True` then instead of launching the + application associated with the URL it will attempt to + launch a file manager with the file located. This + might have weird effects if the URL does not point to + the filesystem. + """ + from ._termui_impl import open_url + + return open_url(url, wait=wait, locate=locate) + + +# If this is provided, getchar() calls into this instead. This is used +# for unittesting purposes. +_getchar = None + + +def getchar(echo=False): + """Fetches a single character from the terminal and returns it. This + will always return a unicode character and under certain rare + circumstances this might return more than one character. The + situations which more than one character is returned is when for + whatever reason multiple characters end up in the terminal buffer or + standard input was not actually a terminal. + + Note that this will always read from the terminal, even if something + is piped into the standard input. + + Note for Windows: in rare cases when typing non-ASCII characters, this + function might wait for a second character and then return both at once. + This is because certain Unicode characters look like special-key markers. + + .. versionadded:: 2.0 + + :param echo: if set to `True`, the character read will also show up on + the terminal. The default is to not show it. + """ + f = _getchar + if f is None: + from ._termui_impl import getchar as f + return f(echo) + + +def raw_terminal(): + from ._termui_impl import raw_terminal as f + + return f() + + +def pause(info="Press any key to continue ...", err=False): + """This command stops execution and waits for the user to press any + key to continue. 
This is similar to the Windows batch "pause" + command. If the program is not run through a terminal, this command + will instead do nothing. + + .. versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param info: the info string to print before pausing. + :param err: if set to message goes to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + if not isatty(sys.stdin) or not isatty(sys.stdout): + return + try: + if info: + echo(info, nl=False, err=err) + try: + getchar() + except (KeyboardInterrupt, EOFError): + pass + finally: + if info: + echo(err=err) diff --git a/openpype/vendor/python/python_2/click/testing.py b/openpype/vendor/python/python_2/click/testing.py new file mode 100644 index 0000000000..a3dba3b301 --- /dev/null +++ b/openpype/vendor/python/python_2/click/testing.py @@ -0,0 +1,382 @@ +import contextlib +import os +import shlex +import shutil +import sys +import tempfile + +from . import formatting +from . import termui +from . import utils +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types + + +if PY2: + from cStringIO import StringIO +else: + import io + from ._compat import _find_binary_reader + + +class EchoingStdin(object): + def __init__(self, input, output): + self._input = input + self._output = output + + def __getattr__(self, x): + return getattr(self._input, x) + + def _echo(self, rv): + self._output.write(rv) + return rv + + def read(self, n=-1): + return self._echo(self._input.read(n)) + + def readline(self, n=-1): + return self._echo(self._input.readline(n)) + + def readlines(self): + return [self._echo(x) for x in self._input.readlines()] + + def __iter__(self): + return iter(self._echo(x) for x in self._input) + + def __repr__(self): + return repr(self._input) + + +def make_input_stream(input, charset): + # Is already an input stream. + if hasattr(input, "read"): + if PY2: + return input + rv = _find_binary_reader(input) + if rv is not None: + return rv + raise TypeError("Could not find binary reader for input stream.") + + if input is None: + input = b"" + elif not isinstance(input, bytes): + input = input.encode(charset) + if PY2: + return StringIO(input) + return io.BytesIO(input) + + +class Result(object): + """Holds the captured result of an invoked CLI script.""" + + def __init__( + self, runner, stdout_bytes, stderr_bytes, exit_code, exception, exc_info=None + ): + #: The runner that created the result + self.runner = runner + #: The standard output as bytes. + self.stdout_bytes = stdout_bytes + #: The standard error as bytes, or None if not available + self.stderr_bytes = stderr_bytes + #: The exit code as integer. + self.exit_code = exit_code + #: The exception that happened if one did. 
+ self.exception = exception + #: The traceback + self.exc_info = exc_info + + @property + def output(self): + """The (standard) output as unicode string.""" + return self.stdout + + @property + def stdout(self): + """The standard output as unicode string.""" + return self.stdout_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + @property + def stderr(self): + """The standard error as unicode string.""" + if self.stderr_bytes is None: + raise ValueError("stderr not separately captured") + return self.stderr_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + def __repr__(self): + return "<{} {}>".format( + type(self).__name__, repr(self.exception) if self.exception else "okay" + ) + + +class CliRunner(object): + """The CLI runner provides functionality to invoke a Click command line + script for unittesting purposes in a isolated environment. This only + works in single-threaded systems without any concurrency as it changes the + global interpreter state. + + :param charset: the character set for the input and output data. This is + UTF-8 by default and should not be changed currently as + the reporting to Click only works in Python 2 properly. + :param env: a dictionary with environment variables for overriding. + :param echo_stdin: if this is set to `True`, then reading from stdin writes + to stdout. This is useful for showing examples in + some circumstances. Note that regular prompts + will automatically echo the input. + :param mix_stderr: if this is set to `False`, then stdout and stderr are + preserved as independent streams. This is useful for + Unix-philosophy apps that have predictable stdout and + noisy stderr, such that each may be measured + independently + """ + + def __init__(self, charset=None, env=None, echo_stdin=False, mix_stderr=True): + if charset is None: + charset = "utf-8" + self.charset = charset + self.env = env or {} + self.echo_stdin = echo_stdin + self.mix_stderr = mix_stderr + + def get_default_prog_name(self, cli): + """Given a command object it will return the default program name + for it. The default is the `name` attribute or ``"root"`` if not + set. + """ + return cli.name or "root" + + def make_env(self, overrides=None): + """Returns the environment overrides for invoking a script.""" + rv = dict(self.env) + if overrides: + rv.update(overrides) + return rv + + @contextlib.contextmanager + def isolation(self, input=None, env=None, color=False): + """A context manager that sets up the isolation for invoking of a + command line tool. This sets up stdin with the given input data + and `os.environ` with the overrides from the given dictionary. + This also rebinds some internals in Click to be mocked (like the + prompt functionality). + + This is automatically done in the :meth:`invoke` method. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param input: the input stream to put into sys.stdin. + :param env: the environment overrides as dictionary. + :param color: whether the output should contain color codes. The + application can still override this explicitly. 
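+
+        A minimal sketch of standalone use (:meth:`invoke` normally
+        calls this for you, so this is illustrative only)::
+
+            runner = CliRunner()
+            with runner.isolation(input="hello"):
+                assert sys.stdin.read() == "hello"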
+ """ + input = make_input_stream(input, self.charset) + + old_stdin = sys.stdin + old_stdout = sys.stdout + old_stderr = sys.stderr + old_forced_width = formatting.FORCED_WIDTH + formatting.FORCED_WIDTH = 80 + + env = self.make_env(env) + + if PY2: + bytes_output = StringIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + sys.stdout = bytes_output + if not self.mix_stderr: + bytes_error = StringIO() + sys.stderr = bytes_error + else: + bytes_output = io.BytesIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + input = io.TextIOWrapper(input, encoding=self.charset) + sys.stdout = io.TextIOWrapper(bytes_output, encoding=self.charset) + if not self.mix_stderr: + bytes_error = io.BytesIO() + sys.stderr = io.TextIOWrapper(bytes_error, encoding=self.charset) + + if self.mix_stderr: + sys.stderr = sys.stdout + + sys.stdin = input + + def visible_input(prompt=None): + sys.stdout.write(prompt or "") + val = input.readline().rstrip("\r\n") + sys.stdout.write("{}\n".format(val)) + sys.stdout.flush() + return val + + def hidden_input(prompt=None): + sys.stdout.write("{}\n".format(prompt or "")) + sys.stdout.flush() + return input.readline().rstrip("\r\n") + + def _getchar(echo): + char = sys.stdin.read(1) + if echo: + sys.stdout.write(char) + sys.stdout.flush() + return char + + default_color = color + + def should_strip_ansi(stream=None, color=None): + if color is None: + return not default_color + return not color + + old_visible_prompt_func = termui.visible_prompt_func + old_hidden_prompt_func = termui.hidden_prompt_func + old__getchar_func = termui._getchar + old_should_strip_ansi = utils.should_strip_ansi + termui.visible_prompt_func = visible_input + termui.hidden_prompt_func = hidden_input + termui._getchar = _getchar + utils.should_strip_ansi = should_strip_ansi + + old_env = {} + try: + for key, value in iteritems(env): + old_env[key] = os.environ.get(key) + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + yield (bytes_output, not self.mix_stderr and bytes_error) + finally: + for key, value in iteritems(old_env): + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + sys.stdout = old_stdout + sys.stderr = old_stderr + sys.stdin = old_stdin + termui.visible_prompt_func = old_visible_prompt_func + termui.hidden_prompt_func = old_hidden_prompt_func + termui._getchar = old__getchar_func + utils.should_strip_ansi = old_should_strip_ansi + formatting.FORCED_WIDTH = old_forced_width + + def invoke( + self, + cli, + args=None, + input=None, + env=None, + catch_exceptions=True, + color=False, + **extra + ): + """Invokes a command in an isolated environment. The arguments are + forwarded directly to the command line script, the `extra` keyword + arguments are passed to the :meth:`~clickpkg.Command.main` function of + the command. + + This returns a :class:`Result` object. + + .. versionadded:: 3.0 + The ``catch_exceptions`` parameter was added. + + .. versionchanged:: 3.0 + The result object now has an `exc_info` attribute with the + traceback if available. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param cli: the command to invoke + :param args: the arguments to invoke. It may be given as an iterable + or a string. When given as string it will be interpreted + as a Unix shell command. More details at + :func:`shlex.split`. + :param input: the input data for `sys.stdin`. + :param env: the environment overrides. 
+ :param catch_exceptions: Whether to catch any other exceptions than + ``SystemExit``. + :param extra: the keyword arguments to pass to :meth:`main`. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + """ + exc_info = None + with self.isolation(input=input, env=env, color=color) as outstreams: + exception = None + exit_code = 0 + + if isinstance(args, string_types): + args = shlex.split(args) + + try: + prog_name = extra.pop("prog_name") + except KeyError: + prog_name = self.get_default_prog_name(cli) + + try: + cli.main(args=args or (), prog_name=prog_name, **extra) + except SystemExit as e: + exc_info = sys.exc_info() + exit_code = e.code + if exit_code is None: + exit_code = 0 + + if exit_code != 0: + exception = e + + if not isinstance(exit_code, int): + sys.stdout.write(str(exit_code)) + sys.stdout.write("\n") + exit_code = 1 + + except Exception as e: + if not catch_exceptions: + raise + exception = e + exit_code = 1 + exc_info = sys.exc_info() + finally: + sys.stdout.flush() + stdout = outstreams[0].getvalue() + if self.mix_stderr: + stderr = None + else: + stderr = outstreams[1].getvalue() + + return Result( + runner=self, + stdout_bytes=stdout, + stderr_bytes=stderr, + exit_code=exit_code, + exception=exception, + exc_info=exc_info, + ) + + @contextlib.contextmanager + def isolated_filesystem(self): + """A context manager that creates a temporary folder and changes + the current working directory to it for isolated filesystem tests. + """ + cwd = os.getcwd() + t = tempfile.mkdtemp() + os.chdir(t) + try: + yield t + finally: + os.chdir(cwd) + try: + shutil.rmtree(t) + except (OSError, IOError): # noqa: B014 + pass diff --git a/openpype/vendor/python/python_2/click/types.py b/openpype/vendor/python/python_2/click/types.py new file mode 100644 index 0000000000..505c39f850 --- /dev/null +++ b/openpype/vendor/python/python_2/click/types.py @@ -0,0 +1,762 @@ +import os +import stat +from datetime import datetime + +from ._compat import _get_argv_encoding +from ._compat import filename_to_ui +from ._compat import get_filesystem_encoding +from ._compat import get_streerror +from ._compat import open_stream +from ._compat import PY2 +from ._compat import text_type +from .exceptions import BadParameter +from .utils import LazyFile +from .utils import safecall + + +class ParamType(object): + """Helper for converting values through types. The following is + necessary for a valid type: + + * it needs a name + * it needs to pass through None unchanged + * it needs to convert from a string + * it needs to convert its result type through unchanged + (eg: needs to be idempotent) + * it needs to be able to deal with param and context being `None`. + This can be the case when the object is used with prompt + inputs. + """ + + is_composite = False + + #: the descriptive name of this type + name = None + + #: if a list of this type is expected and the value is pulled from a + #: string environment variable, this is what splits it up. `None` + #: means any whitespace. For all parameters the general rule is that + #: whitespace splits them up. The exception are paths and files which + #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on + #: Windows). 
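+    #: :class:`File`, for example, overrides this with ``os.path.pathsep``.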
+ envvar_list_splitter = None + + def __call__(self, value, param=None, ctx=None): + if value is not None: + return self.convert(value, param, ctx) + + def get_metavar(self, param): + """Returns the metavar default for this param if it provides one.""" + + def get_missing_message(self, param): + """Optionally might return extra information about a missing + parameter. + + .. versionadded:: 2.0 + """ + + def convert(self, value, param, ctx): + """Converts the value. This is not invoked for values that are + `None` (the missing value). + """ + return value + + def split_envvar_value(self, rv): + """Given a value from an environment variable this splits it up + into small chunks depending on the defined envvar list splitter. + + If the splitter is set to `None`, which means that whitespace splits, + then leading and trailing whitespace is ignored. Otherwise, leading + and trailing splitters usually lead to empty items being included. + """ + return (rv or "").split(self.envvar_list_splitter) + + def fail(self, message, param=None, ctx=None): + """Helper method to fail with an invalid value message.""" + raise BadParameter(message, ctx=ctx, param=param) + + +class CompositeParamType(ParamType): + is_composite = True + + @property + def arity(self): + raise NotImplementedError() + + +class FuncParamType(ParamType): + def __init__(self, func): + self.name = func.__name__ + self.func = func + + def convert(self, value, param, ctx): + try: + return self.func(value) + except ValueError: + try: + value = text_type(value) + except UnicodeError: + value = str(value).decode("utf-8", "replace") + self.fail(value, param, ctx) + + +class UnprocessedParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + return value + + def __repr__(self): + return "UNPROCESSED" + + +class StringParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + if isinstance(value, bytes): + enc = _get_argv_encoding() + try: + value = value.decode(enc) + except UnicodeError: + fs_enc = get_filesystem_encoding() + if fs_enc != enc: + try: + value = value.decode(fs_enc) + except UnicodeError: + value = value.decode("utf-8", "replace") + else: + value = value.decode("utf-8", "replace") + return value + return value + + def __repr__(self): + return "STRING" + + +class Choice(ParamType): + """The choice type allows a value to be checked against a fixed set + of supported values. All of these values have to be strings. + + You should only pass a list or tuple of choices. Other iterables + (like generators) may lead to surprising results. + + The resulting value will always be one of the originally passed choices + regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` + being specified. + + See :ref:`choice-opts` for an example. + + :param case_sensitive: Set to false to make choices case + insensitive. Defaults to true. 
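+
+    A short sketch (the command and option names are illustrative)::
+
+        @click.command()
+        @click.option("--hash-type",
+                      type=click.Choice(["md5", "sha1"], case_sensitive=False))
+        def digest(hash_type):
+            click.echo(hash_type)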
+ """ + + name = "choice" + + def __init__(self, choices, case_sensitive=True): + self.choices = choices + self.case_sensitive = case_sensitive + + def get_metavar(self, param): + return "[{}]".format("|".join(self.choices)) + + def get_missing_message(self, param): + return "Choose from:\n\t{}.".format(",\n\t".join(self.choices)) + + def convert(self, value, param, ctx): + # Match through normalization and case sensitivity + # first do token_normalize_func, then lowercase + # preserve original `value` to produce an accurate message in + # `self.fail` + normed_value = value + normed_choices = {choice: choice for choice in self.choices} + + if ctx is not None and ctx.token_normalize_func is not None: + normed_value = ctx.token_normalize_func(value) + normed_choices = { + ctx.token_normalize_func(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if not self.case_sensitive: + if PY2: + lower = str.lower + else: + lower = str.casefold + + normed_value = lower(normed_value) + normed_choices = { + lower(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if normed_value in normed_choices: + return normed_choices[normed_value] + + self.fail( + "invalid choice: {}. (choose from {})".format( + value, ", ".join(self.choices) + ), + param, + ctx, + ) + + def __repr__(self): + return "Choice('{}')".format(list(self.choices)) + + +class DateTime(ParamType): + """The DateTime type converts date strings into `datetime` objects. + + The format strings which are checked are configurable, but default to some + common (non-timezone aware) ISO 8601 formats. + + When specifying *DateTime* formats, you should only pass a list or a tuple. + Other iterables, like generators, may lead to surprising results. + + The format strings are processed using ``datetime.strptime``, and this + consequently defines the format strings which are allowed. + + Parsing is tried using each format, in order, and the first format which + parses successfully is used. + + :param formats: A list or tuple of date format strings, in the order in + which they should be tried. Defaults to + ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, + ``'%Y-%m-%d %H:%M:%S'``. + """ + + name = "datetime" + + def __init__(self, formats=None): + self.formats = formats or ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"] + + def get_metavar(self, param): + return "[{}]".format("|".join(self.formats)) + + def _try_to_convert_date(self, value, format): + try: + return datetime.strptime(value, format) + except ValueError: + return None + + def convert(self, value, param, ctx): + # Exact match + for format in self.formats: + dtime = self._try_to_convert_date(value, format) + if dtime: + return dtime + + self.fail( + "invalid datetime format: {}. (choose from {})".format( + value, ", ".join(self.formats) + ) + ) + + def __repr__(self): + return "DateTime" + + +class IntParamType(ParamType): + name = "integer" + + def convert(self, value, param, ctx): + try: + return int(value) + except ValueError: + self.fail("{} is not a valid integer".format(value), param, ctx) + + def __repr__(self): + return "INT" + + +class IntRange(IntParamType): + """A parameter that works similar to :data:`click.INT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. 
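+
+    An illustrative sketch of both behaviors::
+
+        IntRange(0, 10, clamp=True).convert("99", None, None)  # returns 10
+        IntRange(0, 10).convert("99", None, None)  # raises BadParameter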
+ """ + + name = "integer range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = IntParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "IntRange({}, {})".format(self.min, self.max) + + +class FloatParamType(ParamType): + name = "float" + + def convert(self, value, param, ctx): + try: + return float(value) + except ValueError: + self.fail( + "{} is not a valid floating point value".format(value), param, ctx + ) + + def __repr__(self): + return "FLOAT" + + +class FloatRange(FloatParamType): + """A parameter that works similar to :data:`click.FLOAT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. + """ + + name = "float range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = FloatParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "FloatRange({}, {})".format(self.min, self.max) + + +class BoolParamType(ParamType): + name = "boolean" + + def convert(self, value, param, ctx): + if isinstance(value, bool): + return bool(value) + value = value.lower() + if value in ("true", "t", "1", "yes", "y"): + return True + elif value in ("false", "f", "0", "no", "n"): + return False + self.fail("{} is not a valid boolean".format(value), param, ctx) + + def __repr__(self): + return "BOOL" + + +class UUIDParameterType(ParamType): + name = "uuid" + + def convert(self, value, param, ctx): + import uuid + + try: + if PY2 and isinstance(value, text_type): + value = value.encode("ascii") + return uuid.UUID(value) + except ValueError: + self.fail("{} is not a valid UUID value".format(value), param, ctx) + + def __repr__(self): + return "UUID" + + +class File(ParamType): + """Declares a parameter to be a file for reading or writing. The file + is automatically closed once the context tears down (after the command + finished working). + + Files can be opened for reading or writing. 
The special value ``-`` + indicates stdin or stdout depending on the mode. + + By default, the file is opened for reading text data, but it can also be + opened in binary mode or for writing. The encoding parameter can be used + to force a specific encoding. + + The `lazy` flag controls if the file should be opened immediately or upon + first IO. The default is to be non-lazy for standard input and output + streams as well as files opened for reading, `lazy` otherwise. When opening a + file lazily for reading, it is still opened temporarily for validation, but + will not be held open until first IO. lazy is mainly useful when opening + for writing to avoid creating the file until it is needed. + + Starting with Click 2.0, files can also be opened atomically in which + case all writes go into a separate file in the same folder and upon + completion the file will be moved over to the original location. This + is useful if a file regularly read by other users is modified. + + See :ref:`file-args` for more information. + """ + + name = "filename" + envvar_list_splitter = os.path.pathsep + + def __init__( + self, mode="r", encoding=None, errors="strict", lazy=None, atomic=False + ): + self.mode = mode + self.encoding = encoding + self.errors = errors + self.lazy = lazy + self.atomic = atomic + + def resolve_lazy_flag(self, value): + if self.lazy is not None: + return self.lazy + if value == "-": + return False + elif "w" in self.mode: + return True + return False + + def convert(self, value, param, ctx): + try: + if hasattr(value, "read") or hasattr(value, "write"): + return value + + lazy = self.resolve_lazy_flag(value) + + if lazy: + f = LazyFile( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + if ctx is not None: + ctx.call_on_close(f.close_intelligently) + return f + + f, should_close = open_stream( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + # If a context is provided, we automatically close the file + # at the end of the context execution (or flush out). If a + # context does not exist, it's the caller's responsibility to + # properly close the file. This for instance happens when the + # type is used with prompts. + if ctx is not None: + if should_close: + ctx.call_on_close(safecall(f.close)) + else: + ctx.call_on_close(safecall(f.flush)) + return f + except (IOError, OSError) as e: # noqa: B014 + self.fail( + "Could not open file: {}: {}".format( + filename_to_ui(value), get_streerror(e) + ), + param, + ctx, + ) + + +class Path(ParamType): + """The path type is similar to the :class:`File` type but it performs + different checks. First of all, instead of returning an open file + handle it returns just the filename. Secondly, it can perform various + basic checks about what the file or directory should be. + + .. versionchanged:: 6.0 + `allow_dash` was added. + + :param exists: if set to true, the file or directory needs to exist for + this value to be valid. If this is not required and a + file does indeed not exist, then all further checks are + silently skipped. + :param file_okay: controls if a file is a possible value. + :param dir_okay: controls if a directory is a possible value. + :param writable: if true, a writable check is performed. + :param readable: if true, a readable check is performed. + :param resolve_path: if this is true, then the path is fully resolved + before the value is passed onwards. This means + that it's absolute and symlinks are resolved. 
It + will not expand a tilde-prefix, as this is + supposed to be done by the shell only. + :param allow_dash: If this is set to `True`, a single dash to indicate + standard streams is permitted. + :param path_type: optionally a string type that should be used to + represent the path. The default is `None` which + means the return value will be either bytes or + unicode depending on what makes most sense given the + input data Click deals with. + """ + + envvar_list_splitter = os.path.pathsep + + def __init__( + self, + exists=False, + file_okay=True, + dir_okay=True, + writable=False, + readable=True, + resolve_path=False, + allow_dash=False, + path_type=None, + ): + self.exists = exists + self.file_okay = file_okay + self.dir_okay = dir_okay + self.writable = writable + self.readable = readable + self.resolve_path = resolve_path + self.allow_dash = allow_dash + self.type = path_type + + if self.file_okay and not self.dir_okay: + self.name = "file" + self.path_type = "File" + elif self.dir_okay and not self.file_okay: + self.name = "directory" + self.path_type = "Directory" + else: + self.name = "path" + self.path_type = "Path" + + def coerce_path_result(self, rv): + if self.type is not None and not isinstance(rv, self.type): + if self.type is text_type: + rv = rv.decode(get_filesystem_encoding()) + else: + rv = rv.encode(get_filesystem_encoding()) + return rv + + def convert(self, value, param, ctx): + rv = value + + is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") + + if not is_dash: + if self.resolve_path: + rv = os.path.realpath(rv) + + try: + st = os.stat(rv) + except OSError: + if not self.exists: + return self.coerce_path_result(rv) + self.fail( + "{} '{}' does not exist.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + if not self.file_okay and stat.S_ISREG(st.st_mode): + self.fail( + "{} '{}' is a file.".format(self.path_type, filename_to_ui(value)), + param, + ctx, + ) + if not self.dir_okay and stat.S_ISDIR(st.st_mode): + self.fail( + "{} '{}' is a directory.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.writable and not os.access(value, os.W_OK): + self.fail( + "{} '{}' is not writable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.readable and not os.access(value, os.R_OK): + self.fail( + "{} '{}' is not readable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + return self.coerce_path_result(rv) + + +class Tuple(CompositeParamType): + """The default behavior of Click is to apply a type on a value directly. + This works well in most cases, except for when `nargs` is set to a fixed + count and different types should be used for different items. In this + case the :class:`Tuple` type can be used. This type can only be used + if `nargs` is set to a fixed number. + + For more information see :ref:`tuple-type`. + + This can be selected by using a Python tuple literal as a type. + + :param types: a list of types that should be used for the tuple items. + """ + + def __init__(self, types): + self.types = [convert_type(ty) for ty in types] + + @property + def name(self): + return "<{}>".format(" ".join(ty.name for ty in self.types)) + + @property + def arity(self): + return len(self.types) + + def convert(self, value, param, ctx): + if len(value) != len(self.types): + raise TypeError( + "It would appear that nargs is set to conflict with the" + " composite type arity." 
+        )
+        return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value))
+
+
+def convert_type(ty, default=None):
+    """Converts a callable or python type into the most appropriate
+    param type.
+    """
+    guessed_type = False
+    if ty is None and default is not None:
+        if isinstance(default, tuple):
+            ty = tuple(map(type, default))
+        else:
+            ty = type(default)
+        guessed_type = True
+
+    if isinstance(ty, tuple):
+        return Tuple(ty)
+    if isinstance(ty, ParamType):
+        return ty
+    if ty is text_type or ty is str or ty is None:
+        return STRING
+    if ty is int:
+        return INT
+    # Booleans are only okay if not guessed. This is done because for
+    # flags the default value is actually a bit of a lie in that it
+    # indicates which of the flags is the one we want. See get_default()
+    # for more information.
+    if ty is bool and not guessed_type:
+        return BOOL
+    if ty is float:
+        return FLOAT
+    if guessed_type:
+        return STRING
+
+    # Catch a common mistake
+    if __debug__:
+        try:
+            if issubclass(ty, ParamType):
+                raise AssertionError(
+                    "Attempted to use an uninstantiated parameter type ({}).".format(ty)
+                )
+        except TypeError:
+            pass
+    return FuncParamType(ty)
+
+
+#: A dummy parameter type that just does nothing. From a user's
+#: perspective this appears to just be the same as `STRING` but internally
+#: no string conversion takes place. This is necessary to achieve the
+#: same bytes/unicode behavior on Python 2/3 in situations where you want
+#: to not convert argument types. This is usually useful when working
+#: with file paths as they can appear in bytes and unicode.
+#:
+#: For path related uses the :class:`Path` type is a better choice but
+#: there are situations where an unprocessed type is useful which is why
+#: it is provided.
+#:
+#: .. versionadded:: 4.0
+UNPROCESSED = UnprocessedParamType()
+
+#: A unicode string parameter type which is the implicit default. This
+#: can also be selected by using ``str`` as type.
+STRING = StringParamType()
+
+#: An integer parameter. This can also be selected by using ``int`` as
+#: type.
+INT = IntParamType()
+
+#: A floating point value parameter. This can also be selected by using
+#: ``float`` as type.
+FLOAT = FloatParamType()
+
+#: A boolean parameter. This is the default for boolean flags. This can
+#: also be selected by using ``bool`` as a type.
+BOOL = BoolParamType()
+
+#: A UUID parameter.
+UUID = UUIDParameterType()
diff --git a/openpype/vendor/python/python_2/click/utils.py b/openpype/vendor/python/python_2/click/utils.py
new file mode 100644
index 0000000000..79265e732d
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/utils.py
@@ -0,0 +1,455 @@
+import os
+import sys
+
+from ._compat import _default_text_stderr
+from ._compat import _default_text_stdout
+from ._compat import auto_wrap_for_ansi
+from ._compat import binary_streams
+from ._compat import filename_to_ui
+from ._compat import get_filesystem_encoding
+from ._compat import get_streerror
+from ._compat import is_bytes
+from ._compat import open_stream
+from ._compat import PY2
+from ._compat import should_strip_ansi
+from ._compat import string_types
+from ._compat import strip_ansi
+from ._compat import text_streams
+from ._compat import text_type
+from ._compat import WIN
+from .globals import resolve_color_default
+
+if not PY2:
+    from ._compat import _find_binary_writer
+elif WIN:
+    from ._winconsole import _get_windows_argv
+    from ._winconsole import _hash_py_argv
+    from ._winconsole import _initial_argv_hash
+
+echo_native_types = string_types + (bytes, bytearray)
+
+
+def _posixify(name):
+    return "-".join(name.split()).lower()
+
+
+def safecall(func):
+    """Wraps a function so that it swallows exceptions."""
+
+    def wrapper(*args, **kwargs):
+        try:
+            return func(*args, **kwargs)
+        except Exception:
+            pass
+
+    return wrapper
+
+
+def make_str(value):
+    """Converts a value into a valid string."""
+    if isinstance(value, bytes):
+        try:
+            return value.decode(get_filesystem_encoding())
+        except UnicodeError:
+            return value.decode("utf-8", "replace")
+    return text_type(value)
+
+
+def make_default_short_help(help, max_length=45):
+    """Return a condensed version of help string."""
+    words = help.split()
+    total_length = 0
+    result = []
+    done = False
+
+    for word in words:
+        if word[-1:] == ".":
+            done = True
+        new_length = 1 + len(word) if result else len(word)
+        if total_length + new_length > max_length:
+            result.append("...")
+            done = True
+        else:
+            if result:
+                result.append(" ")
+            result.append(word)
+        if done:
+            break
+        total_length += new_length
+
+    return "".join(result)
+
+
+class LazyFile(object):
+    """A lazy file works like a regular file but it does not fully open
+    the file. It does, however, perform some basic checks early to see if
+    the filename parameter makes sense. This is useful for safely opening
+    files for writing.
+    """
+
+    def __init__(
+        self, filename, mode="r", encoding=None, errors="strict", atomic=False
+    ):
+        self.name = filename
+        self.mode = mode
+        self.encoding = encoding
+        self.errors = errors
+        self.atomic = atomic
+
+        if filename == "-":
+            self._f, self.should_close = open_stream(filename, mode, encoding, errors)
+        else:
+            if "r" in mode:
+                # Open and close the file in case we're opening it for
+                # reading so that we can catch at least some errors in
+                # some cases early.
+                open(filename, mode).close()
+            self._f = None
+            self.should_close = True
+
+    def __getattr__(self, name):
+        return getattr(self.open(), name)
+
+    def __repr__(self):
+        if self._f is not None:
+            return repr(self._f)
+        return "<unopened file '{}' {}>".format(self.name, self.mode)
+
+    def open(self):
+        """Opens the file if it's not yet open. This call might fail with
+        a :exc:`FileError`. Not handling this error will produce an error
+        that Click shows.
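+
+        A short sketch of how this surfaces (illustrative only)::
+
+            lf = LazyFile("out.txt", "w")
+            f = lf.open()  # raises FileError if "out.txt" cannot be opened
+            f.write("hello")
+            lf.close_intelligently()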
+ """ + if self._f is not None: + return self._f + try: + rv, self.should_close = open_stream( + self.name, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + except (IOError, OSError) as e: # noqa: E402 + from .exceptions import FileError + + raise FileError(self.name, hint=get_streerror(e)) + self._f = rv + return rv + + def close(self): + """Closes the underlying file, no matter what.""" + if self._f is not None: + self._f.close() + + def close_intelligently(self): + """This function only closes the file if it was opened by the lazy + file wrapper. For instance this will never close stdin. + """ + if self.should_close: + self.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + self.close_intelligently() + + def __iter__(self): + self.open() + return iter(self._f) + + +class KeepOpenFile(object): + def __init__(self, file): + self._file = file + + def __getattr__(self, name): + return getattr(self._file, name) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + pass + + def __repr__(self): + return repr(self._file) + + def __iter__(self): + return iter(self._file) + + +def echo(message=None, file=None, nl=True, err=False, color=None): + """Prints a message plus a newline to the given file or stdout. On + first sight, this looks like the print function, but it has improved + support for handling Unicode and binary data that does not fail no + matter how badly configured the system is. + + Primarily it means that you can print binary data as well as Unicode + data on both 2.x and 3.x to the given file in the most appropriate way + possible. This is a very carefree function in that it will try its + best to not fail. As of Click 6.0 this includes support for unicode + output on the Windows console. + + In addition to that, if `colorama`_ is installed, the echo function will + also support clever handling of ANSI codes. Essentially it will then + do the following: + + - add transparent handling of ANSI color codes on Windows. + - hide ANSI codes automatically if the destination file is not a + terminal. + + .. _colorama: https://pypi.org/project/colorama/ + + .. versionchanged:: 6.0 + As of Click 6.0 the echo function will properly support unicode + output on the windows console. Not that click does not modify + the interpreter in any way which means that `sys.stdout` or the + print statement or function will still not provide unicode support. + + .. versionchanged:: 2.0 + Starting with version 2.0 of Click, the echo function will work + with colorama if it's installed. + + .. versionadded:: 3.0 + The `err` parameter was added. + + .. versionchanged:: 4.0 + Added the `color` flag. + + :param message: the message to print + :param file: the file to write to (defaults to ``stdout``) + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``. This is faster and easier than calling + :func:`get_text_stderr` yourself. + :param nl: if set to `True` (the default) a newline is printed afterwards. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. + """ + if file is None: + if err: + file = _default_text_stderr() + else: + file = _default_text_stdout() + + # Convert non bytes/text into the native string type. 
+    if message is not None and not isinstance(message, echo_native_types):
+        message = text_type(message)
+
+    if nl:
+        message = message or u""
+        if isinstance(message, text_type):
+            message += u"\n"
+        else:
+            message += b"\n"
+
+    # If there is a message, and we're in Python 3, and the value looks
+    # like bytes, we manually need to find the binary stream and write the
+    # message in there. This is done separately so that most stream
+    # types will work as you would expect. E.g. you can write to StringIO
+    # for other cases.
+    if message and not PY2 and is_bytes(message):
+        binary_file = _find_binary_writer(file)
+        if binary_file is not None:
+            file.flush()
+            binary_file.write(message)
+            binary_file.flush()
+            return
+
+    # ANSI-style support. If there is no message or we are dealing with
+    # bytes nothing is happening. If we are connected to a file we want
+    # to strip colors. If we are on windows we either wrap the stream
+    # to strip the color or we use the colorama support to translate the
+    # ansi codes to API calls.
+    if message and not is_bytes(message):
+        color = resolve_color_default(color)
+        if should_strip_ansi(file, color):
+            message = strip_ansi(message)
+        elif WIN:
+            if auto_wrap_for_ansi is not None:
+                file = auto_wrap_for_ansi(file)
+            elif not color:
+                message = strip_ansi(message)
+
+    if message:
+        file.write(message)
+        file.flush()
+
+
+def get_binary_stream(name):
+    """Returns a system stream for byte processing. This essentially
+    returns the stream from the sys module with the given name but it
+    solves some compatibility issues between different Python versions.
+    Primarily this function is necessary for getting binary streams on
+    Python 3.
+
+    :param name: the name of the stream to open. Valid names are ``'stdin'``,
+                 ``'stdout'`` and ``'stderr'``
+    """
+    opener = binary_streams.get(name)
+    if opener is None:
+        raise TypeError("Unknown standard stream '{}'".format(name))
+    return opener()
+
+
+def get_text_stream(name, encoding=None, errors="strict"):
+    """Returns a system stream for text processing. This usually returns
+    a wrapped stream around a binary stream returned from
+    :func:`get_binary_stream` but it can also take shortcuts on Python 3
+    for already correctly configured streams.
+
+    :param name: the name of the stream to open. Valid names are ``'stdin'``,
+                 ``'stdout'`` and ``'stderr'``
+    :param encoding: overrides the detected default encoding.
+    :param errors: overrides the default error mode.
+    """
+    opener = text_streams.get(name)
+    if opener is None:
+        raise TypeError("Unknown standard stream '{}'".format(name))
+    return opener(encoding, errors)
+
+
+def open_file(
+    filename, mode="r", encoding=None, errors="strict", lazy=False, atomic=False
+):
+    """This is similar to how the :class:`File` works but for manual
+    usage. Files are opened non-lazily by default. This can open regular
+    files as well as stdin/stdout if ``'-'`` is passed.
+
+    If stdin/stdout is returned the stream is wrapped so that the context
+    manager will not close the stream accidentally. This makes it possible
+    to always use the function like this without having to worry about
+    accidentally closing a standard stream::
+
+        with open_file(filename) as f:
+            ...
+
+    .. versionadded:: 3.0
+
+    :param filename: the name of the file to open (or ``'-'`` for stdin/stdout).
+    :param mode: the mode in which to open the file.
+    :param encoding: the encoding to use.
+    :param errors: the error handling for this file.
+    :param lazy: can be flipped to true to open the file lazily.
+    :param atomic: in atomic mode writes go into a temporary file and it's
+                   moved on close.
+    """
+    if lazy:
+        return LazyFile(filename, mode, encoding, errors, atomic=atomic)
+    f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic)
+    if not should_close:
+        f = KeepOpenFile(f)
+    return f
+
+
+def get_os_args():
+    """This returns the argument part of sys.argv in the most appropriate
+    form for processing. What this means is that this return value is in
+    a format that works for Click to process but does not necessarily
+    correspond well to what's actually standard for the interpreter.
+
+    On most environments the return value is ``sys.argv[1:]`` unchanged.
+    However if you are on Windows and running Python 2 the return value
+    will actually be a list of unicode strings instead because the
+    default behavior on that platform otherwise will not be able to
+    carry all possible values that sys.argv can have.
+
+    .. versionadded:: 6.0
+    """
+    # We can only extract the unicode argv if sys.argv has not been
+    # changed since the startup of the application.
+    if PY2 and WIN and _initial_argv_hash == _hash_py_argv():
+        return _get_windows_argv()
+    return sys.argv[1:]
+
+
+def format_filename(filename, shorten=False):
+    """Formats a filename for user display. The main purpose of this
+    function is to ensure that the filename can be displayed at all. This
+    will decode the filename to unicode if necessary in a way that it will
+    not fail. Optionally, it can shorten the filename to not include the
+    full path to the filename.
+
+    :param filename: formats a filename for UI display. This will also convert
+                     the filename into unicode without failing.
+    :param shorten: this optionally shortens the filename to strip off the
+                    path that leads up to it.
+    """
+    if shorten:
+        filename = os.path.basename(filename)
+    return filename_to_ui(filename)
+
+
+def get_app_dir(app_name, roaming=True, force_posix=False):
+    r"""Returns the config folder for the application. The default behavior
+    is to return whatever is most appropriate for the operating system.
+
+    To give you an idea, for an app called ``"Foo Bar"``, something like
+    the following folders could be returned:
+
+    Mac OS X:
+      ``~/Library/Application Support/Foo Bar``
+    Mac OS X (POSIX):
+      ``~/.foo-bar``
+    Unix:
+      ``~/.config/foo-bar``
+    Unix (POSIX):
+      ``~/.foo-bar``
+    Win XP (roaming):
+      ``C:\Documents and Settings\<user>\Local Settings\Application Data\Foo Bar``
+    Win XP (not roaming):
+      ``C:\Documents and Settings\<user>\Application Data\Foo Bar``
+    Win 7 (roaming):
+      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+    Win 7 (not roaming):
+      ``C:\Users\<user>\AppData\Local\Foo Bar``
+
+    .. versionadded:: 2.0
+
+    :param app_name: the application name. This should be properly capitalized
+                     and can contain whitespace.
+    :param roaming: controls if the folder should be roaming or not on Windows.
+                    Has no effect otherwise.
+    :param force_posix: if this is set to `True` then on any POSIX system the
+                        folder will be stored in the home folder with a leading
+                        dot instead of the XDG config home or darwin's
+                        application support folder.
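+
+    A minimal usage sketch (the application name is hypothetical)::
+
+        import os
+        import click
+
+        cfg = os.path.join(click.get_app_dir("Foo Bar"), "config.ini")
+        # e.g. ~/.config/foo-bar/config.ini on Linux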
+ """ + if WIN: + key = "APPDATA" if roaming else "LOCALAPPDATA" + folder = os.environ.get(key) + if folder is None: + folder = os.path.expanduser("~") + return os.path.join(folder, app_name) + if force_posix: + return os.path.join(os.path.expanduser("~/.{}".format(_posixify(app_name)))) + if sys.platform == "darwin": + return os.path.join( + os.path.expanduser("~/Library/Application Support"), app_name + ) + return os.path.join( + os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), + _posixify(app_name), + ) + + +class PacifyFlushWrapper(object): + """This wrapper is used to catch and suppress BrokenPipeErrors resulting + from ``.flush()`` being called on broken pipe during the shutdown/final-GC + of the Python interpreter. Notably ``.flush()`` is always called on + ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any + other cleanup code, and the case where the underlying file is not a broken + pipe, all calls and attributes are proxied. + """ + + def __init__(self, wrapped): + self.wrapped = wrapped + + def flush(self): + try: + self.wrapped.flush() + except IOError as e: + import errno + + if e.errno != errno.EPIPE: + raise + + def __getattr__(self, attr): + return getattr(self.wrapped, attr) diff --git a/openpype/version.py b/openpype/version.py index 4a6131a26a..f98d4c1cf5 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.12-nightly.3" +__version__ = "3.17.3-nightly.1" diff --git a/poetry.lock b/poetry.lock index f71611cb6f..d074a0c3d9 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,10 +1,9 @@ -# This file is automatically @generated by Poetry and should not be changed by hand. +# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. [[package]] name = "acre" version = "1.0.0" description = "Lightweight cross-platform environment management Python package that makes it trivial to launch applications in their own configurable working environment." 
-category = "main" optional = false python-versions = ">=2.7" files = [] @@ -18,106 +17,105 @@ resolved_reference = "126f7a188cfe36718f707f42ebbc597e86aa86c3" [[package]] name = "aiohttp" -version = "3.8.3" +version = "3.8.4" description = "Async http client/server framework (asyncio)" -category = "main" optional = false python-versions = ">=3.6" files = [ - {file = "aiohttp-3.8.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ba71c9b4dcbb16212f334126cc3d8beb6af377f6703d9dc2d9fb3874fd667ee9"}, - {file = "aiohttp-3.8.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d24b8bb40d5c61ef2d9b6a8f4528c2f17f1c5d2d31fed62ec860f6006142e83e"}, - {file = "aiohttp-3.8.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f88df3a83cf9df566f171adba39d5bd52814ac0b94778d2448652fc77f9eb491"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b97decbb3372d4b69e4d4c8117f44632551c692bb1361b356a02b97b69e18a62"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:309aa21c1d54b8ef0723181d430347d7452daaff93e8e2363db8e75c72c2fb2d"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad5383a67514e8e76906a06741febd9126fc7c7ff0f599d6fcce3e82b80d026f"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20acae4f268317bb975671e375493dbdbc67cddb5f6c71eebdb85b34444ac46b"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:05a3c31c6d7cd08c149e50dc7aa2568317f5844acd745621983380597f027a18"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d6f76310355e9fae637c3162936e9504b4767d5c52ca268331e2756e54fd4ca5"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:256deb4b29fe5e47893fa32e1de2d73c3afe7407738bd3c63829874661d4822d"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:5c59fcd80b9049b49acd29bd3598cada4afc8d8d69bd4160cd613246912535d7"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:059a91e88f2c00fe40aed9031b3606c3f311414f86a90d696dd982e7aec48142"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:2feebbb6074cdbd1ac276dbd737b40e890a1361b3cc30b74ac2f5e24aab41f7b"}, - {file = "aiohttp-3.8.3-cp310-cp310-win32.whl", hash = "sha256:5bf651afd22d5f0c4be16cf39d0482ea494f5c88f03e75e5fef3a85177fecdeb"}, - {file = "aiohttp-3.8.3-cp310-cp310-win_amd64.whl", hash = "sha256:653acc3880459f82a65e27bd6526e47ddf19e643457d36a2250b85b41a564715"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:86fc24e58ecb32aee09f864cb11bb91bc4c1086615001647dbfc4dc8c32f4008"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:75e14eac916f024305db517e00a9252714fce0abcb10ad327fb6dcdc0d060f1d"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d1fde0f44029e02d02d3993ad55ce93ead9bb9b15c6b7ccd580f90bd7e3de476"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ab94426ddb1ecc6a0b601d832d5d9d421820989b8caa929114811369673235c"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89d2e02167fa95172c017732ed7725bc8523c598757f08d13c5acca308e1a061"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = 
"sha256:02f9a2c72fc95d59b881cf38a4b2be9381b9527f9d328771e90f72ac76f31ad8"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c7149272fb5834fc186328e2c1fa01dda3e1fa940ce18fded6d412e8f2cf76d"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:512bd5ab136b8dc0ffe3fdf2dfb0c4b4f49c8577f6cae55dca862cd37a4564e2"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7018ecc5fe97027214556afbc7c502fbd718d0740e87eb1217b17efd05b3d276"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:88c70ed9da9963d5496d38320160e8eb7e5f1886f9290475a881db12f351ab5d"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:da22885266bbfb3f78218dc40205fed2671909fbd0720aedba39b4515c038091"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:e65bc19919c910127c06759a63747ebe14f386cda573d95bcc62b427ca1afc73"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:08c78317e950e0762c2983f4dd58dc5e6c9ff75c8a0efeae299d363d439c8e34"}, - {file = "aiohttp-3.8.3-cp311-cp311-win32.whl", hash = "sha256:45d88b016c849d74ebc6f2b6e8bc17cabf26e7e40c0661ddd8fae4c00f015697"}, - {file = "aiohttp-3.8.3-cp311-cp311-win_amd64.whl", hash = "sha256:96372fc29471646b9b106ee918c8eeb4cca423fcbf9a34daa1b93767a88a2290"}, - {file = "aiohttp-3.8.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c971bf3786b5fad82ce5ad570dc6ee420f5b12527157929e830f51c55dc8af77"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff25f48fc8e623d95eca0670b8cc1469a83783c924a602e0fbd47363bb54aaca"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e381581b37db1db7597b62a2e6b8b57c3deec95d93b6d6407c5b61ddc98aca6d"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:db19d60d846283ee275d0416e2a23493f4e6b6028825b51290ac05afc87a6f97"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25892c92bee6d9449ffac82c2fe257f3a6f297792cdb18ad784737d61e7a9a85"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:398701865e7a9565d49189f6c90868efaca21be65c725fc87fc305906be915da"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:4a4fbc769ea9b6bd97f4ad0b430a6807f92f0e5eb020f1e42ece59f3ecfc4585"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:b29bfd650ed8e148f9c515474a6ef0ba1090b7a8faeee26b74a8ff3b33617502"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:1e56b9cafcd6531bab5d9b2e890bb4937f4165109fe98e2b98ef0dcfcb06ee9d"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:ec40170327d4a404b0d91855d41bfe1fe4b699222b2b93e3d833a27330a87a6d"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:2df5f139233060578d8c2c975128fb231a89ca0a462b35d4b5fcf7c501ebdbe1"}, - {file = "aiohttp-3.8.3-cp36-cp36m-win32.whl", hash = "sha256:f973157ffeab5459eefe7b97a804987876dd0a55570b8fa56b4e1954bf11329b"}, - {file = "aiohttp-3.8.3-cp36-cp36m-win_amd64.whl", hash = "sha256:437399385f2abcd634865705bdc180c8314124b98299d54fe1d4c8990f2f9494"}, - {file = "aiohttp-3.8.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = 
"sha256:09e28f572b21642128ef31f4e8372adb6888846f32fecb288c8b0457597ba61a"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f3553510abdbec67c043ca85727396ceed1272eef029b050677046d3387be8d"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e168a7560b7c61342ae0412997b069753f27ac4862ec7867eff74f0fe4ea2ad9"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:db4c979b0b3e0fa7e9e69ecd11b2b3174c6963cebadeecfb7ad24532ffcdd11a"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e164e0a98e92d06da343d17d4e9c4da4654f4a4588a20d6c73548a29f176abe2"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e8a78079d9a39ca9ca99a8b0ac2fdc0c4d25fc80c8a8a82e5c8211509c523363"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:21b30885a63c3f4ff5b77a5d6caf008b037cb521a5f33eab445dc566f6d092cc"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:4b0f30372cef3fdc262f33d06e7b411cd59058ce9174ef159ad938c4a34a89da"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:8135fa153a20d82ffb64f70a1b5c2738684afa197839b34cc3e3c72fa88d302c"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:ad61a9639792fd790523ba072c0555cd6be5a0baf03a49a5dd8cfcf20d56df48"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:978b046ca728073070e9abc074b6299ebf3501e8dee5e26efacb13cec2b2dea0"}, - {file = "aiohttp-3.8.3-cp37-cp37m-win32.whl", hash = "sha256:0d2c6d8c6872df4a6ec37d2ede71eff62395b9e337b4e18efd2177de883a5033"}, - {file = "aiohttp-3.8.3-cp37-cp37m-win_amd64.whl", hash = "sha256:21d69797eb951f155026651f7e9362877334508d39c2fc37bd04ff55b2007091"}, - {file = "aiohttp-3.8.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ca9af5f8f5812d475c5259393f52d712f6d5f0d7fdad9acdb1107dd9e3cb7eb"}, - {file = "aiohttp-3.8.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d90043c1882067f1bd26196d5d2db9aa6d268def3293ed5fb317e13c9413ea4"}, - {file = "aiohttp-3.8.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d737fc67b9a970f3234754974531dc9afeea11c70791dcb7db53b0cf81b79784"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebf909ea0a3fc9596e40d55d8000702a85e27fd578ff41a5500f68f20fd32e6c"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5835f258ca9f7c455493a57ee707b76d2d9634d84d5d7f62e77be984ea80b849"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da37dcfbf4b7f45d80ee386a5f81122501ec75672f475da34784196690762f4b"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87f44875f2804bc0511a69ce44a9595d5944837a62caecc8490bbdb0e18b1342"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:527b3b87b24844ea7865284aabfab08eb0faf599b385b03c2aa91fc6edd6e4b6"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5ba88df9aa5e2f806650fcbeedbe4f6e8736e92fc0e73b0400538fd25a4dd96"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:e7b8813be97cab8cb52b1375f41f8e6804f6507fe4660152e8ca5c48f0436017"}, - {file = 
"aiohttp-3.8.3-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:2dea10edfa1a54098703cb7acaa665c07b4e7568472a47f4e64e6319d3821ccf"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:713d22cd9643ba9025d33c4af43943c7a1eb8547729228de18d3e02e278472b6"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2d252771fc85e0cf8da0b823157962d70639e63cb9b578b1dec9868dd1f4f937"}, - {file = "aiohttp-3.8.3-cp38-cp38-win32.whl", hash = "sha256:66bd5f950344fb2b3dbdd421aaa4e84f4411a1a13fca3aeb2bcbe667f80c9f76"}, - {file = "aiohttp-3.8.3-cp38-cp38-win_amd64.whl", hash = "sha256:84b14f36e85295fe69c6b9789b51a0903b774046d5f7df538176516c3e422446"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:16c121ba0b1ec2b44b73e3a8a171c4f999b33929cd2397124a8c7fcfc8cd9e06"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8d6aaa4e7155afaf994d7924eb290abbe81a6905b303d8cb61310a2aba1c68ba"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:43046a319664a04b146f81b40e1545d4c8ac7b7dd04c47e40bf09f65f2437346"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:599418aaaf88a6d02a8c515e656f6faf3d10618d3dd95866eb4436520096c84b"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:92a2964319d359f494f16011e23434f6f8ef0434acd3cf154a6b7bec511e2fb7"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:73a4131962e6d91109bca6536416aa067cf6c4efb871975df734f8d2fd821b37"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:598adde339d2cf7d67beaccda3f2ce7c57b3b412702f29c946708f69cf8222aa"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:75880ed07be39beff1881d81e4a907cafb802f306efd6d2d15f2b3c69935f6fb"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a0239da9fbafd9ff82fd67c16704a7d1bccf0d107a300e790587ad05547681c8"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4e3a23ec214e95c9fe85a58470b660efe6534b83e6cbe38b3ed52b053d7cb6ad"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:47841407cc89a4b80b0c52276f3cc8138bbbfba4b179ee3acbd7d77ae33f7ac4"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:54d107c89a3ebcd13228278d68f1436d3f33f2dd2af5415e3feaeb1156e1a62c"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c37c5cce780349d4d51739ae682dec63573847a2a8dcb44381b174c3d9c8d403"}, - {file = "aiohttp-3.8.3-cp39-cp39-win32.whl", hash = "sha256:f178d2aadf0166be4df834c4953da2d7eef24719e8aec9a65289483eeea9d618"}, - {file = "aiohttp-3.8.3-cp39-cp39-win_amd64.whl", hash = "sha256:88e5be56c231981428f4f506c68b6a46fa25c4123a2e86d156c58a8369d31ab7"}, - {file = "aiohttp-3.8.3.tar.gz", hash = "sha256:3828fb41b7203176b82fe5d699e0d845435f2374750a44b480ea6b930f6be269"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:5ce45967538fb747370308d3145aa68a074bdecb4f3a300869590f725ced69c1"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b744c33b6f14ca26b7544e8d8aadff6b765a80ad6164fb1a430bbadd593dfb1a"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1a45865451439eb320784918617ba54b7a377e3501fb70402ab84d38c2cd891b"}, + {file = 
"aiohttp-3.8.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a86d42d7cba1cec432d47ab13b6637bee393a10f664c425ea7b305d1301ca1a3"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ee3c36df21b5714d49fc4580247947aa64bcbe2939d1b77b4c8dcb8f6c9faecc"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:176a64b24c0935869d5bbc4c96e82f89f643bcdf08ec947701b9dbb3c956b7dd"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c844fd628851c0bc309f3c801b3a3d58ce430b2ce5b359cd918a5a76d0b20cb5"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5393fb786a9e23e4799fec788e7e735de18052f83682ce2dfcabaf1c00c2c08e"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e4b09863aae0dc965c3ef36500d891a3ff495a2ea9ae9171e4519963c12ceefd"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:adfbc22e87365a6e564c804c58fc44ff7727deea782d175c33602737b7feadb6"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:147ae376f14b55f4f3c2b118b95be50a369b89b38a971e80a17c3fd623f280c9"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:eafb3e874816ebe2a92f5e155f17260034c8c341dad1df25672fb710627c6949"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c6cc15d58053c76eacac5fa9152d7d84b8d67b3fde92709195cb984cfb3475ea"}, + {file = "aiohttp-3.8.4-cp310-cp310-win32.whl", hash = "sha256:59f029a5f6e2d679296db7bee982bb3d20c088e52a2977e3175faf31d6fb75d1"}, + {file = "aiohttp-3.8.4-cp310-cp310-win_amd64.whl", hash = "sha256:fe7ba4a51f33ab275515f66b0a236bcde4fb5561498fe8f898d4e549b2e4509f"}, + {file = "aiohttp-3.8.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3d8ef1a630519a26d6760bc695842579cb09e373c5f227a21b67dc3eb16cfea4"}, + {file = "aiohttp-3.8.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5b3f2e06a512e94722886c0827bee9807c86a9f698fac6b3aee841fab49bbfb4"}, + {file = "aiohttp-3.8.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3a80464982d41b1fbfe3154e440ba4904b71c1a53e9cd584098cd41efdb188ef"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8b631e26df63e52f7cce0cce6507b7a7f1bc9b0c501fcde69742130b32e8782f"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3f43255086fe25e36fd5ed8f2ee47477408a73ef00e804cb2b5cba4bf2ac7f5e"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4d347a172f866cd1d93126d9b239fcbe682acb39b48ee0873c73c933dd23bd0f"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a3fec6a4cb5551721cdd70473eb009d90935b4063acc5f40905d40ecfea23e05"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:80a37fe8f7c1e6ce8f2d9c411676e4bc633a8462844e38f46156d07a7d401654"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d1e6a862b76f34395a985b3cd39a0d949ca80a70b6ebdea37d3ab39ceea6698a"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cd468460eefef601ece4428d3cf4562459157c0f6523db89365202c31b6daebb"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = 
"sha256:618c901dd3aad4ace71dfa0f5e82e88b46ef57e3239fc7027773cb6d4ed53531"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:652b1bff4f15f6287550b4670546a2947f2a4575b6c6dff7760eafb22eacbf0b"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80575ba9377c5171407a06d0196b2310b679dc752d02a1fcaa2bc20b235dbf24"}, + {file = "aiohttp-3.8.4-cp311-cp311-win32.whl", hash = "sha256:bbcf1a76cf6f6dacf2c7f4d2ebd411438c275faa1dc0c68e46eb84eebd05dd7d"}, + {file = "aiohttp-3.8.4-cp311-cp311-win_amd64.whl", hash = "sha256:6e74dd54f7239fcffe07913ff8b964e28b712f09846e20de78676ce2a3dc0bfc"}, + {file = "aiohttp-3.8.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:880e15bb6dad90549b43f796b391cfffd7af373f4646784795e20d92606b7a51"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb96fa6b56bb536c42d6a4a87dfca570ff8e52de2d63cabebfd6fb67049c34b6"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4a6cadebe132e90cefa77e45f2d2f1a4b2ce5c6b1bfc1656c1ddafcfe4ba8131"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f352b62b45dff37b55ddd7b9c0c8672c4dd2eb9c0f9c11d395075a84e2c40f75"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ab43061a0c81198d88f39aaf90dae9a7744620978f7ef3e3708339b8ed2ef01"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c9cb1565a7ad52e096a6988e2ee0397f72fe056dadf75d17fa6b5aebaea05622"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:1b3ea7edd2d24538959c1c1abf97c744d879d4e541d38305f9bd7d9b10c9ec41"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7c7837fe8037e96b6dd5cfcf47263c1620a9d332a87ec06a6ca4564e56bd0f36"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:3b90467ebc3d9fa5b0f9b6489dfb2c304a1db7b9946fa92aa76a831b9d587e99"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:cab9401de3ea52b4b4c6971db5fb5c999bd4260898af972bf23de1c6b5dd9d71"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d1f9282c5f2b5e241034a009779e7b2a1aa045f667ff521e7948ea9b56e0c5ff"}, + {file = "aiohttp-3.8.4-cp36-cp36m-win32.whl", hash = "sha256:5e14f25765a578a0a634d5f0cd1e2c3f53964553a00347998dfdf96b8137f777"}, + {file = "aiohttp-3.8.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4c745b109057e7e5f1848c689ee4fb3a016c8d4d92da52b312f8a509f83aa05e"}, + {file = "aiohttp-3.8.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:aede4df4eeb926c8fa70de46c340a1bc2c6079e1c40ccf7b0eae1313ffd33519"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ddaae3f3d32fc2cb4c53fab020b69a05c8ab1f02e0e59665c6f7a0d3a5be54f"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4eb3b82ca349cf6fadcdc7abcc8b3a50ab74a62e9113ab7a8ebc268aad35bb9"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9bcb89336efa095ea21b30f9e686763f2be4478f1b0a616969551982c4ee4c3b"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c08e8ed6fa3d477e501ec9db169bfac8140e830aa372d77e4a43084d8dd91ab"}, + {file = 
"aiohttp-3.8.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c6cd05ea06daca6ad6a4ca3ba7fe7dc5b5de063ff4daec6170ec0f9979f6c332"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7a00a9ed8d6e725b55ef98b1b35c88013245f35f68b1b12c5cd4100dddac333"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:de04b491d0e5007ee1b63a309956eaed959a49f5bb4e84b26c8f5d49de140fa9"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:40653609b3bf50611356e6b6554e3a331f6879fa7116f3959b20e3528783e699"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dbf3a08a06b3f433013c143ebd72c15cac33d2914b8ea4bea7ac2c23578815d6"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:854f422ac44af92bfe172d8e73229c270dc09b96535e8a548f99c84f82dde241"}, + {file = "aiohttp-3.8.4-cp37-cp37m-win32.whl", hash = "sha256:aeb29c84bb53a84b1a81c6c09d24cf33bb8432cc5c39979021cc0f98c1292a1a"}, + {file = "aiohttp-3.8.4-cp37-cp37m-win_amd64.whl", hash = "sha256:db3fc6120bce9f446d13b1b834ea5b15341ca9ff3f335e4a951a6ead31105480"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fabb87dd8850ef0f7fe2b366d44b77d7e6fa2ea87861ab3844da99291e81e60f"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:91f6d540163f90bbaef9387e65f18f73ffd7c79f5225ac3d3f61df7b0d01ad15"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d265f09a75a79a788237d7f9054f929ced2e69eb0bb79de3798c468d8a90f945"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d89efa095ca7d442a6d0cbc755f9e08190ba40069b235c9886a8763b03785da"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4dac314662f4e2aa5009977b652d9b8db7121b46c38f2073bfeed9f4049732cd"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fe11310ae1e4cd560035598c3f29d86cef39a83d244c7466f95c27ae04850f10"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6ddb2a2026c3f6a68c3998a6c47ab6795e4127315d2e35a09997da21865757f8"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e75b89ac3bd27d2d043b234aa7b734c38ba1b0e43f07787130a0ecac1e12228a"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6e601588f2b502c93c30cd5a45bfc665faaf37bbe835b7cfd461753068232074"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a5d794d1ae64e7753e405ba58e08fcfa73e3fad93ef9b7e31112ef3c9a0efb52"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:a1f4689c9a1462f3df0a1f7e797791cd6b124ddbee2b570d34e7f38ade0e2c71"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:3032dcb1c35bc330134a5b8a5d4f68c1a87252dfc6e1262c65a7e30e62298275"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8189c56eb0ddbb95bfadb8f60ea1b22fcfa659396ea36f6adcc521213cd7b44d"}, + {file = "aiohttp-3.8.4-cp38-cp38-win32.whl", hash = "sha256:33587f26dcee66efb2fff3c177547bd0449ab7edf1b73a7f5dea1e38609a0c54"}, + {file = "aiohttp-3.8.4-cp38-cp38-win_amd64.whl", hash = "sha256:e595432ac259af2d4630008bf638873d69346372d38255774c0e286951e8b79f"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_10_9_universal2.whl", hash = 
"sha256:5a7bdf9e57126dc345b683c3632e8ba317c31d2a41acd5800c10640387d193ed"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:22f6eab15b6db242499a16de87939a342f5a950ad0abaf1532038e2ce7d31567"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7235604476a76ef249bd64cb8274ed24ccf6995c4a8b51a237005ee7a57e8643"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea9eb976ffdd79d0e893869cfe179a8f60f152d42cb64622fca418cd9b18dc2a"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:92c0cea74a2a81c4c76b62ea1cac163ecb20fb3ba3a75c909b9fa71b4ad493cf"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:493f5bc2f8307286b7799c6d899d388bbaa7dfa6c4caf4f97ef7521b9cb13719"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a63f03189a6fa7c900226e3ef5ba4d3bd047e18f445e69adbd65af433add5a2"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:10c8cefcff98fd9168cdd86c4da8b84baaa90bf2da2269c6161984e6737bf23e"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:bca5f24726e2919de94f047739d0a4fc01372801a3672708260546aa2601bf57"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:03baa76b730e4e15a45f81dfe29a8d910314143414e528737f8589ec60cf7391"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:8c29c77cc57e40f84acef9bfb904373a4e89a4e8b74e71aa8075c021ec9078c2"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:03543dcf98a6619254b409be2d22b51f21ec66272be4ebda7b04e6412e4b2e14"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:17b79c2963db82086229012cff93ea55196ed31f6493bb1ccd2c62f1724324e4"}, + {file = "aiohttp-3.8.4-cp39-cp39-win32.whl", hash = "sha256:34ce9f93a4a68d1272d26030655dd1b58ff727b3ed2a33d80ec433561b03d67a"}, + {file = "aiohttp-3.8.4-cp39-cp39-win_amd64.whl", hash = "sha256:41a86a69bb63bb2fc3dc9ad5ea9f10f1c9c8e282b471931be0268ddd09430b04"}, + {file = "aiohttp-3.8.4.tar.gz", hash = "sha256:bf2e1a9162c1e441bf805a1fd166e249d574ca04e03b34f97e2928769e91ab5c"}, ] [package.dependencies] aiosignal = ">=1.1.2" async-timeout = ">=4.0.0a3,<5.0" attrs = ">=17.3.0" -charset-normalizer = ">=2.0,<3.0" +charset-normalizer = ">=2.0,<4.0" frozenlist = ">=1.1.1" multidict = ">=4.5,<7.0" yarl = ">=1.0,<2.0" @@ -129,7 +127,6 @@ speedups = ["Brotli", "aiodns", "cchardet"] name = "aiohttp-json-rpc" version = "0.13.3" description = "Implementation JSON-RPC 2.0 server and client using aiohttp on top of websockets transport" -category = "main" optional = false python-versions = ">=3.5" files = [ @@ -144,7 +141,6 @@ aiohttp = ">=3,<4" name = "aiohttp-middlewares" version = "2.2.0" description = "Collection of useful middlewares for aiohttp applications." 
-category = "main" optional = false python-versions = ">=3.7,<4.0" files = [ @@ -161,7 +157,6 @@ yarl = ">=1.5.1,<2.0.0" name = "aiosignal" version = "1.3.1" description = "aiosignal: a list of registered asynchronous callbacks" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -176,7 +171,6 @@ frozenlist = ">=1.1.0" name = "alabaster" version = "0.7.13" description = "A configurable sidebar-enabled Sphinx theme" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -188,7 +182,6 @@ files = [ name = "ansicon" version = "1.89.0" description = "Python wrapper for loading Jason Hood's ANSICON" -category = "main" optional = false python-versions = "*" files = [ @@ -200,7 +193,6 @@ files = [ name = "appdirs" version = "1.4.4" description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" files = [] @@ -210,13 +202,12 @@ develop = false type = "git" url = "https://github.com/ActiveState/appdirs.git" reference = "master" -resolved_reference = "211708144ddcbba1f02e26a43efec9aef57bc9fc" +resolved_reference = "8734277956c1df3b85385e6b308e954910533884" [[package]] name = "arrow" version = "0.17.0" description = "Better dates & times for Python" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" files = [ @@ -229,26 +220,24 @@ python-dateutil = ">=2.7.0" [[package]] name = "astroid" -version = "2.13.2" +version = "2.15.5" description = "An abstract syntax tree for Python with inference support." -category = "dev" optional = false python-versions = ">=3.7.2" files = [ - {file = "astroid-2.13.2-py3-none-any.whl", hash = "sha256:8f6a8d40c4ad161d6fc419545ae4b2f275ed86d1c989c97825772120842ee0d2"}, - {file = "astroid-2.13.2.tar.gz", hash = "sha256:3bc7834720e1a24ca797fd785d77efb14f7a28ee8e635ef040b6e2d80ccb3303"}, + {file = "astroid-2.15.5-py3-none-any.whl", hash = "sha256:078e5212f9885fa85fbb0cf0101978a336190aadea6e13305409d099f71b2324"}, + {file = "astroid-2.15.5.tar.gz", hash = "sha256:1039262575027b441137ab4a62a793a9b43defb42c32d5670f38686207cd780f"}, ] [package.dependencies] lazy-object-proxy = ">=1.4.0" -typing-extensions = ">=4.0.0" +typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.11\""} wrapt = {version = ">=1.11,<2", markers = "python_version < \"3.11\""} [[package]] name = "async-timeout" version = "4.0.2" description = "Timeout context manager for asyncio programs" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -260,7 +249,6 @@ files = [ name = "atomicwrites" version = "1.4.1" description = "Atomic file writes." 
-category = "dev" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -269,33 +257,31 @@ files = [ [[package]] name = "attrs" -version = "22.2.0" +version = "23.1.0" description = "Classes Without Boilerplate" -category = "main" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"}, - {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"}, + {file = "attrs-23.1.0-py3-none-any.whl", hash = "sha256:1f28b4522cdc2fb4256ac1a020c78acf9cba2c6b461ccd2c126f3aa8e8335d04"}, + {file = "attrs-23.1.0.tar.gz", hash = "sha256:6279836d581513a26f1bf235f9acd333bc9115683f14f7e8fae46c98fc50e015"}, ] [package.extras] -cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"] -dev = ["attrs[docs,tests]"] -docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"] -tests = ["attrs[tests-no-zope]", "zope.interface"] -tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"] +cov = ["attrs[tests]", "coverage[toml] (>=5.3)"] +dev = ["attrs[docs,tests]", "pre-commit"] +docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"] +tests = ["attrs[tests-no-zope]", "zope-interface"] +tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] [[package]] name = "autopep8" -version = "2.0.1" +version = "2.0.2" description = "A tool that automatically formats Python code to conform to the PEP 8 style guide" -category = "dev" optional = false python-versions = ">=3.6" files = [ - {file = "autopep8-2.0.1-py2.py3-none-any.whl", hash = "sha256:be5bc98c33515b67475420b7b1feafc8d32c1a69862498eda4983b45bffd2687"}, - {file = "autopep8-2.0.1.tar.gz", hash = "sha256:d27a8929d8dcd21c0f4b3859d2d07c6c25273727b98afc984c039df0f0d86566"}, + {file = "autopep8-2.0.2-py2.py3-none-any.whl", hash = "sha256:86e9303b5e5c8160872b2f5ef611161b2893e9bfe8ccc7e2f76385947d57a2f1"}, + {file = "autopep8-2.0.2.tar.gz", hash = "sha256:f9849cdd62108cb739dbcdbfb7fdcc9a30d1b63c4cc3e1c1f893b5360941b61c"}, ] [package.dependencies] @@ -304,24 +290,19 @@ tomli = {version = "*", markers = "python_version < \"3.11\""} [[package]] name = "babel" -version = "2.11.0" +version = "2.12.1" description = "Internationalization utilities" -category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"}, - {file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"}, + {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"}, + {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"}, ] -[package.dependencies] -pytz = ">=2015.7" - [[package]] name = "bcrypt" version = "4.0.1" description = "Modern password hashing for your software and your servers" -category 
= "main" optional = false python-versions = ">=3.6" files = [ @@ -352,16 +333,31 @@ files = [ tests = ["pytest (>=3.2.1,!=3.3.0)"] typecheck = ["mypy"] +[[package]] +name = "bidict" +version = "0.22.1" +description = "The bidirectional mapping library for Python." +optional = false +python-versions = ">=3.7" +files = [ + {file = "bidict-0.22.1-py3-none-any.whl", hash = "sha256:6ef212238eb884b664f28da76f33f1d28b260f665fc737b413b287d5487d1e7b"}, + {file = "bidict-0.22.1.tar.gz", hash = "sha256:1e0f7f74e4860e6d0943a05d4134c63a2fad86f3d4732fb265bd79e4e856d81d"}, +] + +[package.extras] +docs = ["furo", "sphinx", "sphinx-copybutton"] +lint = ["pre-commit"] +test = ["hypothesis", "pytest", "pytest-benchmark[histogram]", "pytest-cov", "pytest-xdist", "sortedcollections", "sortedcontainers", "sphinx"] + [[package]] name = "blessed" -version = "1.19.1" +version = "1.20.0" description = "Easy, practical library for making terminal apps, by providing an elegant, well-documented interface to Colors, Keyboard input, and screen Positioning capabilities." -category = "main" optional = false python-versions = ">=2.7" files = [ - {file = "blessed-1.19.1-py2.py3-none-any.whl", hash = "sha256:63b8554ae2e0e7f43749b6715c734cc8f3883010a809bf16790102563e6cf25b"}, - {file = "blessed-1.19.1.tar.gz", hash = "sha256:9a0d099695bf621d4680dd6c73f6ad547f6a3442fbdbe80c4b1daa1edbc492fc"}, + {file = "blessed-1.20.0-py2.py3-none-any.whl", hash = "sha256:0c542922586a265e699188e52d5f5ac5ec0dd517e5a1041d90d2bbf23f906058"}, + {file = "blessed-1.20.0.tar.gz", hash = "sha256:2cdd67f8746e048f00df47a2880f4d6acbcdb399031b604e34ba8f71d5787680"}, ] [package.dependencies] @@ -371,33 +367,30 @@ wcwidth = ">=0.1.4" [[package]] name = "cachetools" -version = "5.2.1" +version = "5.3.1" description = "Extensible memoizing collections and decorators" -category = "main" optional = false -python-versions = "~=3.7" +python-versions = ">=3.7" files = [ - {file = "cachetools-5.2.1-py3-none-any.whl", hash = "sha256:8462eebf3a6c15d25430a8c27c56ac61340b2ecf60c9ce57afc2b97e450e47da"}, - {file = "cachetools-5.2.1.tar.gz", hash = "sha256:5991bc0e08a1319bb618d3195ca5b6bc76646a49c21d55962977197b301cc1fe"}, + {file = "cachetools-5.3.1-py3-none-any.whl", hash = "sha256:95ef631eeaea14ba2e36f06437f36463aac3a096799e876ee55e5cdccb102590"}, + {file = "cachetools-5.3.1.tar.gz", hash = "sha256:dce83f2d9b4e1f732a8cd44af8e8fab2dbe46201467fc98b3ef8f269092bf62b"}, ] [[package]] name = "certifi" -version = "2022.12.7" +version = "2023.7.22" description = "Python package for providing Mozilla's CA Bundle." -category = "main" optional = false python-versions = ">=3.6" files = [ - {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"}, - {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"}, + {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"}, + {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"}, ] [[package]] name = "cffi" version = "1.15.1" description = "Foreign Function Interface for Python calling C code." -category = "main" optional = false python-versions = "*" files = [ @@ -474,7 +467,6 @@ pycparser = "*" name = "cfgv" version = "3.3.1" description = "Validate configuration and produce human readable error messages." 
-category = "dev" optional = false python-versions = ">=3.6.1" files = [ @@ -484,24 +476,92 @@ files = [ [[package]] name = "charset-normalizer" -version = "2.1.1" +version = "3.1.0" description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." -category = "main" optional = false -python-versions = ">=3.6.0" +python-versions = ">=3.7.0" files = [ - {file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"}, - {file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"}, + {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = 
"sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"}, + {file = 
"charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"}, + {file = 
"charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"}, + {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"}, ] -[package.extras] -unicode-backport = ["unicodedata2"] - [[package]] name = "click" version = "7.1.2" description = "Composable command line interface toolkit" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" files = [ @@ -513,7 +573,6 @@ files = [ name = "clique" version = "1.6.1" description = "Manage collections with common numerical component" -category = "main" optional = false python-versions = ">=2.7, <4.0" files = [ @@ -530,7 +589,6 @@ test = ["pytest (>=2.3.5,<5)", "pytest-cov (>=2,<3)", "pytest-runner (>=2.7,<3)" name = "colorama" version = "0.4.6" description = "Cross-platform colored terminal text." 
-category = "dev" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" files = [ @@ -542,7 +600,6 @@ files = [ name = "commonmark" version = "0.9.1" description = "Python parser for the CommonMark Markdown spec" -category = "dev" optional = false python-versions = "*" files = [ @@ -557,7 +614,6 @@ test = ["flake8 (==3.7.8)", "hypothesis (==3.55.3)"] name = "coolname" version = "2.2.0" description = "Random name and slug generator" -category = "main" optional = false python-versions = "*" files = [ @@ -567,63 +623,71 @@ files = [ [[package]] name = "coverage" -version = "7.0.5" +version = "7.2.7" description = "Code coverage measurement for Python" -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "coverage-7.0.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2a7f23bbaeb2a87f90f607730b45564076d870f1fb07b9318d0c21f36871932b"}, - {file = "coverage-7.0.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c18d47f314b950dbf24a41787ced1474e01ca816011925976d90a88b27c22b89"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef14d75d86f104f03dea66c13188487151760ef25dd6b2dbd541885185f05f40"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66e50680e888840c0995f2ad766e726ce71ca682e3c5f4eee82272c7671d38a2"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9fed35ca8c6e946e877893bbac022e8563b94404a605af1d1e6accc7eb73289"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d8d04e755934195bdc1db45ba9e040b8d20d046d04d6d77e71b3b34a8cc002d0"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e109f1c9a3ece676597831874126555997c48f62bddbcace6ed17be3e372de8"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0a1890fca2962c4f1ad16551d660b46ea77291fba2cc21c024cd527b9d9c8809"}, - {file = "coverage-7.0.5-cp310-cp310-win32.whl", hash = "sha256:be9fcf32c010da0ba40bf4ee01889d6c737658f4ddff160bd7eb9cac8f094b21"}, - {file = "coverage-7.0.5-cp310-cp310-win_amd64.whl", hash = "sha256:cbfcba14a3225b055a28b3199c3d81cd0ab37d2353ffd7f6fd64844cebab31ad"}, - {file = "coverage-7.0.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:30b5fec1d34cc932c1bc04017b538ce16bf84e239378b8f75220478645d11fca"}, - {file = "coverage-7.0.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1caed2367b32cc80a2b7f58a9f46658218a19c6cfe5bc234021966dc3daa01f0"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d254666d29540a72d17cc0175746cfb03d5123db33e67d1020e42dae611dc196"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19245c249aa711d954623d94f23cc94c0fd65865661f20b7781210cb97c471c0"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b05ed4b35bf6ee790832f68932baf1f00caa32283d66cc4d455c9e9d115aafc"}, - {file = "coverage-7.0.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:29de916ba1099ba2aab76aca101580006adfac5646de9b7c010a0f13867cba45"}, - {file = "coverage-7.0.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:e057e74e53db78122a3979f908973e171909a58ac20df05c33998d52e6d35757"}, - {file = 
"coverage-7.0.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:411d4ff9d041be08fdfc02adf62e89c735b9468f6d8f6427f8a14b6bb0a85095"}, - {file = "coverage-7.0.5-cp311-cp311-win32.whl", hash = "sha256:52ab14b9e09ce052237dfe12d6892dd39b0401690856bcfe75d5baba4bfe2831"}, - {file = "coverage-7.0.5-cp311-cp311-win_amd64.whl", hash = "sha256:1f66862d3a41674ebd8d1a7b6f5387fe5ce353f8719040a986551a545d7d83ea"}, - {file = "coverage-7.0.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69522b168a6b64edf0c33ba53eac491c0a8f5cc94fa4337f9c6f4c8f2f5296c"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:436e103950d05b7d7f55e39beeb4d5be298ca3e119e0589c0227e6d0b01ee8c7"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8c56bec53d6e3154eaff6ea941226e7bd7cc0d99f9b3756c2520fc7a94e6d96"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a38362528a9115a4e276e65eeabf67dcfaf57698e17ae388599568a78dcb029"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:f67472c09a0c7486e27f3275f617c964d25e35727af952869dd496b9b5b7f6a3"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:220e3fa77d14c8a507b2d951e463b57a1f7810a6443a26f9b7591ef39047b1b2"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ecb0f73954892f98611e183f50acdc9e21a4653f294dfbe079da73c6378a6f47"}, - {file = "coverage-7.0.5-cp37-cp37m-win32.whl", hash = "sha256:d8f3e2e0a1d6777e58e834fd5a04657f66affa615dae61dd67c35d1568c38882"}, - {file = "coverage-7.0.5-cp37-cp37m-win_amd64.whl", hash = "sha256:9e662e6fc4f513b79da5d10a23edd2b87685815b337b1a30cd11307a6679148d"}, - {file = "coverage-7.0.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:790e4433962c9f454e213b21b0fd4b42310ade9c077e8edcb5113db0818450cb"}, - {file = "coverage-7.0.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:49640bda9bda35b057b0e65b7c43ba706fa2335c9a9896652aebe0fa399e80e6"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d66187792bfe56f8c18ba986a0e4ae44856b1c645336bd2c776e3386da91e1dd"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:276f4cd0001cd83b00817c8db76730938b1ee40f4993b6a905f40a7278103b3a"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95304068686545aa368b35dfda1cdfbbdbe2f6fe43de4a2e9baa8ebd71be46e2"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:17e01dd8666c445025c29684d4aabf5a90dc6ef1ab25328aa52bedaa95b65ad7"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea76dbcad0b7b0deb265d8c36e0801abcddf6cc1395940a24e3595288b405ca0"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:50a6adc2be8edd7ee67d1abc3cd20678987c7b9d79cd265de55941e3d0d56499"}, - {file = "coverage-7.0.5-cp38-cp38-win32.whl", hash = "sha256:e4ce984133b888cc3a46867c8b4372c7dee9cee300335e2925e197bcd45b9e16"}, - {file = "coverage-7.0.5-cp38-cp38-win_amd64.whl", hash = "sha256:4a950f83fd3f9bca23b77442f3a2b2ea4ac900944d8af9993743774c4fdc57af"}, - {file = "coverage-7.0.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3c2155943896ac78b9b0fd910fb381186d0c345911f5333ee46ac44c8f0e43ab"}, - 
{file = "coverage-7.0.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:54f7e9705e14b2c9f6abdeb127c390f679f6dbe64ba732788d3015f7f76ef637"}, - {file = "coverage-7.0.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ee30375b409d9a7ea0f30c50645d436b6f5dfee254edffd27e45a980ad2c7f4"}, - {file = "coverage-7.0.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b78729038abea6a5df0d2708dce21e82073463b2d79d10884d7d591e0f385ded"}, - {file = "coverage-7.0.5-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13250b1f0bd023e0c9f11838bdeb60214dd5b6aaf8e8d2f110c7e232a1bff83b"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2c407b1950b2d2ffa091f4e225ca19a66a9bd81222f27c56bd12658fc5ca1209"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c76a3075e96b9c9ff00df8b5f7f560f5634dffd1658bafb79eb2682867e94f78"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:f26648e1b3b03b6022b48a9b910d0ae209e2d51f50441db5dce5b530fad6d9b1"}, - {file = "coverage-7.0.5-cp39-cp39-win32.whl", hash = "sha256:ba3027deb7abf02859aca49c865ece538aee56dcb4871b4cced23ba4d5088904"}, - {file = "coverage-7.0.5-cp39-cp39-win_amd64.whl", hash = "sha256:949844af60ee96a376aac1ded2a27e134b8c8d35cc006a52903fc06c24a3296f"}, - {file = "coverage-7.0.5-pp37.pp38.pp39-none-any.whl", hash = "sha256:b9727ac4f5cf2cbf87880a63870b5b9730a8ae3a4a360241a0fdaa2f71240ff0"}, - {file = "coverage-7.0.5.tar.gz", hash = "sha256:051afcbd6d2ac39298d62d340f94dbb6a1f31de06dfaf6fcef7b759dd3860c45"}, + {file = "coverage-7.2.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d39b5b4f2a66ccae8b7263ac3c8170994b65266797fb96cbbfd3fb5b23921db8"}, + {file = "coverage-7.2.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6d040ef7c9859bb11dfeb056ff5b3872436e3b5e401817d87a31e1750b9ae2fb"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba90a9563ba44a72fda2e85302c3abc71c5589cea608ca16c22b9804262aaeb6"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e7d9405291c6928619403db1d10bd07888888ec1abcbd9748fdaa971d7d661b2"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31563e97dae5598556600466ad9beea39fb04e0229e61c12eaa206e0aa202063"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ebba1cd308ef115925421d3e6a586e655ca5a77b5bf41e02eb0e4562a111f2d1"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:cb017fd1b2603ef59e374ba2063f593abe0fc45f2ad9abdde5b4d83bd922a353"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62a5c7dad11015c66fbb9d881bc4caa5b12f16292f857842d9d1871595f4495"}, + {file = "coverage-7.2.7-cp310-cp310-win32.whl", hash = "sha256:ee57190f24fba796e36bb6d3aa8a8783c643d8fa9760c89f7a98ab5455fbf818"}, + {file = "coverage-7.2.7-cp310-cp310-win_amd64.whl", hash = "sha256:f75f7168ab25dd93110c8a8117a22450c19976afbc44234cbf71481094c1b850"}, + {file = "coverage-7.2.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:06a9a2be0b5b576c3f18f1a241f0473575c4a26021b52b2a85263a00f034d51f"}, + {file = "coverage-7.2.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5baa06420f837184130752b7c5ea0808762083bf3487b5038d68b012e5937dbe"}, 
+ {file = "coverage-7.2.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdec9e8cbf13a5bf63290fc6013d216a4c7232efb51548594ca3631a7f13c3a3"}, + {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:52edc1a60c0d34afa421c9c37078817b2e67a392cab17d97283b64c5833f427f"}, + {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63426706118b7f5cf6bb6c895dc215d8a418d5952544042c8a2d9fe87fcf09cb"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:afb17f84d56068a7c29f5fa37bfd38d5aba69e3304af08ee94da8ed5b0865833"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:48c19d2159d433ccc99e729ceae7d5293fbffa0bdb94952d3579983d1c8c9d97"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0e1f928eaf5469c11e886fe0885ad2bf1ec606434e79842a879277895a50942a"}, + {file = "coverage-7.2.7-cp311-cp311-win32.whl", hash = "sha256:33d6d3ea29d5b3a1a632b3c4e4f4ecae24ef170b0b9ee493883f2df10039959a"}, + {file = "coverage-7.2.7-cp311-cp311-win_amd64.whl", hash = "sha256:5b7540161790b2f28143191f5f8ec02fb132660ff175b7747b95dcb77ac26562"}, + {file = "coverage-7.2.7-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f2f67fe12b22cd130d34d0ef79206061bfb5eda52feb6ce0dba0644e20a03cf4"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a342242fe22407f3c17f4b499276a02b01e80f861f1682ad1d95b04018e0c0d4"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:171717c7cb6b453aebac9a2ef603699da237f341b38eebfee9be75d27dc38e01"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49969a9f7ffa086d973d91cec8d2e31080436ef0fb4a359cae927e742abfaaa6"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b46517c02ccd08092f4fa99f24c3b83d8f92f739b4657b0f146246a0ca6a831d"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a3d33a6b3eae87ceaefa91ffdc130b5e8536182cd6dfdbfc1aa56b46ff8c86de"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:976b9c42fb2a43ebf304fa7d4a310e5f16cc99992f33eced91ef6f908bd8f33d"}, + {file = "coverage-7.2.7-cp312-cp312-win32.whl", hash = "sha256:8de8bb0e5ad103888d65abef8bca41ab93721647590a3f740100cd65c3b00511"}, + {file = "coverage-7.2.7-cp312-cp312-win_amd64.whl", hash = "sha256:9e31cb64d7de6b6f09702bb27c02d1904b3aebfca610c12772452c4e6c21a0d3"}, + {file = "coverage-7.2.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:58c2ccc2f00ecb51253cbe5d8d7122a34590fac9646a960d1430d5b15321d95f"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d22656368f0e6189e24722214ed8d66b8022db19d182927b9a248a2a8a2f67eb"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a895fcc7b15c3fc72beb43cdcbdf0ddb7d2ebc959edac9cef390b0d14f39f8a9"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84606b74eb7de6ff581a7915e2dab7a28a0517fbe1c9239eb227e1354064dcd"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = 
"sha256:0a5f9e1dbd7fbe30196578ca36f3fba75376fb99888c395c5880b355e2875f8a"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:419bfd2caae268623dd469eff96d510a920c90928b60f2073d79f8fe2bbc5959"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2aee274c46590717f38ae5e4650988d1af340fe06167546cc32fe2f58ed05b02"}, + {file = "coverage-7.2.7-cp37-cp37m-win32.whl", hash = "sha256:61b9a528fb348373c433e8966535074b802c7a5d7f23c4f421e6c6e2f1697a6f"}, + {file = "coverage-7.2.7-cp37-cp37m-win_amd64.whl", hash = "sha256:b1c546aca0ca4d028901d825015dc8e4d56aac4b541877690eb76490f1dc8ed0"}, + {file = "coverage-7.2.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:54b896376ab563bd38453cecb813c295cf347cf5906e8b41d340b0321a5433e5"}, + {file = "coverage-7.2.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3d376df58cc111dc8e21e3b6e24606b5bb5dee6024f46a5abca99124b2229ef5"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e330fc79bd7207e46c7d7fd2bb4af2963f5f635703925543a70b99574b0fea9"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e9d683426464e4a252bf70c3498756055016f99ddaec3774bf368e76bbe02b6"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d13c64ee2d33eccf7437961b6ea7ad8673e2be040b4f7fd4fd4d4d28d9ccb1e"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b7aa5f8a41217360e600da646004f878250a0d6738bcdc11a0a39928d7dc2050"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8fa03bce9bfbeeef9f3b160a8bed39a221d82308b4152b27d82d8daa7041fee5"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:245167dd26180ab4c91d5e1496a30be4cd721a5cf2abf52974f965f10f11419f"}, + {file = "coverage-7.2.7-cp38-cp38-win32.whl", hash = "sha256:d2c2db7fd82e9b72937969bceac4d6ca89660db0a0967614ce2481e81a0b771e"}, + {file = "coverage-7.2.7-cp38-cp38-win_amd64.whl", hash = "sha256:2e07b54284e381531c87f785f613b833569c14ecacdcb85d56b25c4622c16c3c"}, + {file = "coverage-7.2.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:537891ae8ce59ef63d0123f7ac9e2ae0fc8b72c7ccbe5296fec45fd68967b6c9"}, + {file = "coverage-7.2.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:06fb182e69f33f6cd1d39a6c597294cff3143554b64b9825d1dc69d18cc2fff2"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:201e7389591af40950a6480bd9edfa8ed04346ff80002cec1a66cac4549c1ad7"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f6951407391b639504e3b3be51b7ba5f3528adbf1a8ac3302b687ecababf929e"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f48351d66575f535669306aa7d6d6f71bc43372473b54a832222803eb956fd1"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b29019c76039dc3c0fd815c41392a044ce555d9bcdd38b0fb60fb4cd8e475ba9"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:81c13a1fc7468c40f13420732805a4c38a105d89848b7c10af65a90beff25250"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:975d70ab7e3c80a3fe86001d8751f6778905ec723f5b110aed1e450da9d4b7f2"}, + {file = "coverage-7.2.7-cp39-cp39-win32.whl", hash 
= "sha256:7ee7d9d4822c8acc74a5e26c50604dff824710bc8de424904c0982e25c39c6cb"}, + {file = "coverage-7.2.7-cp39-cp39-win_amd64.whl", hash = "sha256:eb393e5ebc85245347950143969b241d08b52b88a3dc39479822e073a1a8eb27"}, + {file = "coverage-7.2.7-pp37.pp38.pp39-none-any.whl", hash = "sha256:b7b4c971f05e6ae490fef852c218b0e79d4e52f79ef0c8475566584a8fb3e01d"}, + {file = "coverage-7.2.7.tar.gz", hash = "sha256:924d94291ca674905fe9481f12294eb11f2d3d3fd1adb20314ba89e94f44ed59"}, ] [package.dependencies] @@ -636,7 +700,6 @@ toml = ["tomli"] name = "cryptography" version = "39.0.0" description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -680,7 +743,6 @@ test = ["hypothesis (>=1.11.4,!=3.79.2)", "iso8601", "pretend", "pytest (>=6.2.0 name = "cx-freeze" version = "6.12.0" description = "Create standalone executables from Python scripts" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -738,7 +800,6 @@ test = ["nose (==1.3.7)", "pygments (>=2.11.2)", "pytest (>=7.0.1)", "pytest-cov name = "cx-logging" version = "3.1.0" description = "Python and C interfaces for logging" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -765,29 +826,10 @@ files = [ {file = "cx_Logging-3.1.0.tar.gz", hash = "sha256:8a06834d8527aa904a68b25c9c1a5fa09f0dfdc94dbd9f86b81cd8d2f7a0e487"}, ] -[[package]] -name = "deprecated" -version = "1.2.13" -description = "Python @deprecated decorator to deprecate old python classes, functions or methods." -category = "main" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -files = [ - {file = "Deprecated-1.2.13-py2.py3-none-any.whl", hash = "sha256:64756e3e14c8c5eea9795d93c524551432a0be75629f8f29e67ab8caf076c76d"}, - {file = "Deprecated-1.2.13.tar.gz", hash = "sha256:43ac5335da90c31c24ba028af536a91d41d53f9e6901ddb021bcc572ce44e38d"}, -] - -[package.dependencies] -wrapt = ">=1.10,<2" - -[package.extras] -dev = ["PyTest", "PyTest (<5)", "PyTest-Cov", "PyTest-Cov (<2.6)", "bump2version (<1)", "configparser (<5)", "importlib-metadata (<3)", "importlib-resources (<4)", "sphinx (<2)", "sphinxcontrib-websupport (<2)", "tox", "zipp (<2)"] - [[package]] name = "dill" version = "0.3.6" description = "serialize all of python" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -802,7 +844,6 @@ graph = ["objgraph (>=1.7.2)"] name = "distlib" version = "0.3.6" description = "Distribution utilities" -category = "dev" optional = false python-versions = "*" files = [ @@ -814,7 +855,6 @@ files = [ name = "dnspython" version = "2.3.0" description = "DNS toolkit" -category = "main" optional = false python-versions = ">=3.7,<4.0" files = [ @@ -835,7 +875,6 @@ wmi = ["wmi (>=1.5.1,<2.0.0)"] name = "docutils" version = "0.19" description = "Docutils -- Python Documentation Utilities" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -847,7 +886,6 @@ files = [ name = "dropbox" version = "11.36.0" description = "Official Dropbox API Client" -category = "main" optional = false python-versions = "*" files = [ @@ -863,14 +901,13 @@ stone = ">=2" [[package]] name = "enlighten" -version = "1.11.1" +version = "1.11.2" description = "Enlighten Progress Bar" -category = "main" optional = false python-versions = "*" files = [ - {file = "enlighten-1.11.1-py2.py3-none-any.whl", hash = "sha256:e825eb534ca80778bb7d46e5581527b2a6fae559b6cf09e290a7952c6e11961e"}, - {file = 
"enlighten-1.11.1.tar.gz", hash = "sha256:57abd98a3d3f83484ef9f91f9255f4d23c8b3097ecdb647c7b9b0049d600b7f8"}, + {file = "enlighten-1.11.2-py2.py3-none-any.whl", hash = "sha256:98c9eb20e022b6a57f1c8d4f17e16760780b6881e6d658c40f52d21255ea45f3"}, + {file = "enlighten-1.11.2.tar.gz", hash = "sha256:9284861dee5a272e0e1a3758cd3f3b7180b1bd1754875da76876f2a7f46ccb61"}, ] [package.dependencies] @@ -879,36 +916,33 @@ prefixed = ">=0.3.2" [[package]] name = "evdev" -version = "1.6.0" +version = "1.6.1" description = "Bindings to the Linux input handling subsystem" -category = "main" optional = false python-versions = "*" files = [ - {file = "evdev-1.6.0.tar.gz", hash = "sha256:ecfa01b5c84f7e8c6ced3367ac95288f43cd84efbfd7dd7d0cdbfc0d18c87a6a"}, + {file = "evdev-1.6.1.tar.gz", hash = "sha256:299db8628cc73b237fc1cc57d3c2948faa0756e2a58b6194b5bf81dc2081f1e3"}, ] [[package]] name = "filelock" -version = "3.9.0" +version = "3.12.0" description = "A platform independent file lock." -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "filelock-3.9.0-py3-none-any.whl", hash = "sha256:f58d535af89bb9ad5cd4df046f741f8553a418c01a7856bf0d173bbc9f6bd16d"}, - {file = "filelock-3.9.0.tar.gz", hash = "sha256:7b319f24340b51f55a2bf7a12ac0755a9b03e718311dac567a0f4f7fabd2f5de"}, + {file = "filelock-3.12.0-py3-none-any.whl", hash = "sha256:ad98852315c2ab702aeb628412cbf7e95b7ce8c3bf9565670b4eaecf1db370a9"}, + {file = "filelock-3.12.0.tar.gz", hash = "sha256:fc03ae43288c013d2ea83c8597001b1129db351aad9c57fe2409327916b8e718"}, ] [package.extras] -docs = ["furo (>=2022.12.7)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"] -testing = ["covdefaults (>=2.2.2)", "coverage (>=7.0.1)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-timeout (>=2.1)"] +docs = ["furo (>=2023.3.27)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "diff-cover (>=7.5)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)", "pytest-timeout (>=2.1)"] [[package]] name = "flake8" version = "6.0.0" description = "the modular source code checker: pep8 pyflakes and co" -category = "dev" optional = false python-versions = ">=3.8.1" files = [ @@ -925,7 +959,6 @@ pyflakes = ">=3.0.0,<3.1.0" name = "frozenlist" version = "1.3.3" description = "A list-like structure which implements collections.abc.MutableSequence" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1007,14 +1040,13 @@ files = [ [[package]] name = "ftrack-python-api" -version = "2.3.3" +version = "2.5.0" description = "Python API for ftrack." 
-category = "main" optional = false -python-versions = ">=2.7.9, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, < 3.10" +python-versions = ">=2.7.9, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" files = [ - {file = "ftrack-python-api-2.3.3.tar.gz", hash = "sha256:358f37e5b1c5635eab107c19e27a0c890d512877f78af35b1ac416e90c037295"}, - {file = "ftrack_python_api-2.3.3-py2.py3-none-any.whl", hash = "sha256:82834c4d5def5557a2ea547a7e6f6ba84d3129e8f90457d8bbd85b287a2c39f6"}, + {file = "ftrack-python-api-2.5.0.tar.gz", hash = "sha256:95205022552b1abadec5e9dcb225762b8e8b9f16ebeadba374e56c25e69e6954"}, + {file = "ftrack_python_api-2.5.0-py2.py3-none-any.whl", hash = "sha256:59ef3f1d47e5c1df8c3f7ebcc937bbc9a5613b147f9ed083f10cff6370f0750d"}, ] [package.dependencies] @@ -1032,7 +1064,6 @@ websocket-client = ">=0.40.0,<1" name = "future" version = "0.18.3" description = "Clean single-source support for Python 3 and 2" -category = "main" optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" files = [ @@ -1041,29 +1072,27 @@ files = [ [[package]] name = "gazu" -version = "0.8.34" +version = "0.9.3" description = "Gazu is a client for Zou, the API to store the data of your CG production." -category = "main" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +python-versions = ">= 2.7, != 3.0.*, != 3.1.*, != 3.2.*, != 3.3.*, != 3.4.*, != 3.5.*, != 3.6.1, != 3.6.2" files = [ - {file = "gazu-0.8.34-py2.py3-none-any.whl", hash = "sha256:a78a8c5e61108aeaab6185646af78b0402dbdb29097e8ba5882bd55410f38c4b"}, + {file = "gazu-0.9.3-py2.py3-none-any.whl", hash = "sha256:daa6e4bdaa364b68a048ad97837aec011a0060d12edc3a5ac6ae34c13a05cb2b"}, ] [package.dependencies] -deprecated = "1.2.13" -python-socketio = {version = "4.6.1", extras = ["client"], markers = "python_version >= \"3.5\""} -requests = ">=2.25.1,<=2.28.1" +python-socketio = {version = "5.8.0", extras = ["client"], markers = "python_version != \"2.7\""} +requests = ">=2.25.1" [package.extras] dev = ["wheel"] -test = ["black (<=22.8.0)", "pre-commit (<=2.20.0)", "pytest (<=7.1.3)", "pytest-cov (<=3.0.0)", "requests-mock (==1.10.0)"] +lint = ["black (==23.3.0)", "pre-commit (==3.2.2)"] +test = ["pytest", "pytest-cov", "requests-mock"] [[package]] name = "gitdb" version = "4.0.10" description = "Git Object Database" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -1076,14 +1105,13 @@ smmap = ">=3.0.1,<6" [[package]] name = "gitpython" -version = "3.1.30" -description = "GitPython is a python library used to interact with Git repositories" -category = "dev" +version = "3.1.31" +description = "GitPython is a Python library used to interact with Git repositories" optional = false python-versions = ">=3.7" files = [ - {file = "GitPython-3.1.30-py3-none-any.whl", hash = "sha256:cd455b0000615c60e286208ba540271af9fe531fa6a87cc590a7298785ab2882"}, - {file = "GitPython-3.1.30.tar.gz", hash = "sha256:769c2d83e13f5d938b7688479da374c4e3d49f71549aaf462b646db9602ea6f8"}, + {file = "GitPython-3.1.31-py3-none-any.whl", hash = "sha256:f04893614f6aa713a60cbbe1e6a97403ef633103cdd0ef5eb6efe0deb98dbe8d"}, + {file = "GitPython-3.1.31.tar.gz", hash = "sha256:8ce3bcf69adfdf7c7d503e78fd3b1c492af782d58893b650adb2ac8912ddd573"}, ] [package.dependencies] @@ -1093,7 +1121,6 @@ gitdb = ">=4.0.1,<5" name = "google-api-core" version = "2.11.0" description = "Google API client core library" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1116,7 +1143,6 @@ grpcio-gcp = 
["grpcio-gcp (>=0.2.2,<1.0dev)"] name = "google-api-python-client" version = "1.12.11" description = "Google API Client Library for Python" -category = "main" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*" files = [ @@ -1134,14 +1160,13 @@ uritemplate = ">=3.0.0,<4dev" [[package]] name = "google-auth" -version = "2.16.0" +version = "2.17.3" description = "Google Authentication Library" -category = "main" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*" files = [ - {file = "google-auth-2.16.0.tar.gz", hash = "sha256:ed7057a101af1146f0554a769930ac9de506aeca4fd5af6543ebe791851a9fbd"}, - {file = "google_auth-2.16.0-py2.py3-none-any.whl", hash = "sha256:5045648c821fb72384cdc0e82cc326df195f113a33049d9b62b74589243d2acc"}, + {file = "google-auth-2.17.3.tar.gz", hash = "sha256:ce311e2bc58b130fddf316df57c9b3943c2a7b4f6ec31de9663a9333e4064efc"}, + {file = "google_auth-2.17.3-py2.py3-none-any.whl", hash = "sha256:f586b274d3eb7bd932ea424b1c702a30e0393a2e2bc4ca3eae8263ffd8be229f"}, ] [package.dependencies] @@ -1161,7 +1186,6 @@ requests = ["requests (>=2.20.0,<3.0.0dev)"] name = "google-auth-httplib2" version = "0.1.0" description = "Google Authentication Library: httplib2 transport" -category = "main" optional = false python-versions = "*" files = [ @@ -1176,14 +1200,13 @@ six = "*" [[package]] name = "googleapis-common-protos" -version = "1.58.0" +version = "1.59.0" description = "Common protobufs used in Google APIs" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "googleapis-common-protos-1.58.0.tar.gz", hash = "sha256:c727251ec025947d545184ba17e3578840fc3a24a0516a020479edab660457df"}, - {file = "googleapis_common_protos-1.58.0-py2.py3-none-any.whl", hash = "sha256:ca3befcd4580dab6ad49356b46bf165bb68ff4b32389f028f1abd7c10ab9519a"}, + {file = "googleapis-common-protos-1.59.0.tar.gz", hash = "sha256:4168fcb568a826a52f23510412da405abd93f4d23ba544bb68d943b14ba3cb44"}, + {file = "googleapis_common_protos-1.59.0-py2.py3-none-any.whl", hash = "sha256:b287dc48449d1d41af0c69f4ea26242b5ae4c3d7249a38b0984c86a4caffff1f"}, ] [package.dependencies] @@ -1194,14 +1217,13 @@ grpc = ["grpcio (>=1.44.0,<2.0.0dev)"] [[package]] name = "httplib2" -version = "0.21.0" +version = "0.22.0" description = "A comprehensive HTTP client library." 
-category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ - {file = "httplib2-0.21.0-py3-none-any.whl", hash = "sha256:987c8bb3eb82d3fa60c68699510a692aa2ad9c4bd4f123e51dfb1488c14cdd01"}, - {file = "httplib2-0.21.0.tar.gz", hash = "sha256:fc144f091c7286b82bec71bdbd9b27323ba709cc612568d3000893bfd9cb4b34"}, + {file = "httplib2-0.22.0-py3-none-any.whl", hash = "sha256:14ae0a53c1ba8f3d37e9e27cf37eabb0fb9980f435ba405d546948b009dd64dc"}, + {file = "httplib2-0.22.0.tar.gz", hash = "sha256:d7a10bc5ef5ab08322488bde8c726eeee5c8618723fdb399597ec58f3d82df81"}, ] [package.dependencies] @@ -1209,14 +1231,13 @@ pyparsing = {version = ">=2.4.2,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.0.2 || >3.0 [[package]] name = "identify" -version = "2.5.13" +version = "2.5.24" description = "File identification library for Python" -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "identify-2.5.13-py2.py3-none-any.whl", hash = "sha256:8aa48ce56e38c28b6faa9f261075dea0a942dfbb42b341b4e711896cbb40f3f7"}, - {file = "identify-2.5.13.tar.gz", hash = "sha256:abb546bca6f470228785338a01b539de8a85bbf46491250ae03363956d8ebb10"}, + {file = "identify-2.5.24-py2.py3-none-any.whl", hash = "sha256:986dbfb38b1140e763e413e6feb44cd731faf72d1909543178aa79b0e258265d"}, + {file = "identify-2.5.24.tar.gz", hash = "sha256:0aac67d5b4812498056d28a9a512a483f5085cc28640b02b258a59dac34301d4"}, ] [package.extras] @@ -1226,7 +1247,6 @@ license = ["ukkonen"] name = "idna" version = "3.4" description = "Internationalized Domain Names in Applications (IDNA)" -category = "main" optional = false python-versions = ">=3.5" files = [ @@ -1238,7 +1258,6 @@ files = [ name = "imagesize" version = "1.4.1" description = "Getting image size from png/jpeg/jpeg2000/gif file" -category = "dev" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -1248,14 +1267,13 @@ files = [ [[package]] name = "importlib-metadata" -version = "6.0.0" +version = "6.6.0" description = "Read metadata from Python packages" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "importlib_metadata-6.0.0-py3-none-any.whl", hash = "sha256:7efb448ec9a5e313a57655d35aa54cd3e01b7e1fbcf72dce1bf06119420f5bad"}, - {file = "importlib_metadata-6.0.0.tar.gz", hash = "sha256:e354bedeb60efa6affdcc8ae121b73544a7aa74156d047311948f6d711cd378d"}, + {file = "importlib_metadata-6.6.0-py3-none-any.whl", hash = "sha256:43dd286a2cd8995d5eaef7fee2066340423b818ed3fd70adf0bad5f1fac53fed"}, + {file = "importlib_metadata-6.6.0.tar.gz", hash = "sha256:92501cdf9cc66ebd3e612f1b4f0c0765dfa42f0fa38ffb319b6bd84dd675d705"}, ] [package.dependencies] @@ -1270,7 +1288,6 @@ testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packag name = "iniconfig" version = "2.0.0" description = "brain-dead simple config-ini parsing" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -1280,19 +1297,18 @@ files = [ [[package]] name = "isort" -version = "5.11.4" +version = "5.12.0" description = "A Python utility / library to sort Python imports." 
-category = "dev" optional = false -python-versions = ">=3.7.0" +python-versions = ">=3.8.0" files = [ - {file = "isort-5.11.4-py3-none-any.whl", hash = "sha256:c033fd0edb91000a7f09527fe5c75321878f98322a77ddcc81adbd83724afb7b"}, - {file = "isort-5.11.4.tar.gz", hash = "sha256:6db30c5ded9815d813932c04c2f85a360bcdd35fed496f4d8f35495ef0a261b6"}, + {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"}, + {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"}, ] [package.extras] -colors = ["colorama (>=0.4.3,<0.5.0)"] -pipfile-deprecated-finder = ["pipreqs", "requirementslib"] +colors = ["colorama (>=0.4.3)"] +pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"] plugins = ["setuptools"] requirements-deprecated-finder = ["pip-api", "pipreqs"] @@ -1300,7 +1316,6 @@ requirements-deprecated-finder = ["pip-api", "pipreqs"] name = "jedi" version = "0.13.3" description = "An autocompletion tool for Python that can be used for text editors." -category = "dev" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -1318,7 +1333,6 @@ testing = ["colorama", "docopt", "pytest (>=3.1.0)"] name = "jeepney" version = "0.8.0" description = "Low-level, pure Python DBus protocol wrapper." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1334,7 +1348,6 @@ trio = ["async_generator", "trio"] name = "jinja2" version = "3.1.2" description = "A very fast and expressive template engine." -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -1352,7 +1365,6 @@ i18n = ["Babel (>=2.7)"] name = "jinxed" version = "1.2.0" description = "Jinxed Terminal Library" -category = "main" optional = false python-versions = "*" files = [ @@ -1367,7 +1379,6 @@ ansicon = {version = "*", markers = "platform_system == \"Windows\""} name = "jsonschema" version = "2.6.0" description = "An implementation of JSON Schema validation for Python" -category = "main" optional = false python-versions = "*" files = [ @@ -1382,7 +1393,6 @@ format = ["rfc3987", "strict-rfc3339", "webcolors"] name = "keyring" version = "22.4.0" description = "Store and access your passwords safely." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1404,7 +1414,6 @@ testing = ["pytest (>=4.6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=1.2. name = "lazy-object-proxy" version = "1.9.0" description = "A fast and thorough lazy object proxy." 
-category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -1448,50 +1457,60 @@ files = [ [[package]] name = "lief" -version = "0.12.3" +version = "0.13.1" description = "Library to instrument executable formats" -category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "lief-0.12.3-cp310-cp310-macosx_10_14_arm64.whl", hash = "sha256:66724f337e6a36cea1a9380f13b59923f276c49ca837becae2e7be93a2e245d9"}, - {file = "lief-0.12.3-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6d18aafa2028587c98f6d4387bec94346e92f2b5a8a5002f70b1cf35b1c045cc"}, - {file = "lief-0.12.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c078d6230279ffd3bca717c79664fb8368666f610b577deb24b374607936e9c1"}, - {file = "lief-0.12.3-cp310-cp310-win32.whl", hash = "sha256:e3a6af926532d0aac9e7501946134513d63217bacba666e6f7f5a0b7e15ba236"}, - {file = "lief-0.12.3-cp310-cp310-win_amd64.whl", hash = "sha256:0750b72e3aa161e1fb0e2e7f571121ae05d2428aafd742ff05a7656ad2288447"}, - {file = "lief-0.12.3-cp311-cp311-macosx_10_14_arm64.whl", hash = "sha256:b5c123cb99a7879d754c059e299198b34e7e30e3b64cf22e8962013db0099f47"}, - {file = "lief-0.12.3-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:8bc58fa26a830df6178e36f112cb2bbdd65deff593f066d2d51434ff78386ba5"}, - {file = "lief-0.12.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:04eb6b70d646fb5bd6183575928ee23715550f161f2832cbcd8c6ff2071fb408"}, - {file = "lief-0.12.3-cp311-cp311-win32.whl", hash = "sha256:7e2d0a53c403769b04adcf8df92e83c5e25f9103a052aa7f17b0a9cf057735fb"}, - {file = "lief-0.12.3-cp311-cp311-win_amd64.whl", hash = "sha256:7f6395c12ee1bc4a5162f567cba96d0c72dfb660e7902e84d4f3029daf14fe33"}, - {file = "lief-0.12.3-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:71327fdc764fd2b1f3cd371d8ac5e0b801bde32b71cfcf7dccee506d46768539"}, - {file = "lief-0.12.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d320fb80ed5b42b354b8e4f251ab05a51929c162c57c377b5e95ad4b1c1b415d"}, - {file = "lief-0.12.3-cp36-cp36m-win32.whl", hash = "sha256:176fa6c342dd480195cda34a20f62ac76dfae103b22ca7583b762e0b434ee1f3"}, - {file = "lief-0.12.3-cp36-cp36m-win_amd64.whl", hash = "sha256:3a18fe108fb82a2640864deef933731afe77413b1226551796ef2c373a1b3a2a"}, - {file = "lief-0.12.3-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c73e990cd2737d1060b8c1e8edcc128832806995b69d1d6bf191409e2cea7bde"}, - {file = "lief-0.12.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:5fa2b1c8ffe47ee66b2507c2bb4e3fd628965532b7888c0627d10e690b5ef20c"}, - {file = "lief-0.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f224e9a261e88099f86160f121d088d30894c2946e3e551cf11c678daadcf2b"}, - {file = "lief-0.12.3-cp37-cp37m-win32.whl", hash = "sha256:3481d7c9fb3d3a1acff53851f40efd1a5a05d354312d367294bc2e310b736826"}, - {file = "lief-0.12.3-cp37-cp37m-win_amd64.whl", hash = "sha256:4e5173e1be5ebf43594f4eb187cbcb04758761942bc0a1e685ea1cb9047dc0d9"}, - {file = "lief-0.12.3-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:54d6a45e01260b9c8bf1c99f58257cff5338aee5c02eacfeee789f9d15cf38c6"}, - {file = "lief-0.12.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:4501dc399fb15dc7a3c8df4a76264a86be6d581d99098dafc3a67626149d8ff1"}, - {file = "lief-0.12.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c848aadac0816268aeb9dde7cefdb54bf24f78e664a19e97e74c92d3be1bb147"}, - {file = "lief-0.12.3-cp38-cp38-win32.whl", hash = 
"sha256:d7e35f9ee9dd6e79add3b343f83659b71c05189e5cb224e02a1902ddc7654e96"}, - {file = "lief-0.12.3-cp38-cp38-win_amd64.whl", hash = "sha256:b00667257b43e93d94166c959055b6147d46d302598f3ee55c194b40414c89cc"}, - {file = "lief-0.12.3-cp39-cp39-macosx_10_14_arm64.whl", hash = "sha256:e6a1b5b389090d524621c2455795e1262f62dc9381bedd96f0cd72b878c4066d"}, - {file = "lief-0.12.3-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:ae773196df814202c0c51056163a1478941b299512b09660a3c37be3c7fac81e"}, - {file = "lief-0.12.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:4a47f410032c63ac3be051d963d0337d6b47f0e94bfe8e946ab4b6c428f4d0f8"}, - {file = "lief-0.12.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbd11367c2259bd1131a6c8755dcde33314324de5ea029227bfbc7d3755871e6"}, - {file = "lief-0.12.3-cp39-cp39-win32.whl", hash = "sha256:2ce53e311918c3e5b54c815ef420a747208d2a88200c41cd476f3dd1eb876bcf"}, - {file = "lief-0.12.3-cp39-cp39-win_amd64.whl", hash = "sha256:446e53ccf0ebd1616c5d573470662ff71ca6df3cd62ec1764e303764f3f03cca"}, - {file = "lief-0.12.3.zip", hash = "sha256:62e81d2f1a827d43152aed12446a604627e8833493a51dca027026eed0ce7128"}, + {file = "lief-0.13.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:b53317d78f8b7528e3f2f358b3f9334a1a84fae88c5aec1a3b7717ed31bfb066"}, + {file = "lief-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bb8b285a6c670df590c36fc0c19b9d2e32b99f17e57afa29bb3052f1d55aa50f"}, + {file = "lief-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:be871116faa698b6d9da76b0caec2ec5b7e7b8781cfb3a4ac0c4e348fb37ab49"}, + {file = "lief-0.13.1-cp310-cp310-manylinux_2_24_x86_64.whl", hash = "sha256:c6839df875e912edd3fc553ab5d1b916527adee9c57ba85c69314a93f7ba2e15"}, + {file = "lief-0.13.1-cp310-cp310-win32.whl", hash = "sha256:b1f295dbb57094443926ac6051bee9a1945d92344f470da1cb506060eb2f91ac"}, + {file = "lief-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8439805a389cc67b6d4ea7d757a3211f22298edce53c5b064fdf8bf05fabba54"}, + {file = "lief-0.13.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:3cfbc6c50f9e3a8015cd5ee88dfe83f423562c025439143bbd5c086a3f9fe599"}, + {file = "lief-0.13.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:661abaa48bc032b9a7529e0b73d2ced3e4a1f13381592f6b9e940750b07a5ac2"}, + {file = "lief-0.13.1-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:23617d96d162081f8bf315d9b0494845891f8d0f04ad60991b83367ee9e261aa"}, + {file = "lief-0.13.1-cp311-cp311-manylinux_2_24_x86_64.whl", hash = "sha256:aa7f45c5125be80a513624d3a5f6bd50751c2edc6de5357fde218580111c8535"}, + {file = "lief-0.13.1-cp311-cp311-win32.whl", hash = "sha256:018b542f09fe2305e1585a3e63a7e5132927b835062b456e5c8c571db7784d1e"}, + {file = "lief-0.13.1-cp311-cp311-win_amd64.whl", hash = "sha256:bfbf8885a3643ea9aaf663d039f50ca58b228886c3fe412725b22851aeda3b77"}, + {file = "lief-0.13.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:a0472636ab15b9afecf8b5d55966912af8cb4de2f05b98fc05c87d51880d0208"}, + {file = "lief-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:ccfba33c02f21d4ede26ab85eb6539a00e74e236569c13dcbab2e157b73673c4"}, + {file = "lief-0.13.1-cp38-cp38-manylinux_2_24_x86_64.whl", hash = "sha256:e414d6c23f26053f4824d080885ab1b75482122796cba7d09cbf157900646289"}, + {file = "lief-0.13.1-cp38-cp38-win32.whl", hash = "sha256:a18fee5cf69adf9d5ee977778ccd46c39c450960f806231b26b69011f81bc712"}, + {file = "lief-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:04c87039d1e68ebc467f83136179626403547dd1ce851541345f8ca0b1fe6c5b"}, + {file 
= "lief-0.13.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:0283a4c749afe58be8e21cdd9be79c657c51ca9b8346f75f4b97349b1f022851"}, + {file = "lief-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95a4b6d1f8dba9360aecf7542e54ce5eb02c0e88f2d827b5445594d5d51109f5"}, + {file = "lief-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:16753bd72b1e3932d94d088a93b64e08c1f6c8bce1b064b47fe66ed73d9562b2"}, + {file = "lief-0.13.1-cp39-cp39-manylinux_2_24_x86_64.whl", hash = "sha256:965fadb1301d1a81f16067e4fa743d2be3f6aa71391a83b752ff811ec74b0766"}, + {file = "lief-0.13.1-cp39-cp39-win32.whl", hash = "sha256:57bdb0471760c4ff520f5e5d005e503cc7ea3ebe22df307bb579a1a561b8c4e9"}, + {file = "lief-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:a3c900f49c3d3135c728faeb386d13310bb3511eb2d4e1c9b109b48ae2658361"}, ] +[[package]] +name = "linkify-it-py" +version = "2.0.2" +description = "Links recognition library with FULL unicode support." +optional = false +python-versions = ">=3.7" +files = [ + {file = "linkify-it-py-2.0.2.tar.gz", hash = "sha256:19f3060727842c254c808e99d465c80c49d2c7306788140987a1a7a29b0d6ad2"}, + {file = "linkify_it_py-2.0.2-py3-none-any.whl", hash = "sha256:a3a24428f6c96f27370d7fe61d2ac0be09017be5190d68d8658233171f1b6541"}, +] + +[package.dependencies] +uc-micro-py = "*" + +[package.extras] +benchmark = ["pytest", "pytest-benchmark"] +dev = ["black", "flake8", "isort", "pre-commit", "pyproject-flake8"] +doc = ["myst-parser", "sphinx", "sphinx-book-theme"] +test = ["coverage", "pytest", "pytest-cov"] + [[package]] name = "log4mongo" version = "1.7.0" description = "mongo database handler for python logging" -category = "main" optional = false python-versions = "*" files = [ @@ -1501,11 +1520,49 @@ files = [ [package.dependencies] pymongo = "*" +[[package]] +name = "m2r2" +version = "0.3.3.post2" +description = "Markdown and reStructuredText in a single file." +optional = false +python-versions = ">=3.7" +files = [ + {file = "m2r2-0.3.3.post2-py3-none-any.whl", hash = "sha256:86157721eb6eabcd54d4eea7195890cc58fa6188b8d0abea633383cfbb5e11e3"}, + {file = "m2r2-0.3.3.post2.tar.gz", hash = "sha256:e62bcb0e74b3ce19cda0737a0556b04cf4a43b785072fcef474558f2c1482ca8"}, +] + +[package.dependencies] +docutils = ">=0.19" +mistune = "0.8.4" + +[[package]] +name = "markdown-it-py" +version = "2.2.0" +description = "Python port of markdown-it. Markdown parsing, done right!" +optional = false +python-versions = ">=3.7" +files = [ + {file = "markdown-it-py-2.2.0.tar.gz", hash = "sha256:7c9a5e412688bc771c67432cbfebcdd686c93ce6484913dccf06cb5a0bea35a1"}, + {file = "markdown_it_py-2.2.0-py3-none-any.whl", hash = "sha256:5a35f8d1870171d9acc47b99612dc146129b631baf04970128b568f190d0cc30"}, +] + +[package.dependencies] +mdurl = ">=0.1,<1.0" + +[package.extras] +benchmarking = ["psutil", "pytest", "pytest-benchmark"] +code-style = ["pre-commit (>=3.0,<4.0)"] +compare = ["commonmark (>=0.9,<1.0)", "markdown (>=3.4,<4.0)", "mistletoe (>=1.0,<2.0)", "mistune (>=2.0,<3.0)", "panflute (>=2.3,<3.0)"] +linkify = ["linkify-it-py (>=1,<3)"] +plugins = ["mdit-py-plugins"] +profiling = ["gprof2dot"] +rtd = ["attrs", "myst-parser", "pyyaml", "sphinx", "sphinx-copybutton", "sphinx-design", "sphinx_book_theme"] +testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"] + [[package]] name = "markupsafe" version = "2.0.1" description = "Safely add untrusted strings to HTML/XML markup." 
-category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -1584,7 +1641,6 @@ files = [ name = "mccabe" version = "0.7.0" description = "McCabe checker, plugin for flake8" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -1592,11 +1648,51 @@ files = [ {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"}, ] +[[package]] +name = "mdit-py-plugins" +version = "0.3.5" +description = "Collection of plugins for markdown-it-py" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mdit-py-plugins-0.3.5.tar.gz", hash = "sha256:eee0adc7195e5827e17e02d2a258a2ba159944a0748f59c5099a4a27f78fcf6a"}, + {file = "mdit_py_plugins-0.3.5-py3-none-any.whl", hash = "sha256:ca9a0714ea59a24b2b044a1831f48d817dd0c817e84339f20e7889f392d77c4e"}, +] + +[package.dependencies] +markdown-it-py = ">=1.0.0,<3.0.0" + +[package.extras] +code-style = ["pre-commit"] +rtd = ["attrs", "myst-parser (>=0.16.1,<0.17.0)", "sphinx-book-theme (>=0.1.0,<0.2.0)"] +testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"] + +[[package]] +name = "mdurl" +version = "0.1.2" +description = "Markdown URL utilities" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8"}, + {file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"}, +] + +[[package]] +name = "mistune" +version = "0.8.4" +description = "The fastest markdown parser in pure Python" +optional = false +python-versions = "*" +files = [ + {file = "mistune-0.8.4-py2.py3-none-any.whl", hash = "sha256:88a1051873018da288eee8538d476dffe1262495144b33ecb586c4ab266bb8d4"}, + {file = "mistune-0.8.4.tar.gz", hash = "sha256:59a3429db53c50b5c6bcc8a07f8848cb00d7dc8bdb431a4ab41920d201d4756e"}, +] + [[package]] name = "multidict" version = "6.0.4" description = "multidict implementation" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1676,137 +1772,82 @@ files = [ {file = "multidict-6.0.4.tar.gz", hash = "sha256:3666906492efb76453c0e7b97f2cf459b0682e7402c0489a95484965dbc1da49"}, ] +[[package]] +name = "myst-parser" +version = "0.18.1" +description = "An extended commonmark compliant parser, with bridges to docutils & sphinx." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "myst-parser-0.18.1.tar.gz", hash = "sha256:79317f4bb2c13053dd6e64f9da1ba1da6cd9c40c8a430c447a7b146a594c246d"}, + {file = "myst_parser-0.18.1-py3-none-any.whl", hash = "sha256:61b275b85d9f58aa327f370913ae1bec26ebad372cc99f3ab85c8ec3ee8d9fb8"}, +] + +[package.dependencies] +docutils = ">=0.15,<0.20" +jinja2 = "*" +markdown-it-py = ">=1.0.0,<3.0.0" +mdit-py-plugins = ">=0.3.1,<0.4.0" +pyyaml = "*" +sphinx = ">=4,<6" +typing-extensions = "*" + +[package.extras] +code-style = ["pre-commit (>=2.12,<3.0)"] +linkify = ["linkify-it-py (>=1.0,<2.0)"] +rtd = ["ipython", "sphinx-book-theme", "sphinx-design", "sphinxcontrib.mermaid (>=0.7.1,<0.8.0)", "sphinxext-opengraph (>=0.6.3,<0.7.0)", "sphinxext-rediraffe (>=0.2.7,<0.3.0)"] +testing = ["beautifulsoup4", "coverage[toml]", "pytest (>=6,<7)", "pytest-cov", "pytest-param-files (>=0.3.4,<0.4.0)", "pytest-regressions", "sphinx (<5.2)", "sphinx-pytest"] + [[package]] name = "nodeenv" -version = "1.7.0" +version = "1.8.0" description = "Node.js virtual environment builder" -category = "dev" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*" files = [ - {file = "nodeenv-1.7.0-py2.py3-none-any.whl", hash = "sha256:27083a7b96a25f2f5e1d8cb4b6317ee8aeda3bdd121394e5ac54e498028a042e"}, - {file = "nodeenv-1.7.0.tar.gz", hash = "sha256:e0e7f7dfb85fc5394c6fe1e8fa98131a2473e04311a45afb6508f7cf1836fa2b"}, + {file = "nodeenv-1.8.0-py2.py3-none-any.whl", hash = "sha256:df865724bb3c3adc86b3876fa209771517b0cfe596beff01a92700e0e8be4cec"}, + {file = "nodeenv-1.8.0.tar.gz", hash = "sha256:d51e0c37e64fbf47d017feac3145cdbb58836d7eee8c6f6d3b6880c5456227d2"}, ] [package.dependencies] setuptools = "*" -[[package]] -name = "opencolorio" -version = "2.2.1" -description = "OpenColorIO (OCIO) is a complete color management solution geared towards motion picture production with an emphasis on visual effects and computer animation." 
-category = "main" -optional = false -python-versions = "*" -files = [ - {file = "opencolorio-2.2.1-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:a9feec76e450325f12203264194d905a938d5e7944772b806886f9531e406d42"}, - {file = "opencolorio-2.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7eeae01328b359408940a1f29d53b15b034755413d95d08781b76084ee14cbb1"}, - {file = "opencolorio-2.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85b63a9162e99f0f29ef4074017d1b6e8caf59096043fb91cbacfc5bc01fa0b9"}, - {file = "opencolorio-2.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:67d19ea54daff2b209b91981da415aa41ea8e3a60fecd5dd843ae13272d38dcf"}, - {file = "opencolorio-2.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:da0043a1007d269b5da3c8ca1de8c63926b38bf5e08cfade6cb8f2f5f6b663b9"}, - {file = "opencolorio-2.2.1-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:62180cec075cae8dff56eeb977132eb9755d7fe312d8d34236cba838cb9314b3"}, - {file = "opencolorio-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:24b7bfc4b77c04845de847373e58232c48838042d5e45e027b8bf64bada988e3"}, - {file = "opencolorio-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41cadab13b18dbedd992df2056c787cf38bf89a5b0903b90f701d5228ac496f9"}, - {file = "opencolorio-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa278dd4414791a5605e685b562b6ad1c729a4a44c1c906151f5bca10c0ff10e"}, - {file = "opencolorio-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:7b44858c26b662ec42b089f8f85ea3aa63aa04e0e58e902a4cbf8cae0fbd4c6c"}, - {file = "opencolorio-2.2.1-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:07fce0d36a6041b524b2122b9f55fbd03e029def5a22e93822041b652b60590a"}, - {file = "opencolorio-2.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae043bc588d9ee98f54fe9524481eba5634d6dd70d0c70e1bd242b60a3a81731"}, - {file = "opencolorio-2.2.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a4ad1a4ed5742a7dda41f0548274e8328b2774ce04dfc31fd5dfbacabc4c166"}, - {file = "opencolorio-2.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bd885e34898c204db19a9e6926c551a74bda6d8e7d3ef27596630e3422b99b1"}, - {file = "opencolorio-2.2.1-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:86ed205bec96fd84e882d431c181560df0cf6f0f73150976303b6f3ff1d9d5ed"}, - {file = "opencolorio-2.2.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c1bf1c19baa86203e2329194ea837161520dae5c94e4f04b7659e9bfe4f1a6a9"}, - {file = "opencolorio-2.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:639f7052da7086d999c0d84e424453fb44abc8f2d22ec8601d20d8ee9d90384b"}, - {file = "opencolorio-2.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7e3208c5c1ac63a6e921398db661fdd9309b17253b285f227818713f3faec92"}, - {file = "opencolorio-2.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:68814600c0d8c07b552e1f1e4e32d45bffba4cb49b41481e5d4dd0bc56a206ea"}, - {file = "opencolorio-2.2.1-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:cb5337ac2804dbb90c856b423d2799d3dc35f9c948da25d8e6506d1dd8200df7"}, - {file = "opencolorio-2.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:425593a96de7927aa7cda065dc3729e881de1d0b72c43e704e02962adb63b4ad"}, - {file = "opencolorio-2.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8be9f6af01e4c710de4cc03c9b6de04ef0844bf611e9100abf045ec62a4c685a"}, - {file = 
"opencolorio-2.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84788002aa28151409f2367a040e9d39ffea0a9129777451bd0c55ac87d9d47"}, - {file = "opencolorio-2.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:d92802922bc4e2ff3e9a06d44b6055efd1863abb1aaf0243849d35b077b72253"}, - {file = "opencolorio-2.2.1.tar.gz", hash = "sha256:283abb8db5bc18ab9686e08255a9245deaba3d7837be5363b7a69b0902b73a78"}, -] - -[[package]] -name = "opentimelineio" -version = "0.14.1" -description = "Editorial interchange format and API" -category = "main" -optional = false -python-versions = ">2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*, !=3.9.0" -files = [ - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:d5466742d1de323e922965e64ca7099f6dd756774d5f8b404a11d6ec6e7c5fe0"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:3f5187eb0cd8f607bfcc5c1d58ce878734975a0a6a91360a2605ad831198ed89"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:a2b64bf817d3065f7302c748bcc1d5938971e157c42e67fcb4e5e3612358813b"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-win32.whl", hash = "sha256:4cde33ea83ba041332bae55474fc155219871396b82031dd54d3e857973805b6"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-win_amd64.whl", hash = "sha256:d5dc153867c688ad4f39cbac78eda069cfe4f17376d9444d202f8073efa6cbd4"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:e07390dd1e0f82e5a5880ef2d498cbcbf482b4e5bfb4b9026342578a2fad358d"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:4c1c522df397536c7620d44e32302165a9ef9bbbf0de83a5a0621f0a75047cc9"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e368a1d64366e3fdf1eadd10077a135833fdc893ff65f8dc43a91254cb7ee6fa"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:cf2cd94d11d0ae0fc78418cc0d17f2fe3bf85598b9b109f98b2301272a87bff5"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:7af41f43ef72fbf3c0ae2e47cabd7715eb348726c9e5e430ab36ce2357181cf4"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-win32.whl", hash = "sha256:55dbb859d16535ba5dab8a66a78aef8db55f030d771b6e5b91e94241b6db65bd"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:08eaef8fbc423c25e94e189eb788c92c16916ae74d16ebcab34ba889e980c6ad"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:10b34a6997d6d6edb9b8a1c93718a1e90e8202d930559cdce2ad369e0473327f"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:c6b44986da8c7a64f8f549795279f0af05ec875a425d11600585dab0b3269ec2"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:45e1774d9f7215190a7c1e5b70dfc237f4a03b79b0539902d9ec8074707450f9"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-win32.whl", hash = "sha256:1ee0e72320309b8dedf0e2f40fc2b8d3dd2c854db0aba28a84a038d7177a1208"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:bd58e9fdc765623e160ab3ec32e9199bcb3906a6f3c06cca7564fbb7c18d2d28"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f8d6e15f793577de59cc01e49600898ab12dbdc260dbcba83936c00965f0090a"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:50644c5e43076a3717b77645657545d0be19376ecb4c6f2e4103670052d726d4"}, - {file = 
"OpenTimelineIO-0.14.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:a44f77fb5dbfd60d992ac2acc6782a7b0a26452db3a069425b8bd73b2f3bb336"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-win32.whl", hash = "sha256:63fb0d1258f490bcebf6325067db64a0f0dc405b8b905ee2bb625f04d04a8082"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:8a303b2f3dfba542f588b227575f1967f7a9da854b34f620504e1ecb8d551f5f"}, - {file = "OpenTimelineIO-0.14.1.tar.gz", hash = "sha256:0b9adc0fd303b978af120259d6b1d23e0623800615b4a3e2eb9f9fb2c70d5d13"}, -] - -[package.dependencies] -pyaaf2 = ">=1.4.0,<1.5.0" - -[package.extras] -dev = ["check-manifest", "coverage (>=4.5)", "flake8 (>=3.5)", "urllib3 (>=1.24.3)"] -view = ["PySide2 (>=5.11,<6.0)"] - [[package]] name = "packaging" -version = "23.0" +version = "23.1" description = "Core utilities for Python packages" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"}, - {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"}, + {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"}, + {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"}, ] [[package]] name = "paramiko" -version = "2.12.0" +version = "3.2.0" description = "SSH2 protocol library" -category = "main" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "paramiko-2.12.0-py2.py3-none-any.whl", hash = "sha256:b2df1a6325f6996ef55a8789d0462f5b502ea83b3c990cbb5bbe57345c6812c4"}, - {file = "paramiko-2.12.0.tar.gz", hash = "sha256:376885c05c5d6aa6e1f4608aac2a6b5b0548b1add40274477324605903d9cd49"}, + {file = "paramiko-3.2.0-py3-none-any.whl", hash = "sha256:df0f9dd8903bc50f2e10580af687f3015bf592a377cd438d2ec9546467a14eb8"}, + {file = "paramiko-3.2.0.tar.gz", hash = "sha256:93cdce625a8a1dc12204439d45033f3261bdb2c201648cfcdc06f9fd0f94ec29"}, ] [package.dependencies] -bcrypt = ">=3.1.3" -cryptography = ">=2.5" -pynacl = ">=1.0.1" -six = "*" +bcrypt = ">=3.2" +cryptography = ">=3.3" +pynacl = ">=1.5" [package.extras] -all = ["bcrypt (>=3.1.3)", "gssapi (>=1.4.1)", "invoke (>=1.3)", "pyasn1 (>=0.1.7)", "pynacl (>=1.0.1)", "pywin32 (>=2.1.8)"] -ed25519 = ["bcrypt (>=3.1.3)", "pynacl (>=1.0.1)"] +all = ["gssapi (>=1.4.1)", "invoke (>=2.0)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] gssapi = ["gssapi (>=1.4.1)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] -invoke = ["invoke (>=1.3)"] +invoke = ["invoke (>=2.0)"] [[package]] name = "parso" version = "0.8.3" description = "A Python Parser" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -1820,18 +1861,18 @@ testing = ["docopt", "pytest (<6.0.0)"] [[package]] name = "patchelf" -version = "0.17.2.0" +version = "0.17.2.1" description = "A small utility to modify the dynamic linker and RPATH of ELF executables." 
-category = "dev" optional = false python-versions = "*" files = [ - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:b8d86f32e1414d6964d5d166ddd2cf829d156fba0d28d32a3bd0192f987f4529"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.musllinux_1_1_ppc64le.whl", hash = "sha256:9233a0f2fc73820c5bd468f27507bdf0c9ac543f07c7f9888bb7cf910b1be22f"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_s390x.manylinux2014_s390x.musllinux_1_1_s390x.whl", hash = "sha256:6601d7d831508bcdd3d8ebfa6435c2379bf11e41af2409ae7b88de572926841c"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_5_i686.manylinux1_i686.musllinux_1_1_i686.whl", hash = "sha256:c62a34f0c25e6c2d6ae44389f819a00ccdf3f292ad1b814fbe1cc23cb27023ce"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.musllinux_1_1_x86_64.whl", hash = "sha256:1b9fd14f300341dc020ae05c49274dd1fa6727eabb4e61dd7fb6fb3600acd26e"}, - {file = "patchelf-0.17.2.0.tar.gz", hash = "sha256:dedf987a83d7f6d6f5512269e57f5feeec36719bd59567173b6d9beabe019efe"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:fc329da0e8f628bd836dfb8eaf523547e342351fa8f739bf2b3fe4a6db5a297c"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:ccb266a94edf016efe80151172c26cff8c2ec120a57a1665d257b0442784195d"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.musllinux_1_1_ppc64le.whl", hash = "sha256:f47b5bdd6885cfb20abdd14c707d26eb6f499a7f52e911865548d4aa43385502"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_s390x.manylinux2014_s390x.musllinux_1_1_s390x.whl", hash = "sha256:a9e6ebb0874a11f7ed56d2380bfaa95f00612b23b15f896583da30c2059fcfa8"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_5_i686.manylinux1_i686.musllinux_1_1_i686.whl", hash = "sha256:3c8d58f0e4c1929b1c7c45ba8da5a84a8f1aa6a82a46e1cfb2e44a4d40f350e5"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.musllinux_1_1_x86_64.whl", hash = "sha256:d1a9bc0d4fd80c038523ebdc451a1cce75237cfcc52dbd1aca224578001d5927"}, + {file = "patchelf-0.17.2.1.tar.gz", hash = "sha256:a6eb0dd452ce4127d0d5e1eb26515e39186fa609364274bc1b0b77539cfa7031"}, ] [package.extras] @@ -1841,7 +1882,6 @@ test = ["importlib-metadata", "pytest"] name = "pathlib2" version = "2.3.7.post1" description = "Object-oriented filesystem paths" -category = "main" optional = false python-versions = "*" files = [ @@ -1854,116 +1894,102 @@ six = "*" [[package]] name = "pillow" -version = "9.4.0" +version = "9.5.0" description = "Python Imaging Library (Fork)" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "Pillow-9.4.0-1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b4b4e9dda4f4e4c4e6896f93e84a8f0bcca3b059de9ddf67dac3c334b1195e1"}, - {file = "Pillow-9.4.0-1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fb5c1ad6bad98c57482236a21bf985ab0ef42bd51f7ad4e4538e89a997624e12"}, - {file = "Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:f0caf4a5dcf610d96c3bd32932bfac8aee61c96e60481c2a0ea58da435e25acd"}, - {file = "Pillow-9.4.0-1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:3f4cc516e0b264c8d4ccd6b6cbc69a07c6d582d8337df79be1e15a5056b258c9"}, - {file = "Pillow-9.4.0-1-cp39-cp39-macosx_10_10_x86_64.whl", hash = 
"sha256:b8c2f6eb0df979ee99433d8b3f6d193d9590f735cf12274c108bd954e30ca858"}, - {file = "Pillow-9.4.0-1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b70756ec9417c34e097f987b4d8c510975216ad26ba6e57ccb53bc758f490dab"}, - {file = "Pillow-9.4.0-1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:43521ce2c4b865d385e78579a082b6ad1166ebed2b1a2293c3be1d68dd7ca3b9"}, - {file = "Pillow-9.4.0-2-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:9d9a62576b68cd90f7075876f4e8444487db5eeea0e4df3ba298ee38a8d067b0"}, - {file = "Pillow-9.4.0-2-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:87708d78a14d56a990fbf4f9cb350b7d89ee8988705e58e39bdf4d82c149210f"}, - {file = "Pillow-9.4.0-2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:8a2b5874d17e72dfb80d917213abd55d7e1ed2479f38f001f264f7ce7bae757c"}, - {file = "Pillow-9.4.0-2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:83125753a60cfc8c412de5896d10a0a405e0bd88d0470ad82e0869ddf0cb3848"}, - {file = "Pillow-9.4.0-2-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9e5f94742033898bfe84c93c831a6f552bb629448d4072dd312306bab3bd96f1"}, - {file = "Pillow-9.4.0-2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:013016af6b3a12a2f40b704677f8b51f72cb007dac785a9933d5c86a72a7fe33"}, - {file = "Pillow-9.4.0-2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:99d92d148dd03fd19d16175b6d355cc1b01faf80dae93c6c3eb4163709edc0a9"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:2968c58feca624bb6c8502f9564dd187d0e1389964898f5e9e1fbc8533169157"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c5c1362c14aee73f50143d74389b2c158707b4abce2cb055b7ad37ce60738d47"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd752c5ff1b4a870b7661234694f24b1d2b9076b8bf337321a814c612665f343"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a3049a10261d7f2b6514d35bbb7a4dfc3ece4c4de14ef5876c4b7a23a0e566d"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16a8df99701f9095bea8a6c4b3197da105df6f74e6176c5b410bc2df2fd29a57"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:94cdff45173b1919350601f82d61365e792895e3c3a3443cf99819e6fbf717a5"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ed3e4b4e1e6de75fdc16d3259098de7c6571b1a6cc863b1a49e7d3d53e036070"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5b2f8a31bd43e0f18172d8ac82347c8f37ef3e0b414431157718aa234991b28"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:09b89ddc95c248ee788328528e6a2996e09eaccddeeb82a5356e92645733be35"}, - {file = "Pillow-9.4.0-cp310-cp310-win32.whl", hash = "sha256:f09598b416ba39a8f489c124447b007fe865f786a89dbfa48bb5cf395693132a"}, - {file = "Pillow-9.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6e78171be3fb7941f9910ea15b4b14ec27725865a73c15277bc39f5ca4f8391"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3fa1284762aacca6dc97474ee9c16f83990b8eeb6697f2ba17140d54b453e133"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eaef5d2de3c7e9b21f1e762f289d17b726c2239a42b11e25446abf82b26ac132"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4dfdae195335abb4e89cc9762b2edc524f3c6e80d647a9a81bf81e17e3fb6f0"}, - {file = 
"Pillow-9.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6abfb51a82e919e3933eb137e17c4ae9c0475a25508ea88993bb59faf82f3b35"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:451f10ef963918e65b8869e17d67db5e2f4ab40e716ee6ce7129b0cde2876eab"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:6663977496d616b618b6cfa43ec86e479ee62b942e1da76a2c3daa1c75933ef4"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:60e7da3a3ad1812c128750fc1bc14a7ceeb8d29f77e0a2356a8fb2aa8925287d"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:19005a8e58b7c1796bc0167862b1f54a64d3b44ee5d48152b06bb861458bc0f8"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f715c32e774a60a337b2bb8ad9839b4abf75b267a0f18806f6f4f5f1688c4b5a"}, - {file = "Pillow-9.4.0-cp311-cp311-win32.whl", hash = "sha256:b222090c455d6d1a64e6b7bb5f4035c4dff479e22455c9eaa1bdd4c75b52c80c"}, - {file = "Pillow-9.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:ba6612b6548220ff5e9df85261bddc811a057b0b465a1226b39bfb8550616aee"}, - {file = "Pillow-9.4.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5f532a2ad4d174eb73494e7397988e22bf427f91acc8e6ebf5bb10597b49c493"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dd5a9c3091a0f414a963d427f920368e2b6a4c2f7527fdd82cde8ef0bc7a327"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef21af928e807f10bf4141cad4746eee692a0dd3ff56cfb25fce076ec3cc8abe"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:847b114580c5cc9ebaf216dd8c8dbc6b00a3b7ab0131e173d7120e6deade1f57"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:653d7fb2df65efefbcbf81ef5fe5e5be931f1ee4332c2893ca638c9b11a409c4"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:46f39cab8bbf4a384ba7cb0bc8bae7b7062b6a11cfac1ca4bc144dea90d4a9f5"}, - {file = "Pillow-9.4.0-cp37-cp37m-win32.whl", hash = "sha256:7ac7594397698f77bce84382929747130765f66406dc2cd8b4ab4da68ade4c6e"}, - {file = "Pillow-9.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:46c259e87199041583658457372a183636ae8cd56dbf3f0755e0f376a7f9d0e6"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:0e51f608da093e5d9038c592b5b575cadc12fd748af1479b5e858045fff955a9"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:765cb54c0b8724a7c12c55146ae4647e0274a839fb6de7bcba841e04298e1011"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:519e14e2c49fcf7616d6d2cfc5c70adae95682ae20f0395e9280db85e8d6c4df"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d197df5489004db87d90b918033edbeee0bd6df3848a204bca3ff0a903bef837"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0845adc64fe9886db00f5ab68c4a8cd933ab749a87747555cec1c95acea64b0b"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:e1339790c083c5a4de48f688b4841f18df839eb3c9584a770cbd818b33e26d5d"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:a96e6e23f2b79433390273eaf8cc94fec9c6370842e577ab10dabdcc7ea0a66b"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = 
"sha256:7cfc287da09f9d2a7ec146ee4d72d6ea1342e770d975e49a8621bf54eaa8f30f"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d7081c084ceb58278dd3cf81f836bc818978c0ccc770cbbb202125ddabec6628"}, - {file = "Pillow-9.4.0-cp38-cp38-win32.whl", hash = "sha256:df41112ccce5d47770a0c13651479fbcd8793f34232a2dd9faeccb75eb5d0d0d"}, - {file = "Pillow-9.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7a21222644ab69ddd9967cfe6f2bb420b460dae4289c9d40ff9a4896e7c35c9a"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0f3269304c1a7ce82f1759c12ce731ef9b6e95b6df829dccd9fe42912cc48569"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cb362e3b0976dc994857391b776ddaa8c13c28a16f80ac6522c23d5257156bed"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2e0f87144fcbbe54297cae708c5e7f9da21a4646523456b00cc956bd4c65815"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:28676836c7796805914b76b1837a40f76827ee0d5398f72f7dcc634bae7c6264"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0884ba7b515163a1a05440a138adeb722b8a6ae2c2b33aea93ea3118dd3a899e"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:53dcb50fbdc3fb2c55431a9b30caeb2f7027fcd2aeb501459464f0214200a503"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:e8c5cf126889a4de385c02a2c3d3aba4b00f70234bfddae82a5eaa3ee6d5e3e6"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6c6b1389ed66cdd174d040105123a5a1bc91d0aa7059c7261d20e583b6d8cbd2"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0dd4c681b82214b36273c18ca7ee87065a50e013112eea7d78c7a1b89a739153"}, - {file = "Pillow-9.4.0-cp39-cp39-win32.whl", hash = "sha256:6d9dfb9959a3b0039ee06c1a1a90dc23bac3b430842dcb97908ddde05870601c"}, - {file = "Pillow-9.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:54614444887e0d3043557d9dbc697dbb16cfb5a35d672b7a0fcc1ed0cf1c600b"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b9b752ab91e78234941e44abdecc07f1f0d8f51fb62941d32995b8161f68cfe5"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3b56206244dc8711f7e8b7d6cad4663917cd5b2d950799425076681e8766286"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aabdab8ec1e7ca7f1434d042bf8b1e92056245fb179790dc97ed040361f16bfd"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db74f5562c09953b2c5f8ec4b7dfd3f5421f31811e97d1dbc0a7c93d6e3a24df"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e9d7747847c53a16a729b6ee5e737cf170f7a16611c143d95aa60a109a59c336"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b52ff4f4e002f828ea6483faf4c4e8deea8d743cf801b74910243c58acc6eda3"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:575d8912dca808edd9acd6f7795199332696d3469665ef26163cd090fa1f8bfa"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c4ed2ff6760e98d262e0cc9c9a7f7b8a9f61aa4d47c58835cdaf7b0b8811bb"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e621b0246192d3b9cb1dc62c78cfa4c6f6d2ddc0ec207d43c0dedecb914f152a"}, - {file = 
"Pillow-9.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8f127e7b028900421cad64f51f75c051b628db17fb00e099eb148761eed598c9"}, - {file = "Pillow-9.4.0.tar.gz", hash = "sha256:a1c2d7780448eb93fbcc3789bf3916aa5720d942e37945f4056680317f1cd23e"}, + {file = "Pillow-9.5.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:ace6ca218308447b9077c14ea4ef381ba0b67ee78d64046b3f19cf4e1139ad16"}, + {file = "Pillow-9.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d3d403753c9d5adc04d4694d35cf0391f0f3d57c8e0030aac09d7678fa8030aa"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ba1b81ee69573fe7124881762bb4cd2e4b6ed9dd28c9c60a632902fe8db8b38"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fe7e1c262d3392afcf5071df9afa574544f28eac825284596ac6db56e6d11062"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f36397bf3f7d7c6a3abdea815ecf6fd14e7fcd4418ab24bae01008d8d8ca15e"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:252a03f1bdddce077eff2354c3861bf437c892fb1832f75ce813ee94347aa9b5"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:85ec677246533e27770b0de5cf0f9d6e4ec0c212a1f89dfc941b64b21226009d"}, + {file = "Pillow-9.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b416f03d37d27290cb93597335a2f85ed446731200705b22bb927405320de903"}, + {file = "Pillow-9.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1781a624c229cb35a2ac31cc4a77e28cafc8900733a864870c49bfeedacd106a"}, + {file = "Pillow-9.5.0-cp310-cp310-win32.whl", hash = "sha256:8507eda3cd0608a1f94f58c64817e83ec12fa93a9436938b191b80d9e4c0fc44"}, + {file = "Pillow-9.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:d3c6b54e304c60c4181da1c9dadf83e4a54fd266a99c70ba646a9baa626819eb"}, + {file = "Pillow-9.5.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:7ec6f6ce99dab90b52da21cf0dc519e21095e332ff3b399a357c187b1a5eee32"}, + {file = "Pillow-9.5.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:560737e70cb9c6255d6dcba3de6578a9e2ec4b573659943a5e7e4af13f298f5c"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:96e88745a55b88a7c64fa49bceff363a1a27d9a64e04019c2281049444a571e3"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d9c206c29b46cfd343ea7cdfe1232443072bbb270d6a46f59c259460db76779a"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cfcc2c53c06f2ccb8976fb5c71d448bdd0a07d26d8e07e321c103416444c7ad1"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:a0f9bb6c80e6efcde93ffc51256d5cfb2155ff8f78292f074f60f9e70b942d99"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8d935f924bbab8f0a9a28404422da8af4904e36d5c33fc6f677e4c4485515625"}, + {file = "Pillow-9.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fed1e1cf6a42577953abbe8e6cf2fe2f566daebde7c34724ec8803c4c0cda579"}, + {file = "Pillow-9.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c1170d6b195555644f0616fd6ed929dfcf6333b8675fcca044ae5ab110ded296"}, + {file = "Pillow-9.5.0-cp311-cp311-win32.whl", hash = "sha256:54f7102ad31a3de5666827526e248c3530b3a33539dbda27c6843d19d72644ec"}, + {file = "Pillow-9.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:cfa4561277f677ecf651e2b22dc43e8f5368b74a25a8f7d1d4a3a243e573f2d4"}, + {file = 
"Pillow-9.5.0-cp311-cp311-win_arm64.whl", hash = "sha256:965e4a05ef364e7b973dd17fc765f42233415974d773e82144c9bbaaaea5d089"}, + {file = "Pillow-9.5.0-cp312-cp312-win32.whl", hash = "sha256:22baf0c3cf0c7f26e82d6e1adf118027afb325e703922c8dfc1d5d0156bb2eeb"}, + {file = "Pillow-9.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:432b975c009cf649420615388561c0ce7cc31ce9b2e374db659ee4f7d57a1f8b"}, + {file = "Pillow-9.5.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5d4ebf8e1db4441a55c509c4baa7a0587a0210f7cd25fcfe74dbbce7a4bd1906"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:375f6e5ee9620a271acb6820b3d1e94ffa8e741c0601db4c0c4d3cb0a9c224bf"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99eb6cafb6ba90e436684e08dad8be1637efb71c4f2180ee6b8f940739406e78"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dfaaf10b6172697b9bceb9a3bd7b951819d1ca339a5ef294d1f1ac6d7f63270"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:763782b2e03e45e2c77d7779875f4432e25121ef002a41829d8868700d119392"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:35f6e77122a0c0762268216315bf239cf52b88865bba522999dc38f1c52b9b47"}, + {file = "Pillow-9.5.0-cp37-cp37m-win32.whl", hash = "sha256:aca1c196f407ec7cf04dcbb15d19a43c507a81f7ffc45b690899d6a76ac9fda7"}, + {file = "Pillow-9.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322724c0032af6692456cd6ed554bb85f8149214d97398bb80613b04e33769f6"}, + {file = "Pillow-9.5.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:a0aa9417994d91301056f3d0038af1199eb7adc86e646a36b9e050b06f526597"}, + {file = "Pillow-9.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f8286396b351785801a976b1e85ea88e937712ee2c3ac653710a4a57a8da5d9c"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c830a02caeb789633863b466b9de10c015bded434deb3ec87c768e53752ad22a"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fbd359831c1657d69bb81f0db962905ee05e5e9451913b18b831febfe0519082"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f8fc330c3370a81bbf3f88557097d1ea26cd8b019d6433aa59f71195f5ddebbf"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:7002d0797a3e4193c7cdee3198d7c14f92c0836d6b4a3f3046a64bd1ce8df2bf"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:229e2c79c00e85989a34b5981a2b67aa079fd08c903f0aaead522a1d68d79e51"}, + {file = "Pillow-9.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9adf58f5d64e474bed00d69bcd86ec4bcaa4123bfa70a65ce72e424bfb88ed96"}, + {file = "Pillow-9.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:662da1f3f89a302cc22faa9f14a262c2e3951f9dbc9617609a47521c69dd9f8f"}, + {file = "Pillow-9.5.0-cp38-cp38-win32.whl", hash = "sha256:6608ff3bf781eee0cd14d0901a2b9cc3d3834516532e3bd673a0a204dc8615fc"}, + {file = "Pillow-9.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:e49eb4e95ff6fd7c0c402508894b1ef0e01b99a44320ba7d8ecbabefddcc5569"}, + {file = "Pillow-9.5.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:482877592e927fd263028c105b36272398e3e1be3269efda09f6ba21fd83ec66"}, + {file = "Pillow-9.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3ded42b9ad70e5f1754fb7c2e2d6465a9c842e41d178f262e08b8c85ed8a1d8e"}, + {file = 
"Pillow-9.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c446d2245ba29820d405315083d55299a796695d747efceb5717a8b450324115"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8aca1152d93dcc27dc55395604dcfc55bed5f25ef4c98716a928bacba90d33a3"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:608488bdcbdb4ba7837461442b90ea6f3079397ddc968c31265c1e056964f1ef"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:60037a8db8750e474af7ffc9faa9b5859e6c6d0a50e55c45576bf28be7419705"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:07999f5834bdc404c442146942a2ecadd1cb6292f5229f4ed3b31e0a108746b1"}, + {file = "Pillow-9.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a127ae76092974abfbfa38ca2d12cbeddcdeac0fb71f9627cc1135bedaf9d51a"}, + {file = "Pillow-9.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:489f8389261e5ed43ac8ff7b453162af39c3e8abd730af8363587ba64bb2e865"}, + {file = "Pillow-9.5.0-cp39-cp39-win32.whl", hash = "sha256:9b1af95c3a967bf1da94f253e56b6286b50af23392a886720f563c547e48e964"}, + {file = "Pillow-9.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:77165c4a5e7d5a284f10a6efaa39a0ae8ba839da344f20b111d62cc932fa4e5d"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:833b86a98e0ede388fa29363159c9b1a294b0905b5128baf01db683672f230f5"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aaf305d6d40bd9632198c766fb64f0c1a83ca5b667f16c1e79e1661ab5060140"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0852ddb76d85f127c135b6dd1f0bb88dbb9ee990d2cd9aa9e28526c93e794fba"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:91ec6fe47b5eb5a9968c79ad9ed78c342b1f97a091677ba0e012701add857829"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:cb841572862f629b99725ebaec3287fc6d275be9b14443ea746c1dd325053cbd"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:c380b27d041209b849ed246b111b7c166ba36d7933ec6e41175fd15ab9eb1572"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c9af5a3b406a50e313467e3565fc99929717f780164fe6fbb7704edba0cebbe"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5671583eab84af046a397d6d0ba25343c00cd50bce03787948e0fff01d4fd9b1"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:84a6f19ce086c1bf894644b43cd129702f781ba5751ca8572f08aa40ef0ab7b7"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:1e7723bd90ef94eda669a3c2c19d549874dd5badaeefabefd26053304abe5799"}, + {file = "Pillow-9.5.0.tar.gz", hash = "sha256:bf548479d336726d7a0eceb6e767e179fbde37833ae42794602631a070d630f1"}, ] [package.extras] -docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"] +docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-removed-in", "sphinxext-opengraph"] tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"] [[package]] name = "platformdirs" -version = "2.6.2" +version = "3.5.1" description = "A 
small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "platformdirs-2.6.2-py3-none-any.whl", hash = "sha256:83c8f6d04389165de7c9b6f0c682439697887bca0aa2f1c87ef1826be3584490"}, - {file = "platformdirs-2.6.2.tar.gz", hash = "sha256:e1fea1fe471b9ff8332e229df3cb7de4f53eeea4998d3b6bfff542115e998bd2"}, + {file = "platformdirs-3.5.1-py3-none-any.whl", hash = "sha256:e2378146f1964972c03c085bb5662ae80b2b8c06226c54b2ff4aa9483e8a13a5"}, + {file = "platformdirs-3.5.1.tar.gz", hash = "sha256:412dae91f52a6f84830f39a8078cecd0e866cb72294a5c66808e74d5e88d251f"}, ] [package.extras] -docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"] -test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"] +docs = ["furo (>=2023.3.27)", "proselint (>=0.13)", "sphinx (>=6.2.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"] +test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"] [[package]] name = "pluggy" version = "1.0.0" description = "plugin and hook calling mechanisms for python" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -1979,7 +2005,6 @@ testing = ["pytest", "pytest-benchmark"] name = "ply" version = "3.11" description = "Python Lex & Yacc" -category = "main" optional = false python-versions = "*" files = [ @@ -1988,15 +2013,28 @@ files = [ ] [[package]] -name = "pre-commit" -version = "2.21.0" -description = "A framework for managing and maintaining multi-language pre-commit hooks." -category = "dev" +name = "pockets" +version = "0.9.1" +description = "A collection of helpful Python tools!" optional = false -python-versions = ">=3.7" +python-versions = "*" files = [ - {file = "pre_commit-2.21.0-py2.py3-none-any.whl", hash = "sha256:e2f91727039fc39a92f58a588a25b87f936de6567eed4f0e673e0507edc75bad"}, - {file = "pre_commit-2.21.0.tar.gz", hash = "sha256:31ef31af7e474a8d8995027fefdfcf509b5c913ff31f2015b4ec4beb26a6f658"}, + {file = "pockets-0.9.1-py2.py3-none-any.whl", hash = "sha256:68597934193c08a08eb2bf6a1d85593f627c22f9b065cc727a4f03f669d96d86"}, + {file = "pockets-0.9.1.tar.gz", hash = "sha256:9320f1a3c6f7a9133fe3b571f283bcf3353cd70249025ae8d618e40e9f7e92b3"}, +] + +[package.dependencies] +six = ">=1.5.2" + +[[package]] +name = "pre-commit" +version = "3.3.2" +description = "A framework for managing and maintaining multi-language pre-commit hooks." 
+optional = false +python-versions = ">=3.8" +files = [ + {file = "pre_commit-3.3.2-py2.py3-none-any.whl", hash = "sha256:8056bc52181efadf4aac792b1f4f255dfd2fb5a350ded7335d251a68561e8cb6"}, + {file = "pre_commit-3.3.2.tar.gz", hash = "sha256:66e37bec2d882de1f17f88075047ef8962581f83c234ac08da21a0c58953d1f0"}, ] [package.dependencies] @@ -2008,45 +2046,41 @@ virtualenv = ">=20.10.0" [[package]] name = "prefixed" -version = "0.6.0" +version = "0.7.0" description = "Prefixed alternative numeric library" -category = "main" optional = false python-versions = "*" files = [ - {file = "prefixed-0.6.0-py2.py3-none-any.whl", hash = "sha256:5ab094773dc71df68cc78151c81510b9521dcc6b58a4acb78442b127d4e400fa"}, - {file = "prefixed-0.6.0.tar.gz", hash = "sha256:b39fbfac72618fa1eeb5b3fd9ed1341f10dd90df75499cb4c38a6c3ef47cdd94"}, + {file = "prefixed-0.7.0-py2.py3-none-any.whl", hash = "sha256:537b0e4ff4516c4578f277a41d7104f769d6935ae9cdb0f88fed82ec7b3c0ca5"}, + {file = "prefixed-0.7.0.tar.gz", hash = "sha256:0b54d15e602eb8af4ac31b1db21a37ea95ce5890e0741bb0dd9ded493cefbbe9"}, ] [[package]] name = "protobuf" -version = "4.21.12" +version = "4.23.2" description = "" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "protobuf-4.21.12-cp310-abi3-win32.whl", hash = "sha256:b135410244ebe777db80298297a97fbb4c862c881b4403b71bac9d4107d61fd1"}, - {file = "protobuf-4.21.12-cp310-abi3-win_amd64.whl", hash = "sha256:89f9149e4a0169cddfc44c74f230d7743002e3aa0b9472d8c28f0388102fc4c2"}, - {file = "protobuf-4.21.12-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:299ea899484ee6f44604deb71f424234f654606b983cb496ea2a53e3c63ab791"}, - {file = "protobuf-4.21.12-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:d1736130bce8cf131ac7957fa26880ca19227d4ad68b4888b3be0dea1f95df97"}, - {file = "protobuf-4.21.12-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:78a28c9fa223998472886c77042e9b9afb6fe4242bd2a2a5aced88e3f4422aa7"}, - {file = "protobuf-4.21.12-cp37-cp37m-win32.whl", hash = "sha256:3d164928ff0727d97022957c2b849250ca0e64777ee31efd7d6de2e07c494717"}, - {file = "protobuf-4.21.12-cp37-cp37m-win_amd64.whl", hash = "sha256:f45460f9ee70a0ec1b6694c6e4e348ad2019275680bd68a1d9314b8c7e01e574"}, - {file = "protobuf-4.21.12-cp38-cp38-win32.whl", hash = "sha256:6ab80df09e3208f742c98443b6166bcb70d65f52cfeb67357d52032ea1ae9bec"}, - {file = "protobuf-4.21.12-cp38-cp38-win_amd64.whl", hash = "sha256:1f22ac0ca65bb70a876060d96d914dae09ac98d114294f77584b0d2644fa9c30"}, - {file = "protobuf-4.21.12-cp39-cp39-win32.whl", hash = "sha256:27f4d15021da6d2b706ddc3860fac0a5ddaba34ab679dc182b60a8bb4e1121cc"}, - {file = "protobuf-4.21.12-cp39-cp39-win_amd64.whl", hash = "sha256:237216c3326d46808a9f7c26fd1bd4b20015fb6867dc5d263a493ef9a539293b"}, - {file = "protobuf-4.21.12-py2.py3-none-any.whl", hash = "sha256:a53fd3f03e578553623272dc46ac2f189de23862e68565e83dde203d41b76fc5"}, - {file = "protobuf-4.21.12-py3-none-any.whl", hash = "sha256:b98d0148f84e3a3c569e19f52103ca1feacdac0d2df8d6533cf983d1fda28462"}, - {file = "protobuf-4.21.12.tar.gz", hash = "sha256:7cd532c4566d0e6feafecc1059d04c7915aec8e182d1cf7adee8b24ef1e2e6ab"}, + {file = "protobuf-4.23.2-cp310-abi3-win32.whl", hash = "sha256:384dd44cb4c43f2ccddd3645389a23ae61aeb8cfa15ca3a0f60e7c3ea09b28b3"}, + {file = "protobuf-4.23.2-cp310-abi3-win_amd64.whl", hash = "sha256:09310bce43353b46d73ba7e3bca78273b9bc50349509b9698e64d288c6372c2a"}, + {file = "protobuf-4.23.2-cp37-abi3-macosx_10_9_universal2.whl", hash = 
"sha256:b2cfab63a230b39ae603834718db74ac11e52bccaaf19bf20f5cce1a84cf76df"}, + {file = "protobuf-4.23.2-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:c52cfcbfba8eb791255edd675c1fe6056f723bf832fa67f0442218f8817c076e"}, + {file = "protobuf-4.23.2-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:86df87016d290143c7ce3be3ad52d055714ebaebb57cc659c387e76cfacd81aa"}, + {file = "protobuf-4.23.2-cp37-cp37m-win32.whl", hash = "sha256:281342ea5eb631c86697e1e048cb7e73b8a4e85f3299a128c116f05f5c668f8f"}, + {file = "protobuf-4.23.2-cp37-cp37m-win_amd64.whl", hash = "sha256:ce744938406de1e64b91410f473736e815f28c3b71201302612a68bf01517fea"}, + {file = "protobuf-4.23.2-cp38-cp38-win32.whl", hash = "sha256:6c081863c379bb1741be8f8193e893511312b1d7329b4a75445d1ea9955be69e"}, + {file = "protobuf-4.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:25e3370eda26469b58b602e29dff069cfaae8eaa0ef4550039cc5ef8dc004511"}, + {file = "protobuf-4.23.2-cp39-cp39-win32.whl", hash = "sha256:efabbbbac1ab519a514579ba9ec52f006c28ae19d97915951f69fa70da2c9e91"}, + {file = "protobuf-4.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:54a533b971288af3b9926e53850c7eb186886c0c84e61daa8444385a4720297f"}, + {file = "protobuf-4.23.2-py3-none-any.whl", hash = "sha256:8da6070310d634c99c0db7df48f10da495cc283fd9e9234877f0cd182d43ab7f"}, + {file = "protobuf-4.23.2.tar.gz", hash = "sha256:20874e7ca4436f683b64ebdbee2129a5a2c301579a67d1a7dda2cdf62fb7f5f7"}, ] [[package]] name = "py" version = "1.11.0" description = "library with cross-python path, ini-parsing, io, code, log facilities" -category = "dev" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" files = [ @@ -2054,61 +2088,46 @@ files = [ {file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"}, ] -[[package]] -name = "pyaaf2" -version = "1.4.0" -description = "A python module for reading and writing advanced authoring format files" -category = "main" -optional = false -python-versions = "*" -files = [ - {file = "pyaaf2-1.4.0.tar.gz", hash = "sha256:160d3c26c7cfef7176d0bdb0e55772156570435982c3abfa415e89639f76e71b"}, -] - [[package]] name = "pyasn1" -version = "0.4.8" -description = "ASN.1 types and codecs" -category = "main" +version = "0.5.0" +description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)" optional = false -python-versions = "*" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" files = [ - {file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"}, - {file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"}, + {file = "pyasn1-0.5.0-py2.py3-none-any.whl", hash = "sha256:87a2121042a1ac9358cabcaf1d07680ff97ee6404333bacca15f76aa8ad01a57"}, + {file = "pyasn1-0.5.0.tar.gz", hash = "sha256:97b7290ca68e62a832558ec3976f15cbf911bf5d7c7039d8b861c2a0ece69fde"}, ] [[package]] name = "pyasn1-modules" -version = "0.2.8" -description = "A collection of ASN.1-based protocols modules." 
-category = "main" +version = "0.3.0" +description = "A collection of ASN.1-based protocols modules" optional = false -python-versions = "*" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" files = [ - {file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"}, - {file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"}, + {file = "pyasn1_modules-0.3.0-py2.py3-none-any.whl", hash = "sha256:d3ccd6ed470d9ffbc716be08bd90efbd44d0734bc9303818f7336070984a162d"}, + {file = "pyasn1_modules-0.3.0.tar.gz", hash = "sha256:5bd01446b736eb9d31512a30d46c1ac3395d676c6f3cafa4c03eb54b9925631c"}, ] [package.dependencies] -pyasn1 = ">=0.4.6,<0.5.0" +pyasn1 = ">=0.4.6,<0.6.0" [[package]] name = "pyblish-base" -version = "1.8.8" +version = "1.8.11" description = "Plug-in driven automation framework for content" -category = "main" optional = false python-versions = "*" files = [ - {file = "pyblish-base-1.8.8.tar.gz", hash = "sha256:85a2c034dbb86345bf95018f5b7b3c36c7dda29ea4d93c10d167f147b69a7b22"}, - {file = "pyblish_base-1.8.8-py2.py3-none-any.whl", hash = "sha256:67ea253a05d007ab4a175e44e778928ea7bdb0e9707573e1100417bbf0451a53"}, + {file = "pyblish-base-1.8.11.tar.gz", hash = "sha256:86dfeec0567430eb7eb25f89a18312054147a729ec66f6ac8c7e421fd15b66e1"}, + {file = "pyblish_base-1.8.11-py2.py3-none-any.whl", hash = "sha256:c321be7020c946fe9dfa11941241bd985a572c5009198b4f9810e5afad1f0b4b"}, ] [[package]] name = "pycodestyle" version = "2.10.0" description = "Python style guide checker" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -2120,7 +2139,6 @@ files = [ name = "pycparser" version = "2.21" description = "C parser in Python" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -2130,26 +2148,25 @@ files = [ [[package]] name = "pydocstyle" -version = "3.0.0" +version = "6.3.0" description = "Python docstring style checker" -category = "dev" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "pydocstyle-3.0.0-py2-none-any.whl", hash = "sha256:2258f9b0df68b97bf3a6c29003edc5238ff8879f1efb6f1999988d934e432bd8"}, - {file = "pydocstyle-3.0.0-py3-none-any.whl", hash = "sha256:ed79d4ec5e92655eccc21eb0c6cf512e69512b4a97d215ace46d17e4990f2039"}, - {file = "pydocstyle-3.0.0.tar.gz", hash = "sha256:5741c85e408f9e0ddf873611085e819b809fca90b619f5fd7f34bd4959da3dd4"}, + {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, + {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, ] [package.dependencies] -six = "*" -snowballstemmer = "*" +snowballstemmer = ">=2.2.0" + +[package.extras] +toml = ["tomli (>=1.2.3)"] [[package]] name = "pyflakes" version = "3.0.1" description = "passive checker of Python programs" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -2159,14 +2176,13 @@ files = [ [[package]] name = "pygments" -version = "2.14.0" +version = "2.15.1" description = "Pygments is a syntax highlighting package written in Python." 
-category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"}, - {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"}, + {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"}, + {file = "Pygments-2.15.1.tar.gz", hash = "sha256:8ace4d3c1dd481894b2005f560ead0f9f19ee64fe983366be1a21e171d12775c"}, ] [package.extras] @@ -2174,18 +2190,17 @@ plugins = ["importlib-metadata"] [[package]] name = "pylint" -version = "2.15.10" +version = "2.17.4" description = "python code static checker" -category = "dev" optional = false python-versions = ">=3.7.2" files = [ - {file = "pylint-2.15.10-py3-none-any.whl", hash = "sha256:9df0d07e8948a1c3ffa3b6e2d7e6e63d9fb457c5da5b961ed63106594780cc7e"}, - {file = "pylint-2.15.10.tar.gz", hash = "sha256:b3dc5ef7d33858f297ac0d06cc73862f01e4f2e74025ec3eff347ce0bc60baf5"}, + {file = "pylint-2.17.4-py3-none-any.whl", hash = "sha256:7a1145fb08c251bdb5cca11739722ce64a63db479283d10ce718b2460e54123c"}, + {file = "pylint-2.17.4.tar.gz", hash = "sha256:5dcf1d9e19f41f38e4e85d10f511e5b9c35e1aa74251bf95cdd8cb23584e2db1"}, ] [package.dependencies] -astroid = ">=2.12.13,<=2.14.0-dev0" +astroid = ">=2.15.4,<=2.17.0-dev0" colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} dill = {version = ">=0.2", markers = "python_version < \"3.11\""} isort = ">=4.2.5,<6" @@ -2203,7 +2218,6 @@ testutils = ["gitpython (>3)"] name = "pymongo" version = "3.13.0" description = "Python driver for MongoDB " -category = "main" optional = false python-versions = "*" files = [ @@ -2332,7 +2346,6 @@ zstd = ["zstandard"] name = "pynacl" version = "1.5.0" description = "Python binding to the Networking and Cryptography (NaCl) library" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -2352,14 +2365,13 @@ files = [ cffi = ">=1.4.1" [package.extras] -docs = ["sphinx (>=1.6.5)", "sphinx_rtd_theme"] +docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"] tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"] [[package]] name = "pynput" version = "1.7.6" description = "Monitor and control user input devices" -category = "main" optional = false python-versions = "*" files = [ @@ -2376,89 +2388,84 @@ six = "*" [[package]] name = "pyobjc-core" -version = "9.0.1" +version = "9.1.1" description = "Python<->ObjC Interoperability Module" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-core-9.0.1.tar.gz", hash = "sha256:5ce1510bb0bdff527c597079a42b2e13a19b7592e76850be7960a2775b59c929"}, - {file = "pyobjc_core-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b614406d46175b1438a9596b664bf61952323116704d19bc1dea68052a0aad98"}, - {file = "pyobjc_core-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:bd397e729f6271c694fb70df8f5d3d3c9b2f2b8ac02fbbdd1757ca96027b94bb"}, - {file = "pyobjc_core-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:d919934eaa6d1cf1505ff447a5c2312be4c5651efcb694eb9f59e86f5bd25e6b"}, - {file = "pyobjc_core-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:67d67ca8b164f38ceacce28a18025845c3ec69613f3301935d4d2c4ceb22e3fd"}, - {file = "pyobjc_core-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:39d11d71f6161ac0bd93cffc8ea210bb0178b56d16a7408bf74283d6ecfa7430"}, - {file = 
"pyobjc_core-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:25be1c4d530e473ed98b15063b8d6844f0733c98914de6f09fe1f7652b772bbc"}, + {file = "pyobjc-core-9.1.1.tar.gz", hash = "sha256:4b6cb9053b5fcd3c0e76b8c8105a8110786b20f3403c5643a688c5ec51c55c6b"}, + {file = "pyobjc_core-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4bd07049fd9fe5b40e4b7c468af9cf942508387faf383a5acb043d20627bad2c"}, + {file = "pyobjc_core-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:1a8307527621729ff2ab67860e7ed84f76ad0da881b248c2ef31e0da0088e4ba"}, + {file = "pyobjc_core-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:083004d28b92ccb483a41195c600728854843b0486566aba2d6e63eef51f80e6"}, + {file = "pyobjc_core-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d61e9517d451bc062a7fae8b3648f4deba4fa54a24926fa1cf581b90ef4ced5a"}, + {file = "pyobjc_core-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:1626909916603a3b04c07c721cf1af0e0b892cec85bb3db98d05ba024f1786fc"}, + {file = "pyobjc_core-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2dde96462b52e952515d142e2afbb6913624a02c13582047e06211e6c3993728"}, ] [[package]] name = "pyobjc-framework-applicationservices" -version = "9.0.1" +version = "9.1.1" description = "Wrappers for the framework ApplicationServices on macOS" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-ApplicationServices-9.0.1.tar.gz", hash = "sha256:e3a350781fdcab6c1da4343dfc54ae3c0523e59e61147432f61dcfb365752fde"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:c4214febf3cc2e417ae15d45b6502e5c20f1097cd042b025760d019fe69b07b6"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c62693e01ba272fbadcd66677881311d2d63fda84b9662533fcc883c54be76d7"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6829df4dc4cf012bdc221d4e0296d6699b33ca89741569df153989a0c18aa40e"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5af5d12871499c429dd68c5ec4be56c631ec8439aa953c266eed9afdffb5ec2b"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:724da9dfae6ab0505b90340231a685720288caecfcca335b08903102e97a93dc"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8e1dbfc8f482c433ce642724d4bed0c527c7f2f2f8b9ba1ac3f778a68cf1538d"}, + {file = "pyobjc-framework-ApplicationServices-9.1.1.tar.gz", hash = "sha256:50c613bee364150bbd6cd992ca32b0848a780922cb57d112f6a4a56e29802e19"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f9286c05d80a6aafc7388a4c2a35801db9ea6bab960acf2df079110debb659cb"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3db1c79d7420052320529432e8562cd339a7ef0841df83a85bbf3648abb55b6b"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:baf5a0d72c9e2d2a3b402823a2ea53eccdc27b8b9319d61cee7d753a30cb9411"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7cc5aad93bb6b178f838fe9b78cdcf1217c7baab157b1f3525e0acf696cc3490"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = 
"sha256:a03434605873b9f83255a0b16bbc539d06afd77f5969a3b11a1fc293dfd56680"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9f18e9b92674be0e503a2bd451a328693450e6f80ee510bbc375238b14117e24"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" -pyobjc-framework-Cocoa = ">=9.0.1" -pyobjc-framework-Quartz = ">=9.0.1" +pyobjc-core = ">=9.1.1" +pyobjc-framework-Cocoa = ">=9.1.1" +pyobjc-framework-Quartz = ">=9.1.1" [[package]] name = "pyobjc-framework-cocoa" -version = "9.0.1" +version = "9.1.1" description = "Wrappers for the Cocoa frameworks on macOS" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-Cocoa-9.0.1.tar.gz", hash = "sha256:a8b53b3426f94307a58e2f8214dc1094c19afa9dcb96f21be12f937d968b2df3"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:5f94b0f92a62b781e633e58f09bcaded63d612f9b1e15202f5f372ea59e4aebd"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f062c3bb5cc89902e6d164aa9a66ffc03638645dd5f0468b6f525ac997c86e51"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0b374c0a9d32ba4fc5610ab2741cb05a005f1dfb82a47dbf2dbb2b3a34b73ce5"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8928080cebbce91ac139e460d3dfc94c7cb6935be032dcae9c0a51b247f9c2d9"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:9d2bd86a0a98d906f762f5dc59f2fc67cce32ae9633b02ff59ac8c8a33dd862d"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2a41053cbcee30e1e8914efa749c50b70bf782527d5938f2bc2a6393740969ce"}, + {file = "pyobjc-framework-Cocoa-9.1.1.tar.gz", hash = "sha256:345c32b6d1f3db45f635e400f2d0d6c0f0f7349d45ec823f76fc1df43d13caeb"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9176a4276f3b4b4758e9b9ca10698be5341ceffaeaa4fa055133417179e6bc37"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5e1e96fb3461f46ff951413515f2029e21be268b0e033db6abee7b64ec8e93d3"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:083b195c496d30c6b9dd86126a6093c4b95e0138e9b052b13e54103fcc0b4872"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a1b3333b1aa045608848bd68bbab4c31171f36aeeaa2fabeb4527c6f6f1e33cd"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:54c017354671f0d955432986c42218e452ca69906a101c8e7acde8510432303a"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:10c0075688ce95b92caf59e368585fffdcc98c919bc345067af070222f5d01d2"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" +pyobjc-core = ">=9.1.1" [[package]] name = "pyobjc-framework-quartz" -version = "9.0.1" +version = "9.1.1" description = "Wrappers for the Quartz frameworks on macOS" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-Quartz-9.0.1.tar.gz", hash = "sha256:7e2e37fc5c01bbdc37c1355d886e6184d1977043d5a05d1d956573fa8503dac3"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:13a546a2af7c1c5c2bbf88cce6891896a449e92466415ad14d9a5ee93fba6ef3"}, - {file = 
"pyobjc_framework_Quartz-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:93ee6e339ab6928115a92188a0162ec80bf62cd0bd908d54695c1b9f9381ea45"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:066ffbe26de1456f79a6d9467dabd6a3b9ef228318a0ba3f3fedbdbc0e2d3444"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0c9b553be6ef672e0886b0d2c77d1841b1a942c7b1dc9a67f6e1376dc5493513"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:7b39f85d0b747b0a13a11d0d538001b757c82d05e656eab437167b5b118307df"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0bedb6e1b7789d5b24fd5c790f0d53e4c62930313c97a891068bfa0e966ccc0b"}, + {file = "pyobjc-framework-Quartz-9.1.1.tar.gz", hash = "sha256:8d03bc52bd6d90f00f274fd709b82e53dc5dfca19f3fc744997634e03faaa159"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:32602f46353a5eadb0843a0940635c8ec103f47d5b1ce84284604e01c6393fa8"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7b3a56f52f9bb7fbd45c5a5f0de312ee9c104dfce6e1731015048d9e65a95e43"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b3138773dfb269e6e3894e20dcfaf90102bad84ba44aa2bba8683b8426a69cdd"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3b583e6953e9c65525db908c33c1c97cead3ac8aa0cf2759fcc568666a1b7373"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:c3efcbba62e9c5351c2a9469faabb7f400f214cd8cf98f57798d6b6c93c76efb"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a82d43c6c5fe0f5d350cfc97212bef7c572e345aa9c6e23909d23dace6448c99"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" -pyobjc-framework-Cocoa = ">=9.0.1" +pyobjc-core = ">=9.1.1" +pyobjc-framework-Cocoa = ">=9.1.1" [[package]] name = "pyparsing" version = "2.4.7" description = "Python parsing module" -category = "main" optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" files = [ @@ -2470,7 +2477,6 @@ files = [ name = "pysftp" version = "0.2.9" description = "A friendly face on SFTP" -category = "main" optional = false python-versions = "*" files = [ @@ -2484,7 +2490,6 @@ paramiko = ">=1.17" name = "pytest" version = "6.2.5" description = "pytest: simple powerful testing with Python" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -2507,14 +2512,13 @@ testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xm [[package]] name = "pytest-cov" -version = "4.0.0" +version = "4.1.0" description = "Pytest plugin for measuring coverage." 
-category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "pytest-cov-4.0.0.tar.gz", hash = "sha256:996b79efde6433cdbd0088872dbc5fb3ed7fe1578b68cdbba634f14bb8dd0470"}, - {file = "pytest_cov-4.0.0-py3-none-any.whl", hash = "sha256:2feb1b751d66a8bd934e5edfa2e961d11309dc37b73b0eabe73b5945fee20f6b"}, + {file = "pytest-cov-4.1.0.tar.gz", hash = "sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6"}, + {file = "pytest_cov-4.1.0-py3-none-any.whl", hash = "sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a"}, ] [package.dependencies] @@ -2528,7 +2532,6 @@ testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtuale name = "pytest-print" version = "0.3.1" description = "pytest-print adds the printer fixture you can use to print messages to the user (directly to the pytest runner, not stdout)" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -2546,7 +2549,6 @@ test = ["coverage (>=5)"] name = "python-dateutil" version = "2.8.2" description = "Extensions to the standard Python datetime module" -category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" files = [ @@ -2559,50 +2561,44 @@ six = ">=1.5" [[package]] name = "python-engineio" -version = "3.14.2" -description = "Engine.IO server" -category = "main" +version = "4.4.1" +description = "Engine.IO server and client for Python" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "python-engineio-3.14.2.tar.gz", hash = "sha256:eab4553f2804c1ce97054c8b22cf0d5a9ab23128075248b97e1a5b2f29553085"}, - {file = "python_engineio-3.14.2-py2.py3-none-any.whl", hash = "sha256:5a9e6086d192463b04a1428ff1f85b6ba631bbb19d453b144ffc04f530542b84"}, + {file = "python-engineio-4.4.1.tar.gz", hash = "sha256:eb3663ecb300195926b526386f712dff84cd092c818fb7b62eeeda9160120c29"}, + {file = "python_engineio-4.4.1-py3-none-any.whl", hash = "sha256:28ab67f94cba2e5f598cbb04428138fd6bb8b06d3478c939412da445f24f0773"}, ] -[package.dependencies] -six = ">=1.9.0" - [package.extras] asyncio-client = ["aiohttp (>=3.4)"] client = ["requests (>=2.21.0)", "websocket-client (>=0.54.0)"] [[package]] name = "python-socketio" -version = "4.6.1" -description = "Socket.IO server" -category = "main" +version = "5.8.0" +description = "Socket.IO server and client for Python" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "python-socketio-4.6.1.tar.gz", hash = "sha256:cd1f5aa492c1eb2be77838e837a495f117e17f686029ebc03d62c09e33f4fa10"}, - {file = "python_socketio-4.6.1-py2.py3-none-any.whl", hash = "sha256:5a21da53fdbdc6bb6c8071f40e13d100e0b279ad997681c2492478e06f370523"}, + {file = "python-socketio-5.8.0.tar.gz", hash = "sha256:e714f4dddfaaa0cb0e37a1e2deef2bb60590a5b9fea9c343dd8ca5e688416fd9"}, + {file = "python_socketio-5.8.0-py3-none-any.whl", hash = "sha256:7adb8867aac1c2929b9c1429f1c02e12ca4c36b67c807967393e367dfbb01441"}, ] [package.dependencies] -python-engineio = ">=3.13.0,<4" +bidict = ">=0.21.0" +python-engineio = ">=4.3.0" requests = {version = ">=2.21.0", optional = true, markers = "extra == \"client\""} -six = ">=1.9.0" websocket-client = {version = ">=0.54.0", optional = true, markers = "extra == \"client\""} [package.extras] -asyncio-client = ["aiohttp (>=3.4)", "websockets (>=7.0)"] +asyncio-client = ["aiohttp (>=3.4)"] client = ["requests (>=2.21.0)", "websocket-client (>=0.54.0)"] [[package]] name = "python-xlib" version = "0.33" description = 
"Python X Library" -category = "main" optional = false python-versions = "*" files = [ @@ -2617,30 +2613,16 @@ six = ">=1.10.0" name = "python3-xlib" version = "0.15" description = "Python3 X Library" -category = "main" optional = false python-versions = "*" files = [ {file = "python3-xlib-0.15.tar.gz", hash = "sha256:dc4245f3ae4aa5949c1d112ee4723901ade37a96721ba9645f2bfa56e5b383f8"}, ] -[[package]] -name = "pytz" -version = "2022.7.1" -description = "World timezone definitions, modern and historical" -category = "dev" -optional = false -python-versions = "*" -files = [ - {file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"}, - {file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"}, -] - [[package]] name = "pywin32" version = "301" description = "Python for Window Extensions" -category = "main" optional = false python-versions = "*" files = [ @@ -2660,7 +2642,6 @@ files = [ name = "pywin32-ctypes" version = "0.2.0" description = "" -category = "main" optional = false python-versions = "*" files = [ @@ -2672,7 +2653,6 @@ files = [ name = "pyyaml" version = "6.0" description = "YAML parser and emitter for Python" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -2720,21 +2700,22 @@ files = [ [[package]] name = "qt-py" -version = "1.3.7" +version = "1.3.8" description = "Python 2 & 3 compatibility wrapper around all Qt bindings - PySide, PySide2, PyQt4 and PyQt5." -category = "main" optional = false python-versions = "*" files = [ - {file = "Qt.py-1.3.7-py2.py3-none-any.whl", hash = "sha256:150099d1c6f64c9621a2c9d79d45102ec781c30ee30ee69fc082c6e9be7324fe"}, - {file = "Qt.py-1.3.7.tar.gz", hash = "sha256:803c7bdf4d6230f9a466be19d55934a173eabb61406d21cb91e80c2a3f773b1f"}, + {file = "Qt.py-1.3.8-py2.py3-none-any.whl", hash = "sha256:665b9d4cfefaff2d697876d5027e145a0e0b1ba62dda9652ea114db134bc9911"}, + {file = "Qt.py-1.3.8.tar.gz", hash = "sha256:6d330928f7ec8db8e329b19116c52482b6abfaccfa5edef0248e57d012300895"}, ] +[package.dependencies] +types-PySide2 = "*" + [[package]] name = "qtawesome" version = "0.7.3" description = "FontAwesome icons in PyQt and PySide applications" -category = "main" optional = false python-versions = "*" files = [ @@ -2748,14 +2729,13 @@ six = "*" [[package]] name = "qtpy" -version = "2.3.0" +version = "2.3.1" description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)." -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"}, - {file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"}, + {file = "QtPy-2.3.1-py3-none-any.whl", hash = "sha256:5193d20e0b16e4d9d3bc2c642d04d9f4e2c892590bd1b9c92bfe38a95d5a2e12"}, + {file = "QtPy-2.3.1.tar.gz", hash = "sha256:a8c74982d6d172ce124d80cafd39653df78989683f760f2281ba91a6e7b9de8b"}, ] [package.dependencies] @@ -2768,7 +2748,6 @@ test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"] name = "recommonmark" version = "0.7.1" description = "A docutils-compatibility bridge to CommonMark, enabling you to write CommonMark inside of Docutils & Sphinx projects." 
-category = "dev" optional = false python-versions = "*" files = [ @@ -2783,31 +2762,50 @@ sphinx = ">=1.3.1" [[package]] name = "requests" -version = "2.28.1" +version = "2.31.0" description = "Python HTTP for Humans." -category = "main" optional = false -python-versions = ">=3.7, <4" +python-versions = ">=3.7" files = [ - {file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"}, - {file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"}, + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, ] [package.dependencies] certifi = ">=2017.4.17" -charset-normalizer = ">=2,<3" +charset-normalizer = ">=2,<4" idna = ">=2.5,<4" -urllib3 = ">=1.21.1,<1.27" +urllib3 = ">=1.21.1,<3" [package.extras] socks = ["PySocks (>=1.5.6,!=1.5.7)"] use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] +[[package]] +name = "revitron-sphinx-theme" +version = "0.7.2" +description = "Revitron theme for Sphinx" +optional = false +python-versions = "*" +files = [] +develop = false + +[package.dependencies] +sphinx = "*" + +[package.extras] +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"] + +[package.source] +type = "git" +url = "https://github.com/revitron/revitron-sphinx-theme.git" +reference = "master" +resolved_reference = "c0779c66365d9d258d93575ebaff7db9d3aee282" + [[package]] name = "rsa" version = "4.9" description = "Pure-Python RSA implementation" -category = "main" optional = false python-versions = ">=3.6,<4" files = [ @@ -2822,7 +2820,6 @@ pyasn1 = ">=0.1.3" name = "secretstorage" version = "3.3.3" description = "Python bindings to FreeDesktop.org Secret Service API" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -2838,7 +2835,6 @@ jeepney = ">=0.6" name = "semver" version = "2.13.0" description = "Python helper for Semantic Versioning (http://semver.org/)" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -2850,7 +2846,6 @@ files = [ name = "setuptools" version = "65.7.0" description = "Easily download, build, install, upgrade, and uninstall Python packages" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -2867,7 +2862,6 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs ( name = "shotgun-api3" version = "3.3.3" description = "Shotgun Python API" -category = "main" optional = false python-versions = "*" files = [] @@ -2883,7 +2877,6 @@ resolved_reference = "b9f066c0edbea6e0733242e18f32f75489064840" name = "six" version = "1.16.0" description = "Python 2 and 3 compatibility utilities" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" files = [ @@ -2893,25 +2886,23 @@ files = [ [[package]] name = "slack-sdk" -version = "3.19.5" +version = "3.21.3" description = "The Slack API Platform SDK for Python" -category = "main" optional = false python-versions = ">=3.6.0" files = [ - {file = "slack_sdk-3.19.5-py2.py3-none-any.whl", hash = "sha256:0b52bb32a87c71f638b9eb47e228dffeebf89de5e762684ef848276f9f186c84"}, - {file = "slack_sdk-3.19.5.tar.gz", hash = "sha256:47fb4af596243fe6585a92f3034de21eb2104a55cc9fd59a92ef3af17cf9ddd8"}, + {file = "slack_sdk-3.21.3-py2.py3-none-any.whl", hash = 
"sha256:de3c07b92479940b61cd68c566f49fbc9974c8f38f661d26244078f3903bb9cc"}, + {file = "slack_sdk-3.21.3.tar.gz", hash = "sha256:20829bdc1a423ec93dac903470975ebf3bc76fd3fd91a4dadc0eeffc940ecb0c"}, ] [package.extras] -optional = ["SQLAlchemy (>=1,<2)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"] -testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "codecov (>=2,<3)", "databases (>=0.5)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"] +optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"] +testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "databases (>=0.5)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"] [[package]] name = "smmap" version = "5.0.0" description = "A pure Python implementation of a sliding window memory map manager" -category = "dev" optional = false python-versions = ">=3.6" files = [ @@ -2923,7 +2914,6 @@ files = [ name = "snowballstemmer" version = "2.2.0" description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms." -category = "dev" optional = false python-versions = "*" files = [ @@ -2935,7 +2925,6 @@ files = [ name = "speedcopy" version = "2.1.4" description = "Replacement or alternative for python copyfile() utilizing server side copy on network shares for faster copying." 
-category = "main" optional = false python-versions = "*" files = [ @@ -2945,27 +2934,26 @@ files = [ [[package]] name = "sphinx" -version = "6.1.3" +version = "5.3.0" description = "Python documentation generator" -category = "dev" optional = false -python-versions = ">=3.8" +python-versions = ">=3.6" files = [ - {file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"}, - {file = "sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"}, + {file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"}, + {file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"}, ] [package.dependencies] alabaster = ">=0.7,<0.8" babel = ">=2.9" colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} -docutils = ">=0.18,<0.20" +docutils = ">=0.14,<0.20" imagesize = ">=1.3" importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""} Jinja2 = ">=3.0" packaging = ">=21.0" -Pygments = ">=2.13" -requests = ">=2.25.0" +Pygments = ">=2.12" +requests = ">=2.5.0" snowballstemmer = ">=2.0" sphinxcontrib-applehelp = "*" sphinxcontrib-devhelp = "*" @@ -2976,37 +2964,41 @@ sphinxcontrib-serializinghtml = ">=1.1.5" [package.extras] docs = ["sphinxcontrib-websupport"] -lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"] -test = ["cython", "html5lib", "pytest (>=4.6)"] +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"] +test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"] [[package]] -name = "sphinx-rtd-theme" -version = "0.5.1" -description = "Read the Docs theme for Sphinx" -category = "dev" +name = "sphinx-autoapi" +version = "2.1.0" +description = "Sphinx API documentation generator" optional = false -python-versions = "*" +python-versions = ">=3.7" files = [ - {file = "sphinx_rtd_theme-0.5.1-py2.py3-none-any.whl", hash = "sha256:fa6bebd5ab9a73da8e102509a86f3fcc36dec04a0b52ea80e5a033b2aba00113"}, - {file = "sphinx_rtd_theme-0.5.1.tar.gz", hash = "sha256:eda689eda0c7301a80cf122dad28b1861e5605cbf455558f3775e1e8200e83a5"}, + {file = "sphinx-autoapi-2.1.0.tar.gz", hash = "sha256:5b5c58064214d5a846c9c81d23f00990a64654b9bca10213231db54a241bc50f"}, + {file = "sphinx_autoapi-2.1.0-py2.py3-none-any.whl", hash = "sha256:b25c7b2cda379447b8c36b6a0e3bdf76e02fd64f7ca99d41c6cbdf130a01768f"}, ] [package.dependencies] -sphinx = "*" +astroid = ">=2.7" +Jinja2 = "*" +PyYAML = "*" +sphinx = ">=5.2.0" +unidecode = "*" [package.extras] -dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"] +docs = ["sphinx", "sphinx-rtd-theme"] +dotnet = ["sphinxcontrib-dotnetdomain"] +go = ["sphinxcontrib-golangdomain"] [[package]] name = "sphinxcontrib-applehelp" -version = "1.0.3" +version = "1.0.4" description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books" -category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "sphinxcontrib.applehelp-1.0.3-py3-none-any.whl", hash = "sha256:ba0f2a22e6eeada8da6428d0d520215ee8864253f32facf958cca81e426f661d"}, - {file = "sphinxcontrib.applehelp-1.0.3.tar.gz", hash = "sha256:83749f09f6ac843b8cb685277dbc818a8bf2d76cc19602699094fe9a74db529e"}, + {file = 
"sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"}, + {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"}, ] [package.extras] @@ -3017,7 +3009,6 @@ test = ["pytest"] name = "sphinxcontrib-devhelp" version = "1.0.2" description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document." -category = "dev" optional = false python-versions = ">=3.5" files = [ @@ -3031,14 +3022,13 @@ test = ["pytest"] [[package]] name = "sphinxcontrib-htmlhelp" -version = "2.0.0" +version = "2.0.1" description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files" -category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"}, - {file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"}, + {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"}, + {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"}, ] [package.extras] @@ -3049,7 +3039,6 @@ test = ["html5lib", "pytest"] name = "sphinxcontrib-jsmath" version = "1.0.1" description = "A sphinx extension which renders display math in HTML via JavaScript" -category = "dev" optional = false python-versions = ">=3.5" files = [ @@ -3060,11 +3049,25 @@ files = [ [package.extras] test = ["flake8", "mypy", "pytest"] +[[package]] +name = "sphinxcontrib-napoleon" +version = "0.7" +description = "Sphinx \"napoleon\" extension." +optional = false +python-versions = "*" +files = [ + {file = "sphinxcontrib-napoleon-0.7.tar.gz", hash = "sha256:407382beed396e9f2d7f3043fad6afda95719204a1e1a231ac865f40abcbfcf8"}, + {file = "sphinxcontrib_napoleon-0.7-py2.py3-none-any.whl", hash = "sha256:711e41a3974bdf110a484aec4c1a556799eb0b3f3b897521a018ad7e2db13fef"}, +] + +[package.dependencies] +pockets = ">=0.3" +six = ">=1.5.2" + [[package]] name = "sphinxcontrib-qthelp" version = "1.0.3" description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document." -category = "dev" optional = false python-versions = ">=3.5" files = [ @@ -3080,7 +3083,6 @@ test = ["pytest"] name = "sphinxcontrib-serializinghtml" version = "1.1.5" description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)." -category = "dev" optional = false python-versions = ">=3.5" files = [ @@ -3096,7 +3098,6 @@ test = ["pytest"] name = "stone" version = "3.3.1" description = "Stone is an interface description language (IDL) for APIs." -category = "main" optional = false python-versions = "*" files = [ @@ -3113,7 +3114,6 @@ six = ">=1.12.0" name = "termcolor" version = "1.1.0" description = "ANSII Color formatting for output in terminal." 
-category = "main" optional = false python-versions = "*" files = [ @@ -3124,7 +3124,6 @@ files = [ name = "toml" version = "0.10.2" description = "Python Library for Tom's Obvious, Minimal Language" -category = "dev" optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" files = [ @@ -3136,7 +3135,6 @@ files = [ name = "tomli" version = "2.0.1" description = "A lil' TOML parser" -category = "dev" optional = false python-versions = ">=3.7" files = [ @@ -3146,33 +3144,65 @@ files = [ [[package]] name = "tomlkit" -version = "0.11.6" +version = "0.11.8" description = "Style preserving TOML library" -category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "tomlkit-0.11.6-py3-none-any.whl", hash = "sha256:07de26b0d8cfc18f871aec595fda24d95b08fef89d147caa861939f37230bf4b"}, - {file = "tomlkit-0.11.6.tar.gz", hash = "sha256:71b952e5721688937fb02cf9d354dbcf0785066149d2855e44531ebdd2b65d73"}, + {file = "tomlkit-0.11.8-py3-none-any.whl", hash = "sha256:8c726c4c202bdb148667835f68d68780b9a003a9ec34167b6c673b38eff2a171"}, + {file = "tomlkit-0.11.8.tar.gz", hash = "sha256:9330fc7faa1db67b541b28e62018c17d20be733177d290a13b24c62d1614e0c3"}, +] + +[[package]] +name = "types-pyside2" +version = "5.15.2.1.5" +description = "The most accurate stubs for PySide2" +optional = false +python-versions = "*" +files = [ + {file = "types_PySide2-5.15.2.1.5-py2.py3-none-any.whl", hash = "sha256:4bbee2c8f09961101013d05bb5c506b7351b3020494fc8b5c3b73c95014fa1b0"}, ] [[package]] name = "typing-extensions" -version = "4.4.0" +version = "4.6.2" description = "Backported and Experimental Type Hints for Python 3.7+" -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"}, - {file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"}, + {file = "typing_extensions-4.6.2-py3-none-any.whl", hash = "sha256:3a8b36f13dd5fdc5d1b16fe317f5668545de77fa0b8e02006381fd49d731ab98"}, + {file = "typing_extensions-4.6.2.tar.gz", hash = "sha256:06006244c70ac8ee83fa8282cb188f697b8db25bc8b4df07be1873c43897060c"}, +] + +[[package]] +name = "uc-micro-py" +version = "1.0.2" +description = "Micro subset of unicode data files for linkify-it-py projects." 
+optional = false +python-versions = ">=3.7" +files = [ + {file = "uc-micro-py-1.0.2.tar.gz", hash = "sha256:30ae2ac9c49f39ac6dce743bd187fcd2b574b16ca095fa74cd9396795c954c54"}, + {file = "uc_micro_py-1.0.2-py3-none-any.whl", hash = "sha256:8c9110c309db9d9e87302e2f4ad2c3152770930d88ab385cd544e7a7e75f3de0"}, +] + +[package.extras] +test = ["coverage", "pytest", "pytest-cov"] + +[[package]] +name = "unidecode" +version = "1.2.0" +description = "ASCII transliterations of Unicode text" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "Unidecode-1.2.0-py2.py3-none-any.whl", hash = "sha256:12435ef2fc4cdfd9cf1035a1db7e98b6b047fe591892e81f34e94959591fad00"}, + {file = "Unidecode-1.2.0.tar.gz", hash = "sha256:8d73a97d387a956922344f6b74243c2c6771594659778744b2dbdaad8f6b727d"}, ] [[package]] name = "uritemplate" version = "3.0.1" description = "URI templates" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -3182,14 +3212,13 @@ files = [ [[package]] name = "urllib3" -version = "1.26.14" +version = "1.26.16" description = "HTTP library with thread-safe connection pooling, file post, and more." -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" files = [ - {file = "urllib3-1.26.14-py2.py3-none-any.whl", hash = "sha256:75edcdc2f7d85b137124a6c3c9fc3933cdeaa12ecb9a6a959f22797a0feca7e1"}, - {file = "urllib3-1.26.14.tar.gz", hash = "sha256:076907bf8fd355cde77728471316625a4d2f7e713c125f51953bb5b3eecf4f72"}, + {file = "urllib3-1.26.16-py2.py3-none-any.whl", hash = "sha256:8d36afa7616d8ab714608411b4a3b13e58f463aee519024578e062e141dce20f"}, + {file = "urllib3-1.26.16.tar.gz", hash = "sha256:8f135f6502756bde6b2a9b28989df5fbe87c9970cecaa69041edcce7f0589b14"}, ] [package.extras] @@ -3199,30 +3228,28 @@ socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] [[package]] name = "virtualenv" -version = "20.17.1" +version = "20.23.0" description = "Virtual Python Environment builder" -category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "virtualenv-20.17.1-py3-none-any.whl", hash = "sha256:ce3b1684d6e1a20a3e5ed36795a97dfc6af29bc3970ca8dab93e11ac6094b3c4"}, - {file = "virtualenv-20.17.1.tar.gz", hash = "sha256:f8b927684efc6f1cc206c9db297a570ab9ad0e51c16fa9e45487d36d1905c058"}, + {file = "virtualenv-20.23.0-py3-none-any.whl", hash = "sha256:6abec7670e5802a528357fdc75b26b9f57d5d92f29c5462ba0fbe45feacc685e"}, + {file = "virtualenv-20.23.0.tar.gz", hash = "sha256:a85caa554ced0c0afbd0d638e7e2d7b5f92d23478d05d17a76daeac8f279f924"}, ] [package.dependencies] distlib = ">=0.3.6,<1" -filelock = ">=3.4.1,<4" -platformdirs = ">=2.4,<3" +filelock = ">=3.11,<4" +platformdirs = ">=3.2,<4" [package.extras] -docs = ["proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-argparse (>=0.3.2)", "sphinx-rtd-theme (>=1)", "towncrier (>=22.8)"] -testing = ["coverage (>=6.2)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=21.3)", "pytest (>=7.0.1)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.6.1)", "pytest-randomly (>=3.10.3)", "pytest-timeout (>=2.1)"] +docs = ["furo (>=2023.3.27)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=22.12)"] +test = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.3.1)", "pytest-env (>=0.8.1)", 
"pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.10)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=67.7.1)", "time-machine (>=2.9)"] [[package]] name = "wcwidth" version = "0.2.6" description = "Measures the displayed width of unicode strings in a terminal" -category = "main" optional = false python-versions = "*" files = [ @@ -3234,7 +3261,6 @@ files = [ name = "websocket-client" version = "0.59.0" description = "WebSocket client for Python with low level API options" -category = "main" optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -3247,98 +3273,106 @@ six = "*" [[package]] name = "wheel" -version = "0.38.4" +version = "0.40.0" description = "A built-package format for Python" -category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"}, - {file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"}, + {file = "wheel-0.40.0-py3-none-any.whl", hash = "sha256:d236b20e7cb522daf2390fa84c55eea81c5c30190f90f29ae2ca1ad8355bf247"}, + {file = "wheel-0.40.0.tar.gz", hash = "sha256:cd1196f3faee2b31968d626e1731c94f99cbdb67cf5a46e4f5656cbee7738873"}, ] [package.extras] -test = ["pytest (>=3.0.0)"] +test = ["pytest (>=6.0.0)"] [[package]] name = "wrapt" -version = "1.14.1" +version = "1.15.0" description = "Module for decorators, wrappers and monkey patching." -category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" files = [ - {file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"}, - {file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"}, - {file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"}, - {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"}, - {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"}, - {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"}, - {file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"}, - {file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"}, - {file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"}, - {file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"}, - {file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"}, - {file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"}, - {file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"}, - {file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"}, - {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"}, - {file 
= "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"}, - {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"}, - {file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"}, - {file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"}, - {file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"}, - {file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"}, - {file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"}, - {file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"}, - {file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"}, - {file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", 
hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"}, - {file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"}, - {file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"}, - {file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"}, + {file = "wrapt-1.15.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:ca1cccf838cd28d5a0883b342474c630ac48cac5df0ee6eacc9c7290f76b11c1"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e826aadda3cae59295b95343db8f3d965fb31059da7de01ee8d1c40a60398b29"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5fc8e02f5984a55d2c653f5fea93531e9836abbd84342c1d1e17abc4a15084c2"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:96e25c8603a155559231c19c0349245eeb4ac0096fe3c1d0be5c47e075bd4f46"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:40737a081d7497efea35ab9304b829b857f21558acfc7b3272f908d33b0d9d4c"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:f87ec75864c37c4c6cb908d282e1969e79763e0d9becdfe9fe5473b7bb1e5f09"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:1286eb30261894e4c70d124d44b7fd07825340869945c79d05bda53a40caa079"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:493d389a2b63c88ad56cdc35d0fa5752daac56ca755805b1b0c530f785767d5e"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:58d7a75d731e8c63614222bcb21dd992b4ab01a399f1f09dd82af17bbfc2368a"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:21f6d9a0d5b3a207cdf7acf8e58d7d13d463e639f0c7e01d82cdb671e6cb7923"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ce42618f67741d4697684e501ef02f29e758a123aa2d669e2d964ff734ee00ee"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41d07d029dd4157ae27beab04d22b8e261eddfc6ecd64ff7000b10dc8b3a5727"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54accd4b8bc202966bafafd16e69da9d5640ff92389d33d28555c5fd4f25ccb7"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2fbfbca668dd15b744418265a9607baa970c347eefd0db6a518aaf0cfbd153c0"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:76e9c727a874b4856d11a32fb0b389afc61ce8aaf281ada613713ddeadd1cfec"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e20076a211cd6f9b44a6be58f7eeafa7ab5720eb796975d0c03f05b47d89eb90"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a74d56552ddbde46c246b5b89199cb3fd182f9c346c784e1a93e4dc3f5ec9975"}, + {file = "wrapt-1.15.0-cp310-cp310-win32.whl", hash = "sha256:26458da5653aa5b3d8dc8b24192f574a58984c749401f98fff994d41d3f08da1"}, + {file = "wrapt-1.15.0-cp310-cp310-win_amd64.whl", hash = 
"sha256:75760a47c06b5974aa5e01949bf7e66d2af4d08cb8c1d6516af5e39595397f5e"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ba1711cda2d30634a7e452fc79eabcadaffedf241ff206db2ee93dd2c89a60e7"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56374914b132c702aa9aa9959c550004b8847148f95e1b824772d453ac204a72"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a89ce3fd220ff144bd9d54da333ec0de0399b52c9ac3d2ce34b569cf1a5748fb"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bbe623731d03b186b3d6b0d6f51865bf598587c38d6f7b0be2e27414f7f214e"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3abbe948c3cbde2689370a262a8d04e32ec2dd4f27103669a45c6929bcdbfe7c"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b67b819628e3b748fd3c2192c15fb951f549d0f47c0449af0764d7647302fda3"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:7eebcdbe3677e58dd4c0e03b4f2cfa346ed4049687d839adad68cc38bb559c92"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:74934ebd71950e3db69960a7da29204f89624dde411afbfb3b4858c1409b1e98"}, + {file = "wrapt-1.15.0-cp311-cp311-win32.whl", hash = "sha256:bd84395aab8e4d36263cd1b9308cd504f6cf713b7d6d3ce25ea55670baec5416"}, + {file = "wrapt-1.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:a487f72a25904e2b4bbc0817ce7a8de94363bd7e79890510174da9d901c38705"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:4ff0d20f2e670800d3ed2b220d40984162089a6e2c9646fdb09b85e6f9a8fc29"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9ed6aa0726b9b60911f4aed8ec5b8dd7bf3491476015819f56473ffaef8959bd"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:896689fddba4f23ef7c718279e42f8834041a21342d95e56922e1c10c0cc7afb"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:75669d77bb2c071333417617a235324a1618dba66f82a750362eccbe5b61d248"}, + {file = "wrapt-1.15.0-cp35-cp35m-win32.whl", hash = "sha256:fbec11614dba0424ca72f4e8ba3c420dba07b4a7c206c8c8e4e73f2e98f4c559"}, + {file = "wrapt-1.15.0-cp35-cp35m-win_amd64.whl", hash = "sha256:fd69666217b62fa5d7c6aa88e507493a34dec4fa20c5bd925e4bc12fce586639"}, + {file = "wrapt-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b0724f05c396b0a4c36a3226c31648385deb6a65d8992644c12a4963c70326ba"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbeccb1aa40ab88cd29e6c7d8585582c99548f55f9b2581dfc5ba68c59a85752"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38adf7198f8f154502883242f9fe7333ab05a5b02de7d83aa2d88ea621f13364"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:578383d740457fa790fdf85e6d346fda1416a40549fe8db08e5e9bd281c6a475"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:a4cbb9ff5795cd66f0066bdf5947f170f5d63a9274f99bdbca02fd973adcf2a8"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:af5bd9ccb188f6a5fdda9f1f09d9f4c86cc8a539bd48a0bfdc97723970348418"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = 
"sha256:b56d5519e470d3f2fe4aa7585f0632b060d532d0696c5bdfb5e8319e1d0f69a2"}, + {file = "wrapt-1.15.0-cp36-cp36m-win32.whl", hash = "sha256:77d4c1b881076c3ba173484dfa53d3582c1c8ff1f914c6461ab70c8428b796c1"}, + {file = "wrapt-1.15.0-cp36-cp36m-win_amd64.whl", hash = "sha256:077ff0d1f9d9e4ce6476c1a924a3332452c1406e59d90a2cf24aeb29eeac9420"}, + {file = "wrapt-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5c5aa28df055697d7c37d2099a7bc09f559d5053c3349b1ad0c39000e611d317"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3a8564f283394634a7a7054b7983e47dbf39c07712d7b177b37e03f2467a024e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:780c82a41dc493b62fc5884fb1d3a3b81106642c5c5c78d6a0d4cbe96d62ba7e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e169e957c33576f47e21864cf3fc9ff47c223a4ebca8960079b8bd36cb014fd0"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b02f21c1e2074943312d03d243ac4388319f2456576b2c6023041c4d57cd7019"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f2e69b3ed24544b0d3dbe2c5c0ba5153ce50dcebb576fdc4696d52aa22db6034"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d787272ed958a05b2c86311d3a4135d3c2aeea4fc655705f074130aa57d71653"}, + {file = "wrapt-1.15.0-cp37-cp37m-win32.whl", hash = "sha256:02fce1852f755f44f95af51f69d22e45080102e9d00258053b79367d07af39c0"}, + {file = "wrapt-1.15.0-cp37-cp37m-win_amd64.whl", hash = "sha256:abd52a09d03adf9c763d706df707c343293d5d106aea53483e0ec8d9e310ad5e"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cdb4f085756c96a3af04e6eca7f08b1345e94b53af8921b25c72f096e704e145"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:230ae493696a371f1dbffaad3dafbb742a4d27a0afd2b1aecebe52b740167e7f"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63424c681923b9f3bfbc5e3205aafe790904053d42ddcc08542181a30a7a51bd"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d6bcbfc99f55655c3d93feb7ef3800bd5bbe963a755687cbf1f490a71fb7794b"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c99f4309f5145b93eca6e35ac1a988f0dc0a7ccf9ccdcd78d3c0adf57224e62f"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b130fe77361d6771ecf5a219d8e0817d61b236b7d8b37cc045172e574ed219e6"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:96177eb5645b1c6985f5c11d03fc2dbda9ad24ec0f3a46dcce91445747e15094"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d5fe3e099cf07d0fb5a1e23d399e5d4d1ca3e6dfcbe5c8570ccff3e9208274f7"}, + {file = "wrapt-1.15.0-cp38-cp38-win32.whl", hash = "sha256:abd8f36c99512755b8456047b7be10372fca271bf1467a1caa88db991e7c421b"}, + {file = "wrapt-1.15.0-cp38-cp38-win_amd64.whl", hash = "sha256:b06fa97478a5f478fb05e1980980a7cdf2712015493b44d0c87606c1513ed5b1"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2e51de54d4fb8fb50d6ee8327f9828306a959ae394d3e01a1ba8b2f937747d86"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = 
"sha256:0970ddb69bba00670e58955f8019bec4a42d1785db3faa043c33d81de2bf843c"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76407ab327158c510f44ded207e2f76b657303e17cb7a572ffe2f5a8a48aa04d"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cd525e0e52a5ff16653a3fc9e3dd827981917d34996600bbc34c05d048ca35cc"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d37ac69edc5614b90516807de32d08cb8e7b12260a285ee330955604ed9dd29"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:078e2a1a86544e644a68422f881c48b84fef6d18f8c7a957ffd3f2e0a74a0d4a"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2cf56d0e237280baed46f0b5316661da892565ff58309d4d2ed7dba763d984b8"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7dc0713bf81287a00516ef43137273b23ee414fe41a3c14be10dd95ed98a2df9"}, + {file = "wrapt-1.15.0-cp39-cp39-win32.whl", hash = "sha256:46ed616d5fb42f98630ed70c3529541408166c22cdfd4540b88d5f21006b0eff"}, + {file = "wrapt-1.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:eef4d64c650f33347c1f9266fa5ae001440b232ad9b98f1f43dfe7a79435c0a6"}, + {file = "wrapt-1.15.0-py3-none-any.whl", hash = "sha256:64b1df0f83706b4ef4cfb4fb0e4c2669100fd7ecacfb59e091fad300d4e04640"}, + {file = "wrapt-1.15.0.tar.gz", hash = "sha256:d06730c6aed78cee4126234cf2d071e01b44b915e725a6cb439a879ec9754a3a"}, ] [[package]] name = "wsrpc-aiohttp" version = "3.2.0" description = "WSRPC is the RPC over WebSocket for aiohttp" -category = "main" optional = false python-versions = ">3.5.*, <4" files = [ @@ -3357,86 +3391,85 @@ ujson = ["ujson"] [[package]] name = "yarl" -version = "1.8.2" +version = "1.9.2" description = "Yet another URL library" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "yarl-1.8.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:bb81f753c815f6b8e2ddd2eef3c855cf7da193b82396ac013c661aaa6cc6b0a5"}, - {file = "yarl-1.8.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:47d49ac96156f0928f002e2424299b2c91d9db73e08c4cd6742923a086f1c863"}, - {file = "yarl-1.8.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:3fc056e35fa6fba63248d93ff6e672c096f95f7836938241ebc8260e062832fe"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58a3c13d1c3005dbbac5c9f0d3210b60220a65a999b1833aa46bd6677c69b08e"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:10b08293cda921157f1e7c2790999d903b3fd28cd5c208cf8826b3b508026996"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:de986979bbd87272fe557e0a8fcb66fd40ae2ddfe28a8b1ce4eae22681728fef"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c4fcfa71e2c6a3cb568cf81aadc12768b9995323186a10827beccf5fa23d4f8"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ae4d7ff1049f36accde9e1ef7301912a751e5bae0a9d142459646114c70ecba6"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:bf071f797aec5b96abfc735ab97da9fd8f8768b43ce2abd85356a3127909d146"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_i686.whl", hash = 
"sha256:74dece2bfc60f0f70907c34b857ee98f2c6dd0f75185db133770cd67300d505f"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:df60a94d332158b444301c7f569659c926168e4d4aad2cfbf4bce0e8fb8be826"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:63243b21c6e28ec2375f932a10ce7eda65139b5b854c0f6b82ed945ba526bff3"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cfa2bbca929aa742b5084fd4663dd4b87c191c844326fcb21c3afd2d11497f80"}, - {file = "yarl-1.8.2-cp310-cp310-win32.whl", hash = "sha256:b05df9ea7496df11b710081bd90ecc3a3db6adb4fee36f6a411e7bc91a18aa42"}, - {file = "yarl-1.8.2-cp310-cp310-win_amd64.whl", hash = "sha256:24ad1d10c9db1953291f56b5fe76203977f1ed05f82d09ec97acb623a7976574"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2a1fca9588f360036242f379bfea2b8b44cae2721859b1c56d033adfd5893634"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f37db05c6051eff17bc832914fe46869f8849de5b92dc4a3466cd63095d23dfd"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:77e913b846a6b9c5f767b14dc1e759e5aff05502fe73079f6f4176359d832581"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0978f29222e649c351b173da2b9b4665ad1feb8d1daa9d971eb90df08702668a"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:388a45dc77198b2460eac0aca1efd6a7c09e976ee768b0d5109173e521a19daf"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2305517e332a862ef75be8fad3606ea10108662bc6fe08509d5ca99503ac2aee"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42430ff511571940d51e75cf42f1e4dbdded477e71c1b7a17f4da76c1da8ea76"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3150078118f62371375e1e69b13b48288e44f6691c1069340081c3fd12c94d5b"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:c15163b6125db87c8f53c98baa5e785782078fbd2dbeaa04c6141935eb6dab7a"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4d04acba75c72e6eb90745447d69f84e6c9056390f7a9724605ca9c56b4afcc6"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:e7fd20d6576c10306dea2d6a5765f46f0ac5d6f53436217913e952d19237efc4"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:75c16b2a900b3536dfc7014905a128a2bea8fb01f9ee26d2d7d8db0a08e7cb2c"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6d88056a04860a98341a0cf53e950e3ac9f4e51d1b6f61a53b0609df342cc8b2"}, - {file = "yarl-1.8.2-cp311-cp311-win32.whl", hash = "sha256:fb742dcdd5eec9f26b61224c23baea46c9055cf16f62475e11b9b15dfd5c117b"}, - {file = "yarl-1.8.2-cp311-cp311-win_amd64.whl", hash = "sha256:8c46d3d89902c393a1d1e243ac847e0442d0196bbd81aecc94fcebbc2fd5857c"}, - {file = "yarl-1.8.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ceff9722e0df2e0a9e8a79c610842004fa54e5b309fe6d218e47cd52f791d7ef"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f6b4aca43b602ba0f1459de647af954769919c4714706be36af670a5f44c9c1"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1684a9bd9077e922300ecd48003ddae7a7474e0412bea38d4631443a91d61077"}, - {file = 
"yarl-1.8.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ebb78745273e51b9832ef90c0898501006670d6e059f2cdb0e999494eb1450c2"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3adeef150d528ded2a8e734ebf9ae2e658f4c49bf413f5f157a470e17a4a2e89"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:57a7c87927a468e5a1dc60c17caf9597161d66457a34273ab1760219953f7f4c"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:efff27bd8cbe1f9bd127e7894942ccc20c857aa8b5a0327874f30201e5ce83d0"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a783cd344113cb88c5ff7ca32f1f16532a6f2142185147822187913eb989f739"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:705227dccbe96ab02c7cb2c43e1228e2826e7ead880bb19ec94ef279e9555b5b"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:34c09b43bd538bf6c4b891ecce94b6fa4f1f10663a8d4ca589a079a5018f6ed7"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a48f4f7fea9a51098b02209d90297ac324241bf37ff6be6d2b0149ab2bd51b37"}, - {file = "yarl-1.8.2-cp37-cp37m-win32.whl", hash = "sha256:0414fd91ce0b763d4eadb4456795b307a71524dbacd015c657bb2a39db2eab89"}, - {file = "yarl-1.8.2-cp37-cp37m-win_amd64.whl", hash = "sha256:d881d152ae0007809c2c02e22aa534e702f12071e6b285e90945aa3c376463c5"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5df5e3d04101c1e5c3b1d69710b0574171cc02fddc4b23d1b2813e75f35a30b1"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7a66c506ec67eb3159eea5096acd05f5e788ceec7b96087d30c7d2865a243918"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2b4fa2606adf392051d990c3b3877d768771adc3faf2e117b9de7eb977741229"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1e21fb44e1eff06dd6ef971d4bdc611807d6bd3691223d9c01a18cec3677939e"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:93202666046d9edadfe9f2e7bf5e0782ea0d497b6d63da322e541665d65a044e"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fc77086ce244453e074e445104f0ecb27530d6fd3a46698e33f6c38951d5a0f1"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dd68a92cab699a233641f5929a40f02a4ede8c009068ca8aa1fe87b8c20ae3"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1b372aad2b5f81db66ee7ec085cbad72c4da660d994e8e590c997e9b01e44901"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e6f3515aafe0209dd17fb9bdd3b4e892963370b3de781f53e1746a521fb39fc0"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:dfef7350ee369197106805e193d420b75467b6cceac646ea5ed3049fcc950a05"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:728be34f70a190566d20aa13dc1f01dc44b6aa74580e10a3fb159691bc76909d"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:ff205b58dc2929191f68162633d5e10e8044398d7a45265f90a0f1d51f85f72c"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:baf211dcad448a87a0d9047dc8282d7de59473ade7d7fdf22150b1d23859f946"}, - {file = "yarl-1.8.2-cp38-cp38-win32.whl", hash = 
"sha256:272b4f1599f1b621bf2aabe4e5b54f39a933971f4e7c9aa311d6d7dc06965165"}, - {file = "yarl-1.8.2-cp38-cp38-win_amd64.whl", hash = "sha256:326dd1d3caf910cd26a26ccbfb84c03b608ba32499b5d6eeb09252c920bcbe4f"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f8ca8ad414c85bbc50f49c0a106f951613dfa5f948ab69c10ce9b128d368baf8"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:418857f837347e8aaef682679f41e36c24250097f9e2f315d39bae3a99a34cbf"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ae0eec05ab49e91a78700761777f284c2df119376e391db42c38ab46fd662b77"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:009a028127e0a1755c38b03244c0bea9d5565630db9c4cf9572496e947137a87"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3edac5d74bb3209c418805bda77f973117836e1de7c000e9755e572c1f7850d0"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da65c3f263729e47351261351b8679c6429151ef9649bba08ef2528ff2c423b2"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ef8fb25e52663a1c85d608f6dd72e19bd390e2ecaf29c17fb08f730226e3a08"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bcd7bb1e5c45274af9a1dd7494d3c52b2be5e6bd8d7e49c612705fd45420b12d"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44ceac0450e648de86da8e42674f9b7077d763ea80c8ceb9d1c3e41f0f0a9951"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:97209cc91189b48e7cfe777237c04af8e7cc51eb369004e061809bcdf4e55220"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:48dd18adcf98ea9cd721a25313aef49d70d413a999d7d89df44f469edfb38a06"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:e59399dda559688461762800d7fb34d9e8a6a7444fd76ec33220a926c8be1516"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d617c241c8c3ad5c4e78a08429fa49e4b04bedfc507b34b4d8dceb83b4af3588"}, - {file = "yarl-1.8.2-cp39-cp39-win32.whl", hash = "sha256:cb6d48d80a41f68de41212f3dfd1a9d9898d7841c8f7ce6696cf2fd9cb57ef83"}, - {file = "yarl-1.8.2-cp39-cp39-win_amd64.whl", hash = "sha256:6604711362f2dbf7160df21c416f81fac0de6dbcf0b5445a2ef25478ecc4c778"}, - {file = "yarl-1.8.2.tar.gz", hash = "sha256:49d43402c6e3013ad0978602bf6bf5328535c48d192304b91b97a3c6790b1562"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8c2ad583743d16ddbdf6bb14b5cd76bf43b0d0006e918809d5d4ddf7bde8dd82"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:82aa6264b36c50acfb2424ad5ca537a2060ab6de158a5bd2a72a032cc75b9eb8"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c0c77533b5ed4bcc38e943178ccae29b9bcf48ffd1063f5821192f23a1bd27b9"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee4afac41415d52d53a9833ebae7e32b344be72835bbb589018c9e938045a560"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9bf345c3a4f5ba7f766430f97f9cc1320786f19584acc7086491f45524a551ac"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2a96c19c52ff442a808c105901d0bdfd2e28575b3d5f82e2f5fd67e20dc5f4ea"}, + {file = 
"yarl-1.9.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:891c0e3ec5ec881541f6c5113d8df0315ce5440e244a716b95f2525b7b9f3608"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c3a53ba34a636a256d767c086ceb111358876e1fb6b50dfc4d3f4951d40133d5"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:566185e8ebc0898b11f8026447eacd02e46226716229cea8db37496c8cdd26e0"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b0738fb871812722a0ac2154be1f049c6223b9f6f22eec352996b69775b36d4"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:32f1d071b3f362c80f1a7d322bfd7b2d11e33d2adf395cc1dd4df36c9c243095"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:e9fdc7ac0d42bc3ea78818557fab03af6181e076a2944f43c38684b4b6bed8e3"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:56ff08ab5df8429901ebdc5d15941b59f6253393cb5da07b4170beefcf1b2528"}, + {file = "yarl-1.9.2-cp310-cp310-win32.whl", hash = "sha256:8ea48e0a2f931064469bdabca50c2f578b565fc446f302a79ba6cc0ee7f384d3"}, + {file = "yarl-1.9.2-cp310-cp310-win_amd64.whl", hash = "sha256:50f33040f3836e912ed16d212f6cc1efb3231a8a60526a407aeb66c1c1956dde"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:646d663eb2232d7909e6601f1a9107e66f9791f290a1b3dc7057818fe44fc2b6"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:aff634b15beff8902d1f918012fc2a42e0dbae6f469fce134c8a0dc51ca423bb"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a83503934c6273806aed765035716216cc9ab4e0364f7f066227e1aaea90b8d0"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b25322201585c69abc7b0e89e72790469f7dad90d26754717f3310bfe30331c2"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:22a94666751778629f1ec4280b08eb11815783c63f52092a5953faf73be24191"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ec53a0ea2a80c5cd1ab397925f94bff59222aa3cf9c6da938ce05c9ec20428d"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:159d81f22d7a43e6eabc36d7194cb53f2f15f498dbbfa8edc8a3239350f59fe7"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:832b7e711027c114d79dffb92576acd1bd2decc467dec60e1cac96912602d0e6"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:95d2ecefbcf4e744ea952d073c6922e72ee650ffc79028eb1e320e732898d7e8"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:d4e2c6d555e77b37288eaf45b8f60f0737c9efa3452c6c44626a5455aeb250b9"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:783185c75c12a017cc345015ea359cc801c3b29a2966c2655cd12b233bf5a2be"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:b8cc1863402472f16c600e3e93d542b7e7542a540f95c30afd472e8e549fc3f7"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:822b30a0f22e588b32d3120f6d41e4ed021806418b4c9f0bc3048b8c8cb3f92a"}, + {file = "yarl-1.9.2-cp311-cp311-win32.whl", hash = "sha256:a60347f234c2212a9f0361955007fcf4033a75bf600a33c88a0a8e91af77c0e8"}, + {file = "yarl-1.9.2-cp311-cp311-win_amd64.whl", hash = 
"sha256:be6b3fdec5c62f2a67cb3f8c6dbf56bbf3f61c0f046f84645cd1ca73532ea051"}, + {file = "yarl-1.9.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:38a3928ae37558bc1b559f67410df446d1fbfa87318b124bf5032c31e3447b74"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac9bb4c5ce3975aeac288cfcb5061ce60e0d14d92209e780c93954076c7c4367"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3da8a678ca8b96c8606bbb8bfacd99a12ad5dd288bc6f7979baddd62f71c63ef"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13414591ff516e04fcdee8dc051c13fd3db13b673c7a4cb1350e6b2ad9639ad3"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf74d08542c3a9ea97bb8f343d4fcbd4d8f91bba5ec9d5d7f792dbe727f88938"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6e7221580dc1db478464cfeef9b03b95c5852cc22894e418562997df0d074ccc"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:494053246b119b041960ddcd20fd76224149cfea8ed8777b687358727911dd33"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:52a25809fcbecfc63ac9ba0c0fb586f90837f5425edfd1ec9f3372b119585e45"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:e65610c5792870d45d7b68c677681376fcf9cc1c289f23e8e8b39c1485384185"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:1b1bba902cba32cdec51fca038fd53f8beee88b77efc373968d1ed021024cc04"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:662e6016409828ee910f5d9602a2729a8a57d74b163c89a837de3fea050c7582"}, + {file = "yarl-1.9.2-cp37-cp37m-win32.whl", hash = "sha256:f364d3480bffd3aa566e886587eaca7c8c04d74f6e8933f3f2c996b7f09bee1b"}, + {file = "yarl-1.9.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6a5883464143ab3ae9ba68daae8e7c5c95b969462bbe42e2464d60e7e2698368"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5610f80cf43b6202e2c33ba3ec2ee0a2884f8f423c8f4f62906731d876ef4fac"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b9a4e67ad7b646cd6f0938c7ebfd60e481b7410f574c560e455e938d2da8e0f4"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:83fcc480d7549ccebe9415d96d9263e2d4226798c37ebd18c930fce43dfb9574"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5fcd436ea16fee7d4207c045b1e340020e58a2597301cfbcfdbe5abd2356c2fb"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84e0b1599334b1e1478db01b756e55937d4614f8654311eb26012091be109d59"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3458a24e4ea3fd8930e934c129b676c27452e4ebda80fbe47b56d8c6c7a63a9e"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:838162460b3a08987546e881a2bfa573960bb559dfa739e7800ceeec92e64417"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4e2d08f07a3d7d3e12549052eb5ad3eab1c349c53ac51c209a0e5991bbada78"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:de119f56f3c5f0e2fb4dee508531a32b069a5f2c6e827b272d1e0ff5ac040333"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_i686.whl", hash = 
"sha256:149ddea5abf329752ea5051b61bd6c1d979e13fbf122d3a1f9f0c8be6cb6f63c"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:674ca19cbee4a82c9f54e0d1eee28116e63bc6fd1e96c43031d11cbab8b2afd5"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:9b3152f2f5677b997ae6c804b73da05a39daa6a9e85a512e0e6823d81cdad7cc"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5415d5a4b080dc9612b1b63cba008db84e908b95848369aa1da3686ae27b6d2b"}, + {file = "yarl-1.9.2-cp38-cp38-win32.whl", hash = "sha256:f7a3d8146575e08c29ed1cd287068e6d02f1c7bdff8970db96683b9591b86ee7"}, + {file = "yarl-1.9.2-cp38-cp38-win_amd64.whl", hash = "sha256:63c48f6cef34e6319a74c727376e95626f84ea091f92c0250a98e53e62c77c72"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:75df5ef94c3fdc393c6b19d80e6ef1ecc9ae2f4263c09cacb178d871c02a5ba9"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c027a6e96ef77d401d8d5a5c8d6bc478e8042f1e448272e8d9752cb0aff8b5c8"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f3b078dbe227f79be488ffcfc7a9edb3409d018e0952cf13f15fd6512847f3f7"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59723a029760079b7d991a401386390c4be5bfec1e7dd83e25a6a0881859e716"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b03917871bf859a81ccb180c9a2e6c1e04d2f6a51d953e6a5cdd70c93d4e5a2a"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c1012fa63eb6c032f3ce5d2171c267992ae0c00b9e164efe4d73db818465fac3"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a74dcbfe780e62f4b5a062714576f16c2f3493a0394e555ab141bf0d746bb955"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8c56986609b057b4839968ba901944af91b8e92f1725d1a2d77cbac6972b9ed1"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2c315df3293cd521033533d242d15eab26583360b58f7ee5d9565f15fee1bef4"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:b7232f8dfbd225d57340e441d8caf8652a6acd06b389ea2d3222b8bc89cbfca6"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:53338749febd28935d55b41bf0bcc79d634881195a39f6b2f767870b72514caf"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:066c163aec9d3d073dc9ffe5dd3ad05069bcb03fcaab8d221290ba99f9f69ee3"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8288d7cd28f8119b07dd49b7230d6b4562f9b61ee9a4ab02221060d21136be80"}, + {file = "yarl-1.9.2-cp39-cp39-win32.whl", hash = "sha256:b124e2a6d223b65ba8768d5706d103280914d61f5cae3afbc50fc3dfcc016623"}, + {file = "yarl-1.9.2-cp39-cp39-win_amd64.whl", hash = "sha256:61016e7d582bc46a5378ffdd02cd0314fb8ba52f40f9cf4d9a5e7dbef88dee18"}, + {file = "yarl-1.9.2.tar.gz", hash = "sha256:04ab9d4b9f587c06d801c2abfe9317b77cdf996c65a90d5e84ecc45010823571"}, ] [package.dependencies] @@ -3445,21 +3478,23 @@ multidict = ">=4.0" [[package]] name = "zipp" -version = "3.11.0" +version = "3.15.0" description = "Backport of pathlib-compatible object wrapper for zip files" -category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "zipp-3.11.0-py3-none-any.whl", hash = "sha256:83a28fcb75844b5c0cdaf5aa4003c2d728c77e05f5aeabe8e95e56727005fbaa"}, - {file = 
"zipp-3.11.0.tar.gz", hash = "sha256:a7a22e05929290a67401440b39690ae6563279bced5f314609d9d03798f56766"}, + {file = "zipp-3.15.0-py3-none-any.whl", hash = "sha256:48904fc76a60e542af151aded95726c1a5c34ed43ab4134b597665c86d7ad556"}, + {file = "zipp-3.15.0.tar.gz", hash = "sha256:112929ad649da941c23de50f356a2b5570c954b65150642bccdd66bf194d224b"}, ] [package.extras] -docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"] -testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"] +docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"] + +[extras] +docs = [] [metadata] lock-version = "2.0" -python-versions = ">=3.9.1,<3.10" -content-hash = "02daca205796a0f29a0d9f50707544e6804f32027eba493cd2aa7f175a00dcea" +python-versions = "~3.9" +content-hash = "bc3e256094db6e33894840bb6a5adda4473d3736b852433ad8d5bd478c7e0c1c" diff --git a/pyproject.toml b/pyproject.toml index 06a74d9126..ad93b70c0f 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.15.11" # OpenPype +version = "3.17.2" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" @@ -32,23 +32,23 @@ python = ">=3.9.1,<3.10" aiohttp = "^3.7" aiohttp_json_rpc = "*" # TVPaint server acre = { git = "https://github.com/pypeclub/acre.git" } -opentimelineio = "^0.14" appdirs = { git = "https://github.com/ActiveState/appdirs.git", branch = "master" } blessed = "^1.17" # openpype terminal formatting coolname = "*" clique = "1.6.*" -Click = "^7" +Click = "7.1.2" dnspython = "^2.1.0" ftrack-python-api = "^2.3.3" +arrow = "^0.17" shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"} -gazu = "^0.8.34" +gazu = "^0.9.3" google-api-python-client = "^1.12.8" # sync server google support (should be separate?) jsonschema = "^2.6.0" keyring = "^22.0.1" log4mongo = "^1.7" pathlib2= "^2.3.5" # deadline submit publish job only (single place, maybe not needed?) 
Pillow = "^9.0" # used in TVPaint and for slates -pyblish-base = "^1.8.8" +pyblish-base = "^1.8.11" pynput = "^1.7.2" # idle manager in tray pymongo = "^3.11.2" "Qt.py" = "^1.3.3" @@ -56,6 +56,7 @@ QtPy = "^2.3.0" qtawesome = "0.7.3" speedcopy = "^2.1" six = "^1.15" +urllib3 = "1.26.16" semver = "^2.13.0" # for version resolution wsrpc_aiohttp = "^3.1.1" # websocket server pywin32 = { version = "301", markers = "sys_platform == 'win32'" } @@ -70,7 +71,8 @@ requests = "^2.25.1" pysftp = "^0.2.9" dropbox = "^11.20.0" aiohttp-middlewares = "^2.0.0" -opencolorio = "^2.2.0" +Unidecode = "1.2.0" +cryptography = "39.0.0" [tool.poetry.dev-dependencies] flake8 = "^6.0" @@ -81,14 +83,19 @@ GitPython = "^3.1.17" jedi = "^0.13" Jinja2 = "^3" markupsafe = "2.0.1" -pycodestyle = "^2.5.0" -pydocstyle = "^3.0.0" +pycodestyle = "*" +pydocstyle = "*" +linkify-it-py = "^2.0.0" +myst-parser = "^0.18.1" pylint = "^2.4.4" pytest = "^6.1" pytest-cov = "*" pytest-print = "*" -Sphinx = "^6.1" -sphinx-rtd-theme = "*" +Sphinx = "^5.3" +m2r2 = "^0.3.3.post2" +sphinx-autoapi = "^2.0.1" +sphinxcontrib-napoleon = "^0.7" +revitron-sphinx-theme = { git = "https://github.com/revitron/revitron-sphinx-theme.git", branch = "master" } recommonmark = "*" wheel = "*" enlighten = "*" # cool terminal progress bars @@ -118,12 +125,18 @@ version = "5.15.2" [openpype.qtbinding.darwin] package = "PySide6" -version = "6.4.1" +version = "6.4.3" [openpype.qtbinding.linux] package = "PySide2" version = "5.15.2" +# Python dependencies that will be available only in runtime of +# OpenPype process - do not interfere with DCCs dependencies +[openpype.runtime-deps] +opencolorio = "2.2.1" +opentimelineio = "0.14.1" + # TODO: we will need to handle different linux flavours here and # also different macos versions too. [openpype.thirdparty.ffmpeg.windows] @@ -165,3 +178,6 @@ ignore = ["website", "docs", ".git"] reportMissingImports = true reportMissingTypeStubs = false + +[tool.poetry.extras] +docs = ["Sphinx", "furo", "sphinxcontrib-napoleon"] diff --git a/server_addon/README.md b/server_addon/README.md new file mode 100644 index 0000000000..c6d467adaa --- /dev/null +++ b/server_addon/README.md @@ -0,0 +1,34 @@ +# Addons for AYON server +Preparation of AYON addons based on OpenPype codebase. The output is a bunch of zip files in `./packages` directory that can be uploaded to AYON server. One of the packages is `openpype` which is OpenPype code converted to AYON addon. The addon is must have requirement to be able to use `ayon-launcher`. The versioning of `openpype` addon is following versioning of OpenPype. The other addons contain only settings models. + +## Intro +OpenPype is transitioning to AYON, a dedicated server with its own database, moving away from MongoDB. During this transition period, OpenPype will remain compatible with both MongoDB and AYON. However, we will gradually update the codebase to align with AYON's data structure and separate individual components into addons. + +Currently, OpenPype has an AYON mode, which means it utilizes the AYON server instead of MongoDB through conversion utilities. Initially, we added the AYON executable alongside the OpenPype executables to enable AYON mode. While this approach worked, updating to new code versions would require a complete reinstallation. To address this, we have decided to create a new repository specifically for the base desktop application logic, which we currently refer to as the AYON Launcher. 
This Launcher will replace the executables generated by the OpenPype build and convert the OpenPype code into a server addon, resulting in smaller updates. + +Since the implementation of the AYON Launcher is not yet complete, we will maintain both methods of starting AYON mode for now. Once the AYON Launcher is finished, we will remove the AYON executables from the OpenPype codebase entirely. + +During this transitional period, the AYON Launcher addon will be a requirement as the entry point for using the AYON Launcher. + +## How to start +The `create_ayon_addons.py` Python script contains the logic for creating server addons from the OpenPype codebase. Just run it: +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py +``` + +It will create the addon zip files in the `./packages` directory for the AYON server. You can then upload the zip files to the AYON server. Restart the server to update the addon information, add the addon versions to a server bundle, and set the bundle for production or staging use. + +Once the addon is on the server and enabled, you can just run the AYON launcher. Content will be downloaded and used automatically. + +### Additional arguments +Additional arguments are useful for development purposes. + +To skip zip creation and keep only the server-ready folder structure, pass the `--skip-zip` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --skip-zip +``` + +To create the zips and also keep the folder structure, pass the `--keep-sources` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --keep-sources +``` diff --git a/server_addon/aftereffects/LICENSE b/server_addon/aftereffects/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/aftereffects/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below).
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/aftereffects/README.md b/server_addon/aftereffects/README.md new file mode 100644 index 0000000000..b2f34f3407 --- /dev/null +++ b/server_addon/aftereffects/README.md @@ -0,0 +1,4 @@ +AfterEffects Addon +=============== + +Integration with Adobe AfterEffects. 
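For orientation: the `create_ayon_addons.py` script that `server_addon/README.md` above refers to is not part of this diff. Below is a minimal illustrative sketch of the kind of packaging step it performs, assuming only the `server_addon/<addon>/server` layout and the `version.py` convention visible in the added addon files; the output zip naming and the internal layout the AYON server expects are assumptions here, not the script's actual logic.

```python
import zipfile
from pathlib import Path

# Assumed layout (matching the addons added in this diff): each addon
# lives in server_addon/<name>/ with a "server" folder containing a
# version.py that declares __version__.
root = Path("server_addon")
packages_dir = root / "packages"
packages_dir.mkdir(exist_ok=True)

for addon_dir in sorted(root.iterdir()):
    server_dir = addon_dir / "server"
    if not server_dir.is_dir():
        continue

    # Read __version__ from server/version.py (hypothetical approach;
    # the real script may resolve addon versions differently).
    namespace = {}
    exec((server_dir / "version.py").read_text(), namespace)
    version = namespace["__version__"]

    # Zip the server folder so it unpacks as <name>/server/... ; the
    # "<name>-<version>.zip" naming is an assumption for illustration.
    zip_path = packages_dir / f"{addon_dir.name}-{version}.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(server_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(root))
```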
diff --git a/server_addon/aftereffects/server/__init__.py b/server_addon/aftereffects/server/__init__.py new file mode 100644 index 0000000000..e14e76e9db --- /dev/null +++ b/server_addon/aftereffects/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import AfterEffectsSettings, DEFAULT_AFTEREFFECTS_SETTING +from .version import __version__ + + +class AfterEffects(BaseServerAddon): + name = "aftereffects" + title = "AfterEffects" + version = __version__ + + settings_model = AfterEffectsSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_AFTEREFFECTS_SETTING) diff --git a/server_addon/aftereffects/server/settings/__init__.py b/server_addon/aftereffects/server/settings/__init__.py new file mode 100644 index 0000000000..4e96804b4a --- /dev/null +++ b/server_addon/aftereffects/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + AfterEffectsSettings, + DEFAULT_AFTEREFFECTS_SETTING, +) + + +__all__ = ( + "AfterEffectsSettings", + "DEFAULT_AFTEREFFECTS_SETTING", +) diff --git a/server_addon/aftereffects/server/settings/creator_plugins.py b/server_addon/aftereffects/server/settings/creator_plugins.py new file mode 100644 index 0000000000..9cb03b0b26 --- /dev/null +++ b/server_addon/aftereffects/server/settings/creator_plugins.py @@ -0,0 +1,18 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateRenderPlugin(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AfterEffectsCreatorPlugins(BaseSettingsModel): + RenderCreator: CreateRenderPlugin = Field( + title="Create Render", + default_factory=CreateRenderPlugin, + ) diff --git a/server_addon/aftereffects/server/settings/imageio.py b/server_addon/aftereffects/server/settings/imageio.py new file mode 100644 index 0000000000..55160ffd11 --- /dev/null +++ b/server_addon/aftereffects/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class AfterEffectsImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/aftereffects/server/settings/main.py b/server_addon/aftereffects/server/settings/main.py new file mode 100644 index 
0000000000..4edc46d259 --- /dev/null +++ b/server_addon/aftereffects/server/settings/main.py @@ -0,0 +1,56 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import AfterEffectsImageIOModel +from .creator_plugins import AfterEffectsCreatorPlugins +from .publish_plugins import ( + AfterEffectsPublishPlugins, + AE_PUBLISH_PLUGINS_DEFAULTS, +) +from .workfile_builder import WorkfileBuilderPlugin +from .templated_workfile_build import TemplatedWorkfileBuildModel + + +class AfterEffectsSettings(BaseSettingsModel): + """AfterEffects Project Settings.""" + + imageio: AfterEffectsImageIOModel = Field( + default_factory=AfterEffectsImageIOModel, + title="OCIO config" + ) + create: AfterEffectsCreatorPlugins = Field( + default_factory=AfterEffectsCreatorPlugins, + title="Creator plugins" + ) + publish: AfterEffectsPublishPlugins = Field( + default_factory=AfterEffectsPublishPlugins, + title="Publish plugins" + ) + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + default_factory=TemplatedWorkfileBuildModel, + title="Templated Workfile Build Settings" + ) + + +DEFAULT_AFTEREFFECTS_SETTING = { + "create": { + "RenderCreator": { + "mark_for_review": True, + "default_variants": [ + "Main" + ] + } + }, + "publish": AE_PUBLISH_PLUGINS_DEFAULTS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "templated_workfile_build": { + "profiles": [] + }, +} diff --git a/server_addon/aftereffects/server/settings/publish_plugins.py b/server_addon/aftereffects/server/settings/publish_plugins.py new file mode 100644 index 0000000000..78445d3223 --- /dev/null +++ b/server_addon/aftereffects/server/settings/publish_plugins.py @@ -0,0 +1,68 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectReviewPluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + + +class ValidateSceneSettingsModel(BaseSettingsModel): + """Validate scene settings such as resolution and timeline""" + + # _isGroup = True + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks", + ) + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks", + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class AfterEffectsPublishPlugins(BaseSettingsModel): + CollectReview: CollectReviewPluginModel = Field( + default_factory=CollectReviewPluginModel, + title="Collect Review", + ) + ValidateSceneSettings: ValidateSceneSettingsModel = Field( + default_factory=ValidateSceneSettingsModel, + title="Validate Scene Settings", + ) + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Containers", + ) + + +AE_PUBLISH_PLUGINS_DEFAULTS = { + "CollectReview": { + "enabled": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "skip_resolution_check": [ + ".*" + ], + "skip_timelines_check": [ + ".*" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True, + } +} diff
--git a/server_addon/aftereffects/server/settings/templated_workfile_build.py b/server_addon/aftereffects/server/settings/templated_workfile_build.py new file mode 100644 index 0000000000..e0245c8d06 --- /dev/null +++ b/server_addon/aftereffects/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/settings/workfile_builder.py b/server_addon/aftereffects/server/settings/workfile_builder.py new file mode 100644 index 0000000000..d45d3f7f24 --- /dev/null +++ b/server_addon/aftereffects/server/settings/workfile_builder.py @@ -0,0 +1,25 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/version.py b/server_addon/aftereffects/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/aftereffects/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/applications/server/__init__.py b/server_addon/applications/server/__init__.py new file mode 100644 index 0000000000..e782e8a591 --- /dev/null +++ b/server_addon/applications/server/__init__.py @@ -0,0 +1,233 @@ +import os +import json +import copy + +from ayon_server.addons import BaseServerAddon, AddonLibrary +from ayon_server.lib.postgres import Postgres + +from .version import __version__ +from .settings import ApplicationsAddonSettings, DEFAULT_VALUES + +try: + import semver +except ImportError: + semver = None + + +def sort_versions(addon_versions, reverse=False): + if semver is None: + for addon_version in sorted(addon_versions, reverse=reverse): + yield addon_version + return + + version_objs = [] + invalid_versions = [] + for addon_version in addon_versions: + try: + version_objs.append( + (addon_version, semver.VersionInfo.parse(addon_version)) + ) + except ValueError: + invalid_versions.append(addon_version) + + valid_versions = [ + addon_version + for addon_version, _ in sorted(version_objs, key=lambda x: x[1]) + ] + sorted_versions = list(sorted(invalid_versions)) + valid_versions + if reverse: + sorted_versions = reversed(sorted_versions) + for addon_version in sorted_versions: + yield addon_version + + +def merge_groups(output, new_groups): + groups_by_name = { + 
o_group["name"]: o_group + for o_group in output + } + extend_groups = [] + for new_group in new_groups: + group_name = new_group["name"] + if group_name not in groups_by_name: + extend_groups.append(new_group) + continue + existing_group = groups_by_name[group_name] + existing_variants = existing_group["variants"] + existing_variants_by_name = { + variant["name"]: variant + for variant in existing_variants + } + for new_variant in new_group["variants"]: + if new_variant["name"] not in existing_variants_by_name: + existing_variants.append(new_variant) + + output.extend(extend_groups) + + +def get_enum_items_from_groups(groups): + label_by_name = {} + for group in groups: + group_name = group["name"] + group_label = group["label"] or group_name + for variant in group["variants"]: + variant_name = variant["name"] + if not variant_name: + continue + variant_label = variant["label"] or variant_name + full_name = f"{group_name}/{variant_name}" + full_label = f"{group_label} {variant_label}" + label_by_name[full_name] = full_label + + return [ + {"value": full_name, "label": label_by_name[full_name]} + for full_name in sorted(label_by_name) + ] + + +class ApplicationsAddon(BaseServerAddon): + name = "applications" + title = "Applications" + version = __version__ + settings_model = ApplicationsAddonSettings + + async def get_default_settings(self): + applications_path = os.path.join(self.addon_dir, "applications.json") + tools_path = os.path.join(self.addon_dir, "tools.json") + default_values = copy.deepcopy(DEFAULT_VALUES) + with open(applications_path, "r") as stream: + default_values.update(json.load(stream)) + + with open(tools_path, "r") as stream: + default_values.update(json.load(stream)) + + return self.get_settings_model()(**default_values) + + async def pre_setup(self): + """Make sure older version of addon use the new way of attributes.""" + + instance = AddonLibrary.getinstance() + app_defs = instance.data.get(self.name) + old_addon = app_defs.versions.get("0.1.0") + if old_addon is not None: + # Override 'create_applications_attribute' for older versions + # - avoid infinite server restart loop + old_addon.create_applications_attribute = ( + self.create_applications_attribute + ) + + async def setup(self): + need_restart = await self.create_applications_attribute() + if need_restart: + self.request_server_restart() + + async def create_applications_attribute(self) -> bool: + """Make sure there are required attributes which ftrack addon needs. + + Returns: + bool: 'True' if an attribute was created or updated. 
+ """ + + instance = AddonLibrary.getinstance() + app_defs = instance.data.get(self.name) + all_applications = [] + all_tools = [] + for addon_version in sort_versions( + app_defs.versions.keys(), reverse=True + ): + addon = app_defs.versions[addon_version] + for variant in ("production", "staging"): + settings_model = await addon.get_studio_settings(variant) + studio_settings = settings_model.dict() + application_settings = studio_settings["applications"] + app_groups = application_settings.pop("additional_apps") + for group_name, value in application_settings.items(): + value["name"] = group_name + app_groups.append(value) + merge_groups(all_applications, app_groups) + merge_groups(all_tools, studio_settings["tool_groups"]) + + query = "SELECT name, position, scope, data from public.attributes" + + apps_attrib_name = "applications" + tools_attrib_name = "tools" + + apps_enum = get_enum_items_from_groups(all_applications) + tools_enum = get_enum_items_from_groups(all_tools) + apps_attribute_data = { + "type": "list_of_strings", + "title": "Applications", + "enum": apps_enum + } + tools_attribute_data = { + "type": "list_of_strings", + "title": "Tools", + "enum": tools_enum + } + apps_scope = ["project"] + tools_scope = ["project", "folder", "task"] + + apps_match_position = None + apps_matches = False + tools_match_position = None + tools_matches = False + position = 1 + async for row in Postgres.iterate(query): + position += 1 + if row["name"] == apps_attrib_name: + # Check if scope is matching ftrack addon requirements + if ( + set(row["scope"]) == set(apps_scope) + and row["data"].get("enum") == apps_enum + ): + apps_matches = True + apps_match_position = row["position"] + + elif row["name"] == tools_attrib_name: + if ( + set(row["scope"]) == set(tools_scope) + and row["data"].get("enum") == tools_enum + ): + tools_matches = True + tools_match_position = row["position"] + + if apps_matches and tools_matches: + return False + + postgre_query = "\n".join(( + "INSERT INTO public.attributes", + " (name, position, scope, data)", + "VALUES", + " ($1, $2, $3, $4)", + "ON CONFLICT (name)", + "DO UPDATE SET", + " scope = $3,", + " data = $4", + )) + if not apps_matches: + # Reuse position from found attribute + if apps_match_position is None: + apps_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + apps_attrib_name, + apps_match_position, + apps_scope, + apps_attribute_data, + ) + + if not tools_matches: + if tools_match_position is None: + tools_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + tools_attrib_name, + tools_match_position, + tools_scope, + tools_attribute_data, + ) + return True diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json new file mode 100644 index 0000000000..171bd709a6 --- /dev/null +++ b/server_addon/applications/server/applications.json @@ -0,0 +1,1137 @@ +{ + "applications": { + "maya": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", + "variants": [ + { + "name": "2024", + "label": "2024", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/maya" + ] + }, + "arguments": { + "windows": [], + 
"darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2024\"\n}", + "use_python_2": false + }, + { + "name": "2023", + "label": "2023", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2023/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2022\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2022/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2022\"\n}", + "use_python_2": true + } + ] + }, + "maya": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", + "variants": [ + { + "name": "2024", + "label": "2024", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\mayapy.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/mayapy" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2024\"\n}", + "use_python_2": false + }, + { + "name": "2023", + "label": "2023", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\mayapy.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2023/bin/mayapy" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false + } + ] + }, + "adsk_3dsmax": { + "enabled": true, + "label": "3ds Max", + "icon": "{}/app_icons/3dsmax.png", + "host_name": "max", + "environment": "{\n \"ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR\": \"{OPENPYPE_ROOT}/openpype/hosts/max/startup\"\n}", + "variants": [ + { + "name": "2023", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\3ds Max 2023\\3dsmax.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"3DSMAX_VERSION\": \"2023\"\n}" + } + ] + }, + "flame": { + "enabled": true, + "label": "Flame", + "icon": "{}/app_icons/flame.png", + "host_name": "flame", + "environment": "{\n \"FLAME_SCRIPT_DIRS\": {\n \"windows\": \"\",\n \"darwin\": \"\",\n \"linux\": \"\"\n },\n \"FLAME_WIRETAP_HOSTNAME\": \"\",\n \"FLAME_WIRETAP_VOLUME\": \"stonefs\",\n \"FLAME_WIRETAP_GROUP\": \"staff\"\n}", + "variants": [ + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021\"\n}", + "use_python_2": true + }, + { + "name": "2021_1", + "label": "2021.1", + "executables": { + "windows": [], + "darwin": [ + 
"/opt/Autodesk/flame_2021.1/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021.1/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021.1/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021.1/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021.1\"\n}", + "use_python_2": true + } + ] + }, + "nuke": { + "enabled": true, + "label": "Nuke", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Nuke14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Nuke13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "label": "13.0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Nuke13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "nukeassist": { + "enabled": true, + "label": "Nuke Assist", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeAssist14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeAssist13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "label": "13.0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeAssist13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}" + } + ] + }, + "nukex": { + "enabled": true, + "label": "Nuke X", + "icon": "{}/app_icons/nukex.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + 
"variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeX14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeX13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "label": "13.0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeX13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}" + } + ] + }, + "nukestudio": { + "enabled": true, + "label": "Nuke Studio", + "icon": "{}/app_icons/nukestudio.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeStudio14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeStudio13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "label": "13.0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeStudio13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}" + } + ] + }, + "hiero": { + "enabled": true, + "label": "Hiero", + "icon": "{}/app_icons/hiero.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Hiero14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Hiero13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + 
}, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "label": "13.0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Hiero13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}" + } + ] + }, + "fusion": { + "enabled": true, + "label": "Fusion", + "icon": "{}/app_icons/fusion.png", + "host_name": "fusion", + "environment": "{\n \"FUSION_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 17\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "16", + "label": "16", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 16\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "9", + "label": "9", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 9\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "resolve": { + "enabled": true, + "label": "Resolve", + "icon": "{}/app_icons/resolve.png", + "host_name": "resolve", + "environment": "{\n \"RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR\": [],\n \"RESOLVE_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "stable", + "label": "stable", + "executables": { + "windows": [ + "C:/Program Files/Blackmagic Design/DaVinci Resolve/Resolve.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "houdini": { + "enabled": true, + "label": "Houdini", + "icon": "{}/app_icons/houdini.png", + "host_name": "houdini", + "environment": "{}", + "variants": [ + { + "name": "18-5", + "label": "18.5", + "executables": { + "windows": [ + "C:\\Program Files\\Side Effects Software\\Houdini 18.5.499\\bin\\houdini.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "18", + "label": "18", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + } + ] + }, + "blender": { + "enabled": true, + "label": "Blender", + "icon": "{}/app_icons/blender.png", + "host_name": "blender", + "environment": "{}", + "variants": [ + { + "name": "2-83", + "label": "2.83", + "executables": { + "windows": 
[ + "C:\\Program Files\\Blender Foundation\\Blender 2.83\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-90", + "label": "2.90", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.90\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-91", + "label": "2.91", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.91\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + } + ] + }, + "harmony": { + "enabled": true, + "label": "Harmony", + "icon": "{}/app_icons/harmony.png", + "host_name": "harmony", + "environment": "{\n \"AVALON_HARMONY_WORKFILES_ON_LAUNCH\": \"1\"\n}", + "variants": [ + { + "name": "21", + "label": "21", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 21 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "20", + "label": "20", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 20 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 17 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 17 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "tvpaint": { + "enabled": true, + "label": "TVPaint", + "icon": "{}/app_icons/tvpaint.png", + "host_name": "tvpaint", + "environment": "{}", + "variants": [ + { + "name": "animation_11-64bits", + "label": "11 (64bits)", + "executables": { + "windows": [ + "C:\\Program Files\\TVPaint Developpement\\TVPaint Animation 11 (64bits)\\TVPaint Animation 11 (64bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "animation_11-32bits", + "label": "11 (32bits)", + "executables": { + "windows": [ + "C:\\Program Files (x86)\\TVPaint Developpement\\TVPaint Animation 11 (32bits)\\TVPaint Animation 11 (32bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "photoshop": { + "enabled": true, + "label": "Photoshop", + "icon": "{}/app_icons/photoshop.png", + "host_name": "photoshop", + "environment": "{\n 
\"AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2020\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2021\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2022\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "aftereffects": { + "enabled": true, + "label": "AfterEffects", + "icon": "{}/app_icons/aftereffects.png", + "host_name": "aftereffects", + "environment": "{\n \"AVALON_AFTEREFFECTS_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2020\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2021\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2022\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MULTIPROCESS\": \"No\"\n}" + } + ] + }, + "celaction": { + "enabled": true, + "label": "CelAction 2D", + "icon": "app_icons/celaction.png", + "host_name": "celaction", + "environment": "{\n \"CELACTION_TEMPLATE\": \"{OPENPYPE_REPOS_ROOT}/openpype/hosts/celaction/celaction_template_scene.scn\"\n}", + "variants": [ + { + "name": "local", + "label": "local", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "unreal": { + "enabled": true, + "label": "Unreal Editor", + "icon": "{}/app_icons/ue4.png", + "host_name": "unreal", + "environment": "{}", + "variants": [ + { + "name": "4-26", + "label": "4.26", + "executables": {}, + "arguments": {}, + "environment": "{}" + } + ] + }, + "djvview": { + "enabled": true, + "label": "DJV View", + "icon": "{}/app_icons/djvView.png", + "host_name": "", + "environment": "{}", + "variants": [ + { + "name": "1-1", + "label": "1.1", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "additional_apps": [] + } +} diff --git a/server_addon/applications/server/settings.py b/server_addon/applications/server/settings.py new file mode 100644 index 0000000000..be9a2ea07e --- /dev/null +++ b/server_addon/applications/server/settings.py @@ -0,0 +1,199 @@ +import json +from 
pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from ayon_server.exceptions import BadRequestException + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError as exc: + print(exc) + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class MultiplatformStrList(BaseSettingsModel): + windows: list[str] = Field(default_factory=list, title="Windows") + linux: list[str] = Field(default_factory=list, title="Linux") + darwin: list[str] = Field(default_factory=list, title="MacOS") + + +class AppVariant(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + executables: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Executables" + ) + arguments: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Arguments" + ) + environment: str = Field("{}", title="Environment", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class AppVariantWithPython(AppVariant): + use_python_2: bool = Field(False, title="Use Python 2") + + +class AppGroup(BaseSettingsModel): + enabled: bool = Field(True) + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariant] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class AppGroupWithPython(AppGroup): + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + +class AdditionalAppGroup(BaseSettingsModel): + enabled: bool = Field(True) + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ToolVariantModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_names: list[str] = Field(default_factory=list, title="Hosts") + # TODO use applications enum if possible + app_variants: list[str] = Field(default_factory=list, title="Applications") + environment: str = Field("{}", title="Environments", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ToolGroupModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + environment: str = Field("{}", title="Environments", widget="textarea") + variants: list[ToolVariantModel] = Field(default_factory=list) + + @validator("environment") + def validate_json(cls, value): + 
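To make the contract of `validate_json_dict` above explicit: blank input is normalized to `"{}"`, a string holding a JSON object is returned unchanged (still a string, not a dict), and any other JSON raises `BadRequestException`. A small usage sketch, assuming the function from this `settings.py` is in scope:

```python
# Assumes validate_json_dict() from this settings.py is importable.
assert validate_json_dict("") == "{}"                    # blank -> default
assert validate_json_dict('{"A": "1"}') == '{"A": "1"}'  # kept as a string
try:
    validate_json_dict('["not", "a", "dict"]')           # valid JSON, wrong type
except Exception as exc:  # BadRequestException on the server
    print(f"rejected: {exc}")
```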
return validate_json_dict(value) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsSettings(BaseSettingsModel): + """Applications settings""" + + maya: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Maya") + adsk_3dsmax: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk 3ds Max") + flame: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Flame") + nuke: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke") + nukeassist: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Assist") + nukex: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke X") + nukestudio: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Studio") + hiero: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Hiero") + fusion: AppGroup = Field( + default_factory=AppGroupWithPython, title="Fusion") + resolve: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Resolve") + houdini: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Houdini") + blender: AppGroup = Field( + default_factory=AppGroupWithPython, title="Blender") + harmony: AppGroup = Field( + default_factory=AppGroupWithPython, title="Harmony") + tvpaint: AppGroup = Field( + default_factory=AppGroupWithPython, title="TVPaint") + photoshop: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe Photoshop") + aftereffects: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe After Effects") + celaction: AppGroup = Field( + default_factory=AppGroupWithPython, title="Celaction 2D") + unreal: AppGroup = Field( + default_factory=AppGroupWithPython, title="Unreal Editor") + additional_apps: list[AdditionalAppGroup] = Field( + default_factory=list, title="Additional Applications") + + @validator("additional_apps") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsAddonSettings(BaseSettingsModel): + applications: ApplicationsSettings = Field( + default_factory=ApplicationsSettings, + title="Applications", + scope=["studio"] + ) + tool_groups: list[ToolGroupModel] = Field( + default_factory=list, + scope=["studio"] + ) + only_available: bool = Field( + True, title="Show only available applications") + + @validator("tool_groups") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "only_available": False +} diff --git a/server_addon/applications/server/tools.json b/server_addon/applications/server/tools.json new file mode 100644 index 0000000000..54bee11cf7 --- /dev/null +++ b/server_addon/applications/server/tools.json @@ -0,0 +1,55 @@ +{ + "tool_groups": [ + { + "environment": "{\n \"MTOA\": \"{STUDIO_SOFTWARE}/arnold/mtoa_{MAYA_VERSION}_{MTOA_VERSION}\",\n \"MAYA_RENDER_DESC_PATH\": \"{MTOA}\",\n \"MAYA_MODULE_PATH\": \"{MTOA}\",\n \"ARNOLD_PLUGIN_PATH\": \"{MTOA}/shaders\",\n \"MTOA_EXTENSIONS_PATH\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"MTOA_EXTENSIONS\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"DYLD_LIBRARY_PATH\": {\n \"darwin\": \"{MTOA}/bin\"\n },\n \"PATH\": {\n \"windows\": \"{PATH};{MTOA}/bin\"\n }\n}", + "name": 
"mtoa", + "label": "Autodesk Arnold", + "variants": [ + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.2\"\n}", + "name": "3-2", + "label": "3.2" + }, + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.1\"\n}", + "name": "3-1", + "label": "3.1" + } + ] + }, + { + "environment": "{}", + "name": "vray", + "label": "Chaos Group Vray", + "variants": [] + }, + { + "environment": "{}", + "name": "yeti", + "label": "Peregrine Labs Yeti", + "variants": [] + }, + { + "environment": "{}", + "name": "renderman", + "label": "Pixar Renderman", + "variants": [ + { + "host_names": [ + "maya" + ], + "app_variants": [ + "maya/2022" + ], + "environment": "{\n \"RFMTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManForMaya-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManForMaya-24.3\",\n \"linux\": \"/opt/pixar/RenderManForMaya-24.3\"\n },\n \"RMANTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManProServer-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManProServer-24.3\",\n \"linux\": \"/opt/pixar/RenderManProServer-24.3\"\n }\n}", + "name": "24-3-maya", + "label": "24.3 RFM" + } + ] + } + ] +} diff --git a/server_addon/applications/server/version.py b/server_addon/applications/server/version.py new file mode 100644 index 0000000000..b3f4756216 --- /dev/null +++ b/server_addon/applications/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.2" diff --git a/server_addon/blender/server/__init__.py b/server_addon/blender/server/__init__.py new file mode 100644 index 0000000000..a7d6cb4400 --- /dev/null +++ b/server_addon/blender/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import BlenderSettings, DEFAULT_VALUES + + +class BlenderAddon(BaseServerAddon): + name = "blender" + title = "Blender" + version = __version__ + settings_model: Type[BlenderSettings] = BlenderSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/blender/server/settings/__init__.py b/server_addon/blender/server/settings/__init__.py new file mode 100644 index 0000000000..3d51e5c3e1 --- /dev/null +++ b/server_addon/blender/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + BlenderSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "BlenderSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/blender/server/settings/imageio.py b/server_addon/blender/server/settings/imageio.py new file mode 100644 index 0000000000..a6d3c5ff64 --- /dev/null +++ b/server_addon/blender/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: 
list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class BlenderImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/blender/server/settings/main.py b/server_addon/blender/server/settings/main.py new file mode 100644 index 0000000000..4476ea709b --- /dev/null +++ b/server_addon/blender/server/settings/main.py @@ -0,0 +1,70 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + TemplateWorkfileBaseOptions, +) + +from .imageio import BlenderImageIOModel +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_BLENDER_PUBLISH_SETTINGS +) +from .render_settings import ( + RenderSettingsModel, + DEFAULT_RENDER_SETTINGS +) + + +class UnitScaleSettingsModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + apply_on_opening: bool = Field( + False, title="Apply on Opening Existing Files") + base_file_unit_scale: float = Field( + 1.0, title="Base File Unit Scale" + ) + + +class BlenderSettings(BaseSettingsModel): + unit_scale_settings: UnitScaleSettingsModel = Field( + default_factory=UnitScaleSettingsModel, + title="Set Unit Scale" + ) + set_resolution_startup: bool = Field( + True, + title="Set Resolution on Startup" + ) + set_frames_startup: bool = Field( + True, + title="Set Start/End Frames and FPS on Startup" + ) + imageio: BlenderImageIOModel = Field( + default_factory=BlenderImageIOModel, + title="Color Management (ImageIO)" + ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") + workfile_builder: TemplateWorkfileBaseOptions = Field( + default_factory=TemplateWorkfileBaseOptions, + title="Workfile Builder" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins" + ) + + +DEFAULT_VALUES = { + "unit_scale_settings": { + "enabled": True, + "apply_on_opening": False, + "base_file_unit_scale": 0.01 + }, + "set_frames_startup": True, + "set_resolution_startup": True, + "render_settings": DEFAULT_RENDER_SETTINGS, + "publish": DEFAULT_BLENDER_PUBLISH_SETTINGS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py new file mode 100644 index 0000000000..5e047b7013 --- /dev/null +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -0,0 +1,324 @@ +import json +from pydantic import Field, validator +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class 
ValidateFileSavedModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFileSaved") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_families: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + +class ExtractBlendModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + families: list[str] = Field( + default_factory=list, + title="Families" + ) + + +class ExtractPlayblastModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + presets: str = Field("", title="Presets", widget="textarea") + + @validator("presets") + def validate_json(cls, value): + return validate_json_dict(value) + + +class PublishPuginsModel(BaseSettingsModel): + ValidateCameraZeroKeyframe: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Camera Zero Keyframe", + section="Validators" + ) + ValidateFileSaved: ValidateFileSavedModel = Field( + default_factory=ValidateFileSavedModel, + title="Validate File Saved", + section="Validators" + ) + ValidateRenderCameraIsSet: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Render Camera Is Set", + section="Validators" + ) + ValidateDeadlinePublish: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Render Output for Deadline", + section="Validators" + ) + ValidateMeshHasUvs: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh Has Uvs" + ) + ValidateMeshNoNegativeScale: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh No Negative Scale" + ) + ValidateTransformZero: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Transform Zero" + ) + ValidateNoColonsInName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate No Colons In Name" + ) + ExtractBlend: ExtractBlendModel = Field( + default_factory=ExtractBlendModel, + title="Extract Blend", + section="Extractors" + ) + ExtractFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract FBX" + ) + ExtractABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract ABC" + ) + ExtractBlendAnimation: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Blend Animation" + ) + ExtractAnimationFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Animation FBX" + ) + ExtractCamera: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera" + ) + ExtractCameraABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera as ABC" + ) + ExtractLayout: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Layout" + ) + ExtractThumbnail: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Thumbnail" + ) + ExtractPlayblast: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Playblast" + ) + + +DEFAULT_BLENDER_PUBLISH_SETTINGS = { + "ValidateCameraZeroKeyframe": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFileSaved": { + "enabled": True, + "optional": False, + "active": True, + "exclude_families": [] + }, + 
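The `enabled`/`optional`/`active` triplet used throughout these models mirrors the usual pyblish plugin toggles: `enabled` switches the plugin off entirely, `optional` exposes an artist-facing checkbox in the publisher, and `active` is that checkbox's default state. A hedged sketch of applying such a settings block to a plugin class (`ValidateFileSaved` below is a bare stand-in, not the real plugin; OpenPype applies settings through its own machinery):

```python
def apply_plugin_settings(plugin_cls, settings: dict):
    """Copy toggle settings onto a pyblish-style plugin class."""
    for key in ("enabled", "optional", "active"):
        if key in settings:
            setattr(plugin_cls, key, settings[key])
    return plugin_cls


class ValidateFileSaved:  # stand-in with pyblish-like class attributes
    enabled = True
    optional = False
    active = True


apply_plugin_settings(
    ValidateFileSaved,
    {"enabled": True, "optional": False, "active": True},
)
```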
"ValidateRenderCameraIsSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateDeadlinePublish": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateMeshHasUvs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateTransformZero": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoColonsInName": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractBlend": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "camera", + "rig", + "action", + "layout", + "blendScene" + ] + }, + "ExtractFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractABC": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractBlendAnimation": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractAnimationFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractCamera": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractCameraABC": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractLayout": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractThumbnail": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "model": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": False, + "show_shadows": False, + "show_cavity": True + }, + "overlay": { + "show_overlays": False + } + } + }, + "rig": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": True, + "show_shadows": False, + "show_cavity": False + }, + "overlay": { + "show_overlays": True, + "show_ortho_grid": False, + "show_floor": False, + "show_axis_x": False, + "show_axis_y": False, + "show_axis_z": False, + "show_text": False, + "show_stats": False, + "show_cursor": False, + "show_annotation": False, + "show_extras": False, + "show_relationship_lines": False, + "show_outline_selected": False, + "show_motion_paths": False, + "show_object_origins": False, + "show_bones": True + } + } + } + }, + indent=4, + ) + }, + "ExtractPlayblast": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "default": { + "image_settings": { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": "8", + "compression": 15 + }, + "display_options": { + "shading": { + "type": "MATERIAL", + "render_pass": "COMBINED" + }, + "overlay": { + "show_overlays": False + } + } + } + }, + indent=4 + ) + } +} diff --git a/server_addon/blender/server/settings/render_settings.py b/server_addon/blender/server/settings/render_settings.py new file mode 100644 index 0000000000..f62013982e --- /dev/null +++ b/server_addon/blender/server/settings/render_settings.py @@ -0,0 +1,109 @@ +"""Providing models and values for Blender Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + 
{"value": "dot", "label": ". (dot)"} + ] + + +def image_format_enum(): + return [ + {"value": "exr", "label": "OpenEXR"}, + {"value": "bmp", "label": "BMP"}, + {"value": "rgb", "label": "Iris"}, + {"value": "png", "label": "PNG"}, + {"value": "jpg", "label": "JPEG"}, + {"value": "jp2", "label": "JPEG 2000"}, + {"value": "tga", "label": "Targa"}, + {"value": "tif", "label": "TIFF"}, + ] + + +def aov_list_enum(): + return [ + {"value": "empty", "label": "< none >"}, + {"value": "combined", "label": "Combined"}, + {"value": "z", "label": "Z"}, + {"value": "mist", "label": "Mist"}, + {"value": "normal", "label": "Normal"}, + {"value": "diffuse_light", "label": "Diffuse Light"}, + {"value": "diffuse_color", "label": "Diffuse Color"}, + {"value": "specular_light", "label": "Specular Light"}, + {"value": "specular_color", "label": "Specular Color"}, + {"value": "volume_light", "label": "Volume Light"}, + {"value": "emission", "label": "Emission"}, + {"value": "environment", "label": "Environment"}, + {"value": "shadow", "label": "Shadow"}, + {"value": "ao", "label": "Ambient Occlusion"}, + {"value": "denoising", "label": "Denoising"}, + {"value": "volume_direct", "label": "Direct Volumetric Scattering"}, + {"value": "volume_indirect", "label": "Indirect Volumetric Scattering"} + ] + + +def custom_passes_types_enum(): + return [ + {"value": "COLOR", "label": "Color"}, + {"value": "VALUE", "label": "Value"}, + ] + + +class CustomPassesModel(BaseSettingsModel): + """Custom Passes""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field( + "COLOR", + title="Type", + enum_resolver=custom_passes_types_enum + ) + + +class RenderSettingsModel(BaseSettingsModel): + default_render_image_folder: str = Field( + title="Default Render Image Folder" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator Character", + enum_resolver=aov_separators_enum + ) + image_format: str = Field( + "exr", + title="Image Format", + enum_resolver=image_format_enum + ) + multilayer_exr: bool = Field( + title="Multilayer (EXR)" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=aov_list_enum, + title="AOVs to create" + ) + custom_passes: list[CustomPassesModel] = Field( + default_factory=list, + title="Custom Passes", + description=( + "Add custom AOVs. They are added to the view layer and in the " + "Compositing Nodetree,\nbut they need to be added manually to " + "the Shader Nodetree." 
+ ) + ) + + +DEFAULT_RENDER_SETTINGS = { + "default_render_image_folder": "renders/blender", + "aov_separator": "underscore", + "image_format": "exr", + "multilayer_exr": True, + "aov_list": [], + "custom_passes": [] +} diff --git a/server_addon/blender/server/version.py b/server_addon/blender/server/version.py new file mode 100644 index 0000000000..ae7362549b --- /dev/null +++ b/server_addon/blender/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.3" diff --git a/server_addon/celaction/server/__init__.py b/server_addon/celaction/server/__init__.py new file mode 100644 index 0000000000..90d3dbaa01 --- /dev/null +++ b/server_addon/celaction/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CelActionSettings, DEFAULT_VALUES + + +class CelActionAddon(BaseServerAddon): + name = "celaction" + title = "CelAction" + version = __version__ + settings_model: Type[CelActionSettings] = CelActionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/celaction/server/imageio.py b/server_addon/celaction/server/imageio.py new file mode 100644 index 0000000000..72da441528 --- /dev/null +++ b/server_addon/celaction/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CelActionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/celaction/server/settings.py b/server_addon/celaction/server/settings.py new file mode 100644 index 0000000000..68d1d2dc31 --- /dev/null +++ b/server_addon/celaction/server/settings.py @@ -0,0 +1,92 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import CelActionImageIOModel + + +class CollectRenderPathModel(BaseSettingsModel): + output_extension: str = Field( + "", + title="Output render file extension" + ) + anatomy_template_key_render_files: str = Field( + "", + title="Anatomy template key: render files" + ) + anatomy_template_key_metadata: str = Field( + "", + title="Anatomy template key: metadata job file" + ) + + +def _workfile_submit_overrides(): + return [ + { + "value": "render_chunk", + "label": "Pass chunk size" + }, + { + "value": 
"frame_range", + "label": "Pass frame range" + }, + { + "value": "resolution", + "label": "Pass resolution" + } + ] + + +class WorkfileModel(BaseSettingsModel): + submission_overrides: list[str] = Field( + default_factory=list, + title="Submission workfile overrides", + enum_resolver=_workfile_submit_overrides + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectRenderPath: CollectRenderPathModel = Field( + default_factory=CollectRenderPathModel, + title="Collect Render Path" + ) + + +class CelActionSettings(BaseSettingsModel): + imageio: CelActionImageIOModel = Field( + default_factory=CelActionImageIOModel, + title="Color Management (ImageIO)" + ) + workfile: WorkfileModel = Field( + title="Workfile" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "workfile": { + "submission_overrides": [ + "render_chunk", + "frame_range", + "resolution" + ] + }, + "publish": { + "CollectRenderPath": { + "output_extension": "png", + "anatomy_template_key_render_files": "render", + "anatomy_template_key_metadata": "render" + } + } +} diff --git a/server_addon/celaction/server/version.py b/server_addon/celaction/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/celaction/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/clockify/server/__init__.py b/server_addon/clockify/server/__init__.py new file mode 100644 index 0000000000..0fa453fdf4 --- /dev/null +++ b/server_addon/clockify/server/__init__.py @@ -0,0 +1,15 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ClockifySettings + + +class ClockifyAddon(BaseServerAddon): + name = "clockify" + title = "Clockify" + version = __version__ + settings_model: Type[ClockifySettings] = ClockifySettings + frontend_scopes = {} + services = {} diff --git a/server_addon/clockify/server/settings.py b/server_addon/clockify/server/settings.py new file mode 100644 index 0000000000..9067cd4243 --- /dev/null +++ b/server_addon/clockify/server/settings.py @@ -0,0 +1,10 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ClockifySettings(BaseSettingsModel): + workspace_name: str = Field( + "", + title="Workspace name", + scope=["studio"] + ) diff --git a/server_addon/clockify/server/version.py b/server_addon/clockify/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/clockify/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/core/server/__init__.py b/server_addon/core/server/__init__.py new file mode 100644 index 0000000000..4de2b038a5 --- /dev/null +++ b/server_addon/core/server/__init__.py @@ -0,0 +1,15 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CoreSettings, DEFAULT_VALUES + + +class CoreAddon(BaseServerAddon): + name = "core" + title = "Core" + version = __version__ + settings_model = CoreSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/core/server/settings/__init__.py b/server_addon/core/server/settings/__init__.py new file mode 100644 index 0000000000..527a2bdc0c --- 
/dev/null +++ b/server_addon/core/server/settings/__init__.py @@ -0,0 +1,7 @@ +from .main import CoreSettings, DEFAULT_VALUES + + +__all__ = ( + "CoreSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py new file mode 100644 index 0000000000..ca8f7e63ed --- /dev/null +++ b/server_addon/core/server/settings/main.py @@ -0,0 +1,207 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathListModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.exceptions import BadRequestException + +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_VALUES +from .tools import GlobalToolsModel, DEFAULT_TOOLS_VALUES + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class CoreImageIOFileRulesModel(BaseSettingsModel): + activate_global_file_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CoreImageIOConfigModel(BaseSettingsModel): + filepath: list[str] = Field(default_factory=list, title="Config path") + + +class CoreImageIOBaseModel(BaseSettingsModel): + activate_global_color_management: bool = Field( + False, + title="Enable Color Management" + ) + ocio_config: CoreImageIOConfigModel = Field( + default_factory=CoreImageIOConfigModel, + title="OCIO config" + ) + file_rules: CoreImageIOFileRulesModel = Field( + default_factory=CoreImageIOFileRulesModel, + title="File Rules" + ) + + +class VersionStartCategoryProfileModel(BaseSettingsModel): + _layout = "expanded" + host_names: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + version_start: int = Field( + 1, + title="Version Start", + ge=0 + ) + + +class VersionStartCategoryModel(BaseSettingsModel): + profiles: list[VersionStartCategoryProfileModel] = Field( + default_factory=list, + title="Profiles" + ) + + +class CoreSettings(BaseSettingsModel): + studio_name: str = Field("", title="Studio name", scope=["studio"]) + studio_code: str = Field("", title="Studio code", scope=["studio"]) + environments: str = Field( + "{}", + title="Global environment variables", + widget="textarea", + scope=["studio"], + ) + tools: GlobalToolsModel = Field( + default_factory=GlobalToolsModel, + title="Tools" + ) + version_start_category: VersionStartCategoryModel = Field( + default_factory=VersionStartCategoryModel, + title="Version start" + ) + imageio: CoreImageIOBaseModel = Field( + default_factory=CoreImageIOBaseModel, + title="Color Management (ImageIO)" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + project_plugins: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Additional Project Plugin Paths", + ) + 
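`project_folder_structure`, defined just below, holds a nested JSON object in which every key is a directory name and leaves are empty objects (see the default further down, rooted under the `__project_root__` placeholder). A minimal sketch of materializing such a tree on disk, with a hypothetical project root:

```python
import json
import os


def create_folder_tree(root: str, structure: dict) -> None:
    """Recursively create the directories described by a nested dict."""
    for name, children in structure.items():
        path = os.path.join(root, name)
        os.makedirs(path, exist_ok=True)
        create_folder_tree(path, children)


structure = json.loads('{"assets": {"characters": {}}, "shots": {}}')
create_folder_tree("/projects/my_project", structure)
```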
project_folder_structure: str = Field( + "{}", + widget="textarea", + title="Project folder structure", + section="---" + ) + project_environments: str = Field( + "{}", + widget="textarea", + title="Project environments", + section="---" + ) + + @validator( + "environments", + "project_folder_structure", + "project_environments") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +DEFAULT_VALUES = { + "imageio": { + "activate_global_color_management": False, + "ocio_config": { + "filepath": [ + "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio", + "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio" + ] + }, + "file_rules": { + "activate_global_file_rules": False, + "rules": [ + { + "name": "example", + "pattern": ".*(beauty).*", + "colorspace": "ACES - ACEScg", + "ext": "exr" + } + ] + } + }, + "studio_name": "", + "studio_code": "", + "environments": "{}", + "tools": DEFAULT_TOOLS_VALUES, + "version_start_category": { + "profiles": [] + }, + "publish": DEFAULT_PUBLISH_VALUES, + "project_folder_structure": json.dumps({ + "__project_root__": { + "prod": {}, + "resources": { + "footage": { + "plates": {}, + "offline": {} + }, + "audio": {}, + "art_dept": {} + }, + "editorial": {}, + "assets": { + "characters": {}, + "locations": {} + }, + "shots": {} + } + }, indent=4), + "project_plugins": { + "windows": [], + "darwin": [], + "linux": [] + }, + "project_environments": "{}" +} diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py new file mode 100644 index 0000000000..69a759465e --- /dev/null +++ b/server_addon/core/server/settings/publish_plugins.py @@ -0,0 +1,966 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + +from ayon_server.types import ColorRGBA_uint8 + + +class ValidateBaseModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class CollectAnatomyInstanceDataModel(BaseSettingsModel): + _isGroup = True + follow_workfile_version: bool = Field( + True, title="Collect Anatomy Instance Data" + ) + + +class CollectAudioModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + audio_product_name: str = Field( + "", title="Name of audio variant" + ) + + +class CollectSceneVersionModel(BaseSettingsModel): + _isGroup = True + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + skip_hosts_headless_publish: list[str] = Field( + default_factory=list, + title="Skip for host if headless publish" + ) + + +class CollectCommentPIModel(BaseSettingsModel): + enabled: bool = Field(True) + families: list[str] = Field(default_factory=list, title="Families") + + +class CollectFramesFixDefModel(BaseSettingsModel): + enabled: bool = Field(True) + rewrite_version_enable: bool = Field( + True, + title="Show 'Rewrite latest version' toggle" + ) + + +class ValidateIntentProfile(BaseSettingsModel): + _layout = "expanded" + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + 
enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + # TODO This was 'validate' in v3 + validate_intent: bool = Field(True, title="Validate") + + +class ValidateIntentModel(BaseSettingsModel): + """Validate if Publishing intent was selected. + + It is possible to disable validation for specific publishing context + with profiles. + """ + + _isGroup = True + enabled: bool = Field(False) + profiles: list[ValidateIntentProfile] = Field(default_factory=list) + + +class ExtractThumbnailFFmpegModel(BaseSettingsModel): + _layout = "expanded" + input: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + output: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + + +class ExtractThumbnailModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + ffmpeg_args: ExtractThumbnailFFmpegModel = Field( + default_factory=ExtractThumbnailFFmpegModel + ) + + +def _extract_oiio_transcoding_type(): + return [ + {"value": "colorspace", "label": "Use Colorspace"}, + {"value": "display", "label": "Use Display&View"} + ] + + +class OIIOToolArgumentsModel(BaseSettingsModel): + additional_command_args: list[str] = Field( + default_factory=list, title="Arguments") + + +class ExtractOIIOTranscodeOutputModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + extension: str = Field("", title="Extension") + transcoding_type: str = Field( + "colorspace", + title="Transcoding type", + enum_resolver=_extract_oiio_transcoding_type + ) + colorspace: str = Field("", title="Colorspace") + display: str = Field("", title="Display") + view: str = Field("", title="View") + oiiotool_args: OIIOToolArgumentsModel = Field( + default_factory=OIIOToolArgumentsModel, + title="OIIOtool arguments") + + tags: list[str] = Field(default_factory=list, title="Tags") + custom_tags: list[str] = Field(default_factory=list, title="Custom Tags") + + +class ExtractOIIOTranscodeProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + delete_original: bool = Field( + True, + title="Delete Original Representation" + ) + outputs: list[ExtractOIIOTranscodeOutputModel] = Field( + default_factory=list, + title="Output Definitions", + ) + + @validator("outputs") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ExtractOIIOTranscodeModel(BaseSettingsModel): + enabled: bool = Field(True) + profiles: list[ExtractOIIOTranscodeProfileModel] = Field( + default_factory=list, title="Profiles" + ) + + +# --- [START] Extract Review --- +class ExtractReviewFFmpegModel(BaseSettingsModel): + video_filters: list[str] = Field( + default_factory=list, + title="Video filters" + ) + audio_filters: list[str] = Field( + default_factory=list, + title="Audio filters" + ) + input: list[str] = Field( + default_factory=list, + title="Input arguments" + ) + output: list[str] = Field( + default_factory=list, + title="Output arguments" + ) + + +def extract_review_filter_enum(): + return [ + { + "value": "everytime", + "label": "Always" + }, + { + "value": 
"single_frame", + "label": "Only if input has 1 image frame" + }, + { + "value": "multi_frame", + "label": "Only if input is video or sequence of frames" + } + ] + + +class ExtractReviewFilterModel(BaseSettingsModel): + families: list[str] = Field(default_factory=list, title="Families") + product_names: list[str] = Field( + default_factory=list, title="Product names") + custom_tags: list[str] = Field(default_factory=list, title="Custom Tags") + single_frame_filter: str = Field( + "everytime", + description=( + "Use output always / only if input is 1 frame" + " image / only if has 2+ frames or is video" + ), + enum_resolver=extract_review_filter_enum + ) + + +class ExtractReviewLetterBox(BaseSettingsModel): + enabled: bool = Field(True) + ratio: float = Field( + 0.0, + title="Ratio", + ge=0.0, + le=10000.0 + ) + fill_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Fill Color" + ) + line_thickness: int = Field( + 0, + title="Line Thickness", + ge=0, + le=1000 + ) + line_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Line Color" + ) + + +class ExtractReviewOutputDefModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + ext: str = Field("", title="Output extension") + # TODO use some different source of tags + tags: list[str] = Field(default_factory=list, title="Tags") + burnins: list[str] = Field( + default_factory=list, title="Link to a burnin by name" + ) + ffmpeg_args: ExtractReviewFFmpegModel = Field( + default_factory=ExtractReviewFFmpegModel, + title="FFmpeg arguments" + ) + filter: ExtractReviewFilterModel = Field( + default_factory=ExtractReviewFilterModel, + title="Additional output filtering" + ) + overscan_crop: str = Field( + "", + title="Overscan crop", + description=( + "Crop input overscan. See the documentation for more information." + ) + ) + overscan_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Overscan color", + description=( + "Overscan color is used when input aspect ratio is not" + " same as output aspect ratio." + ) + ) + width: int = Field( + 0, + ge=0, + le=100000, + title="Output width", + description=( + "Width and Height must be both set to higher" + " value than 0 else source resolution is used." + ) + ) + height: int = Field( + 0, + title="Output height", + ge=0, + le=100000, + ) + scale_pixel_aspect: bool = Field( + True, + title="Scale pixel aspect", + description=( + "Rescale input when it's pixel aspect ratio is not 1." + " Usefull for anamorph reviews." + ) + ) + bg_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + description=( + "Background color is used only when input have transparency" + " and Alpha is higher than 0." 
+ ), + title="Background color", + ) + letter_box: ExtractReviewLetterBox = Field( + default_factory=ExtractReviewLetterBox, + title="Letter Box" + ) + + @validator("name") + def validate_name(cls, value): + """Ensure name does not contain weird characters""" + return normalize_name(value) + + +class ExtractReviewProfileModel(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + # TODO use hosts enum + hosts: list[str] = Field( + default_factory=list, title="Host names" + ) + outputs: list[ExtractReviewOutputDefModel] = Field( + default_factory=list, title="Output Definitions" + ) + + @validator("outputs") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ExtractReviewModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + profiles: list[ExtractReviewProfileModel] = Field( + default_factory=list, + title="Profiles" + ) +# --- [END] Extract Review --- + + +# --- [Start] Extract Burnin --- +class ExtractBurninOptionsModel(BaseSettingsModel): + font_size: int = Field(0, ge=0, title="Font size") + font_color: ColorRGBA_uint8 = Field( + (255, 255, 255, 1.0), + title="Font color" + ) + bg_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 1.0), + title="Background color" + ) + x_offset: int = Field(0, title="X Offset") + y_offset: int = Field(0, title="Y Offset") + bg_padding: int = Field(0, title="Padding around text") + font_filepath: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Font file path" + ) + + +class ExtractBurninDefFilter(BaseSettingsModel): + families: list[str] = Field( + default_factory=list, + title="Families" + ) + tags: list[str] = Field( + default_factory=list, + title="Tags" + ) + + +class ExtractBurninDef(BaseSettingsModel): + _isGroup = True + _layout = "expanded" + name: str = Field("") + TOP_LEFT: str = Field("", topic="Top Left") + TOP_CENTERED: str = Field("", topic="Top Centered") + TOP_RIGHT: str = Field("", topic="Top Right") + BOTTOM_LEFT: str = Field("", topic="Bottom Left") + BOTTOM_CENTERED: str = Field("", topic="Bottom Centered") + BOTTOM_RIGHT: str = Field("", topic="Bottom Right") + filter: ExtractBurninDefFilter = Field( + default_factory=ExtractBurninDefFilter, + title="Additional filtering" + ) + + @validator("name") + def validate_name(cls, value): + """Ensure name does not contain weird characters""" + return normalize_name(value) + + +class ExtractBurninProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, + title="Produt types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + burnins: list[ExtractBurninDef] = Field( + default_factory=list, + title="Burnins" + ) + + @validator("burnins") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + + return value + + +class ExtractBurninModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + options: ExtractBurninOptionsModel = Field( + default_factory=ExtractBurninOptionsModel, + title="Burnin formatting options" + ) + profiles: list[ExtractBurninProfile] = Field( + default_factory=list, + title="Profiles" + ) +# --- [END] 
Extract Burnin --- + + +class PreIntegrateThumbnailsProfile(BaseSettingsModel): + _isGroup = True + product_types: list[str] = Field( + default_factory=list, + title="Product types", + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts", + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names", + ) + integrate_thumbnail: bool = Field(True) + + +class PreIntegrateThumbnailsModel(BaseSettingsModel): + """Explicitly set if Thumbnail representation should be integrated. + + If no matching profile set, existing state from Host implementation + is kept. + """ + + _isGroup = True + enabled: bool = Field(True) + integrate_profiles: list[PreIntegrateThumbnailsProfile] = Field( + default_factory=list, + title="Integrate profiles" + ) + + +class IntegrateProductGroupProfile(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class IntegrateProductGroupModel(BaseSettingsModel): + """Group published products by filtering logic. + + Set all published instances as a part of specific group named according + to 'Template'. + + Implemented all variants of placeholders '{task}', '{product[type]}', + '{host}', '{product[name]}', '{renderlayer}'. + """ + + _isGroup = True + product_grouping_profiles: list[IntegrateProductGroupProfile] = Field( + default_factory=list, + title="Product group profiles" + ) + + +class IntegrateANProductGroupProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template: str = Field("", title="Template") + + +class IntegrateANTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroVersionModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + families: list[str] = Field(default_factory=list, title="Families") + # TODO remove when removed from 
client code + template_name_profiles: list[IntegrateHeroTemplateNameProfileModel] = ( + Field( + default_factory=list, + title="Template name profiles" + ) + ) + + +class CleanUpModel(BaseSettingsModel): + _isGroup = True + paterns: list[str] = Field( + default_factory=list, + title="Patterns (regex)" + ) + remove_temp_renders: bool = Field(False, title="Remove Temp renders") + + +class CleanUpFarmModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + + +class PublishPuginsModel(BaseSettingsModel): + CollectAnatomyInstanceData: CollectAnatomyInstanceDataModel = Field( + default_factory=CollectAnatomyInstanceDataModel, + title="Collect Anatomy Instance Data" + ) + CollectAudio: CollectAudioModel = Field( + default_factory=CollectAudioModel, + title="Collect Audio" + ) + CollectSceneVersion: CollectSceneVersionModel = Field( + default_factory=CollectSceneVersionModel, + title="Collect Version from Workfile" + ) + collect_comment_per_instance: CollectCommentPIModel = Field( + default_factory=CollectCommentPIModel, + title="Collect comment per instance", + ) + CollectFramesFixDef: CollectFramesFixDefModel = Field( + default_factory=CollectFramesFixDefModel, + title="Collect Frames to Fix", + ) + ValidateEditorialAssetName: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Editorial Asset Name" + ) + ValidateVersion: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Version" + ) + ValidateIntent: ValidateIntentModel = Field( + default_factory=ValidateIntentModel, + title="Validate Intent" + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + default_factory=ExtractThumbnailModel, + title="Extract Thumbnail" + ) + ExtractOIIOTranscode: ExtractOIIOTranscodeModel = Field( + default_factory=ExtractOIIOTranscodeModel, + title="Extract OIIO Transcode" + ) + ExtractReview: ExtractReviewModel = Field( + default_factory=ExtractReviewModel, + title="Extract Review" + ) + ExtractBurnin: ExtractBurninModel = Field( + default_factory=ExtractBurninModel, + title="Extract Burnin" + ) + PreIntegrateThumbnails: PreIntegrateThumbnailsModel = Field( + default_factory=PreIntegrateThumbnailsModel, + title="Override Integrate Thumbnail Representations" + ) + IntegrateProductGroup: IntegrateProductGroupModel = Field( + default_factory=IntegrateProductGroupModel, + title="Integrate Product Group" + ) + IntegrateHeroVersion: IntegrateHeroVersionModel = Field( + default_factory=IntegrateHeroVersionModel, + title="Integrate Hero Version" + ) + CleanUp: CleanUpModel = Field( + default_factory=CleanUpModel, + title="Clean Up" + ) + CleanUpFarm: CleanUpFarmModel = Field( + default_factory=CleanUpFarmModel, + title="Clean Up Farm" + ) + + +DEFAULT_PUBLISH_VALUES = { + "CollectAnatomyInstanceData": { + "follow_workfile_version": False + }, + "CollectAudio": { + "enabled": False, + "audio_product_name": "audioMain" + }, + "CollectSceneVersion": { + "hosts": [ + "aftereffects", + "blender", + "celaction", + "fusion", + "harmony", + "hiero", + "houdini", + "maya", + "nuke", + "photoshop", + "resolve", + "tvpaint" + ], + "skip_hosts_headless_publish": [] + }, + "collect_comment_per_instance": { + "enabled": False, + "families": [] + }, + "CollectFramesFixDef": { + "enabled": True, + "rewrite_version_enable": True + }, + "ValidateEditorialAssetName": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVersion": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateIntent": { + "enabled": False, + 
"profiles": [] + }, + "ExtractThumbnail": { + "enabled": True, + "ffmpeg_args": { + "input": [ + "-apply_trc gamma22" + ], + "output": [] + } + }, + "ExtractOIIOTranscode": { + "enabled": True, + "profiles": [] + }, + "ExtractReview": { + "enabled": True, + "profiles": [ + { + "product_types": [], + "hosts": [], + "outputs": [ + { + "name": "png", + "ext": "png", + "tags": [ + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [], + "output": [] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "single_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 1920, + "height": 1080, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + }, + { + "name": "h264", + "ext": "mp4", + "tags": [ + "burnin", + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [ + "-apply_trc gamma22" + ], + "output": [ + "-pix_fmt yuv420p", + "-crf 18", + "-intra" + ] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "multi_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 0, + "height": 0, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + } + ] + } + ] + }, + "ExtractBurnin": { + "enabled": True, + "options": { + "font_size": 42, + "font_color": [255, 255, 255, 1.0], + "bg_color": [0, 0, 0, 0.5], + "x_offset": 5, + "y_offset": 5, + "bg_padding": 5, + "font_filepath": { + "windows": "", + "darwin": "", + "linux": "" + } + }, + "profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + }, + ] + }, + { + "product_types": ["review"], + "hosts": [ + "maya", + "houdini", + "max" + ], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "focal_length_burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "{focalLength:.2f} mm", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + } + ] + } + ] + }, + "PreIntegrateThumbnails": { + "enabled": True, + "integrate_profiles": [] + }, + "IntegrateProductGroup": { + "product_grouping_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "" + } + ] + }, + "IntegrateHeroVersion": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "rig", + "look", + "pointcache", + "animation", + "setdress", + "layout", + "mayaScene", + "simpleUnrealTexture" + ], + "template_name_profiles": [ + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "simpleUnrealTextureHero" + } + ] + }, + "CleanUp": { + "paterns": [], + "remove_temp_renders": False + }, + "CleanUpFarm": { + "enabled": False + } +} diff --git a/server_addon/core/server/settings/tools.py b/server_addon/core/server/settings/tools.py new file mode 100644 index 0000000000..7befc795e4 --- /dev/null +++ b/server_addon/core/server/settings/tools.py @@ -0,0 +1,506 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + + +class ProductTypeSmartSelectModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Product type") + task_names: list[str] = Field(default_factory=list, title="Task names") + + @validator("name") + def normalize_value(cls, value): + return normalize_name(value) + + +class ProductNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class CreatorToolModel(BaseSettingsModel): + # TODO this was dynamic dictionary '{name: task_names}' + product_types_smart_select: list[ProductTypeSmartSelectModel] = Field( + default_factory=list, + title="Create Smart Select" + ) + product_name_profiles: list[ProductNameProfile] = Field( + default_factory=list, + title="Product name profiles" + ) + + @validator("product_types_smart_select") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class WorkfileTemplateProfile(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + # TODO this was using project anatomy template name + workfile_template: str = Field("", title="Workfile template") + + +class LastWorkfileOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + use_last_published_workfile: bool = Field( + True, title="Use last published workfile" + ) + + +class WorkfilesToolOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + + +class ExtraWorkFoldersProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = 
Field(default_factory=list, title="Task names") + folders: list[str] = Field(default_factory=list, title="Folders") + + +class WorkfilesLockProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + host_names: list[str] = Field(default_factory=list, title="Hosts") + enabled: bool = Field(True, title="Enabled") + + +class WorkfilesToolModel(BaseSettingsModel): + workfile_template_profiles: list[WorkfileTemplateProfile] = Field( + default_factory=list, + title="Workfile template profiles" + ) + last_workfile_on_startup: list[LastWorkfileOnStartupProfile] = Field( + default_factory=list, + title="Open last workfile on launch" + ) + open_workfile_tool_on_startup: list[WorkfilesToolOnStartupProfile] = Field( + default_factory=list, + title="Open workfile tool on launch" + ) + extra_folders: list[ExtraWorkFoldersProfile] = Field( + default_factory=list, + title="Extra work folders" + ) + workfile_lock_profiles: list[WorkfilesLockProfile] = Field( + default_factory=list, + title="Workfile lock profiles" + ) + + +def _product_types_enum(): + return [ + "action", + "animation", + "assembly", + "audio", + "backgroundComp", + "backgroundLayout", + "camera", + "editorial", + "gizmo", + "image", + "layout", + "look", + "matchmove", + "mayaScene", + "model", + "nukenodes", + "plate", + "pointcache", + "prerender", + "redshiftproxy", + "reference", + "render", + "review", + "rig", + "setdress", + "take", + "usdShade", + "vdbcache", + "vrayproxy", + "workfile", + "xgen", + "yetiRig", + "yeticache" + ] + + +class LoaderProductTypeFilterProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + is_include: bool = Field(True, title="Exclude / Include") + filter_product_types: list[str] = Field( + default_factory=list, + enum_resolver=_product_types_enum + ) + + +class LoaderToolModel(BaseSettingsModel): + product_type_filter_profiles: list[LoaderProductTypeFilterProfile] = Field( + default_factory=list, + title="Product type filtering" + ) + + +class PublishTemplateNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + template_name: str = Field("", title="Template name") + + +class CustomStagingDirProfileModel(BaseSettingsModel): + active: bool = Field(True, title="Is active") + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names" + ) + custom_staging_dir_persistent: bool = Field( + False, title="Custom Staging Folder Persistent" + ) + template_name: str = Field("", title="Template Name") + + +class PublishToolModel(BaseSettingsModel): + template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + 
title="Template name profiles" + ) + hero_template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + title="Hero template name profiles" + ) + custom_staging_dir_profiles: list[CustomStagingDirProfileModel] = Field( + default_factory=list, + title="Custom Staging Dir Profiles" + ) + + +class GlobalToolsModel(BaseSettingsModel): + creator: CreatorToolModel = Field( + default_factory=CreatorToolModel, + title="Creator" + ) + Workfiles: WorkfilesToolModel = Field( + default_factory=WorkfilesToolModel, + title="Workfiles" + ) + loader: LoaderToolModel = Field( + default_factory=LoaderToolModel, + title="Loader" + ) + publish: PublishToolModel = Field( + default_factory=PublishToolModel, + title="Publish" + ) + + +DEFAULT_TOOLS_VALUES = { + "creator": { + "product_types_smart_select": [ + { + "name": "Render", + "task_names": [ + "light", + "render" + ] + }, + { + "name": "Model", + "task_names": [ + "model" + ] + }, + { + "name": "Layout", + "task_names": [ + "layout" + ] + }, + { + "name": "Look", + "task_names": [ + "look" + ] + }, + { + "name": "Rig", + "task_names": [ + "rigging", + "rig" + ] + } + ], + "product_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{variant}" + }, + { + "product_types": [ + "workfile" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": [ + "render" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Variant}" + }, + { + "product_types": [ + "renderLayer", + "renderPass" + ], + "hosts": [ + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}_{Renderlayer}_{Renderpass}" + }, + { + "product_types": [ + "review", + "workfile" + ], + "hosts": [ + "aftereffects", + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": ["render"], + "hosts": [ + "aftereffects" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Composition}{Variant}" + }, + { + "product_types": [ + "staticMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "S_{folder[name]}{variant}" + }, + { + "product_types": [ + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "SK_{folder[name]}{variant}" + } + ] + }, + "Workfiles": { + "workfile_template_profiles": [ + { + "task_types": [], + "hosts": [], + "workfile_template": "work" + }, + { + "task_types": [], + "hosts": [ + "unreal" + ], + "workfile_template": "work_unreal" + } + ], + "last_workfile_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": True, + "use_last_published_workfile": False + } + ], + "open_workfile_tool_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": False + } + ], + "extra_folders": [], + "workfile_lock_profiles": [] + }, + "loader": { + "product_type_filter_profiles": [ + { + "hosts": [], + "task_types": [], + "is_include": True, + "filter_product_types": [] + } + ] + }, + "publish": { + "template_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish" + }, + { + "product_types": [ + "review", + "render", + "prerender" + ], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish_render" + }, + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_simpleUnrealTexture" + }, + { + "product_types": [ + "staticMesh", + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_maya2unreal" + }, + { + "product_types": [ + "online" + ], + "hosts": [ + "traypublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_online" + } + ], + "hero_template_name_profiles": [ + { + "product_types": [ + "simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "hero_simpleUnrealTextureHero" + } + ] + } +} diff --git a/server_addon/core/server/version.py b/server_addon/core/server/version.py new file mode 100644 index 0000000000..b3f4756216 --- /dev/null +++ b/server_addon/core/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.2" diff --git a/server_addon/create_ayon_addons.py b/server_addon/create_ayon_addons.py new file mode 100644 index 0000000000..61dbd5c8d9 --- /dev/null +++ b/server_addon/create_ayon_addons.py @@ -0,0 +1,308 @@ +import os +import sys +import re +import json +import shutil +import zipfile +import platform +import collections +from pathlib import Path +from typing import Any, Optional, Iterable, Pattern, List, Tuple + +# Patterns of directories to be skipped for server part of addon +IGNORE_DIR_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip directories starting with '.' + r"^\.", + # Skip any pycache folders + "^__pycache__$" + } +] + +# Patterns of files to be skipped for server part of addon +IGNORE_FILE_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip files starting with '.' + # NOTE this could be an issue in some cases + r"^\.", + # Skip '.pyc' files + r"\.pyc$" + } +] + + +class ZipFileLongPaths(zipfile.ZipFile): + """Allows longer paths in zip files. + + Regular DOS paths are limited to MAX_PATH (260) characters, including + the string's terminating NUL character. + That limit can be exceeded by using an extended-length path that + starts with the '\\?\' prefix. + """ + _is_windows = platform.system().lower() == "windows" + + def _extract_member(self, member, tpath, pwd): + if self._is_windows: + tpath = os.path.abspath(tpath) + if tpath.startswith("\\\\"): + tpath = "\\\\?\\UNC\\" + tpath[2:] + else: + tpath = "\\\\?\\" + tpath + + return super(ZipFileLongPaths, self)._extract_member( + member, tpath, pwd + ) + + +def _value_match_regexes(value: str, regexes: Iterable[Pattern]) -> bool: + return any( + regex.search(value) + for regex in regexes + ) + + +def find_files_in_subdir( + src_path: str, + ignore_file_patterns: Optional[List[Pattern]] = None, + ignore_dir_patterns: Optional[List[Pattern]] = None, + ignore_subdirs: Optional[Iterable[Tuple[str]]] = None +): + """Find all files to copy in subdirectories of given path. + + All files that match any of the patterns in 'ignore_file_patterns' will + be skipped and any directories that match any of the patterns in + 'ignore_dir_patterns' will be skipped with all subfiles. + + Args: + src_path (str): Path to directory to search in. + ignore_file_patterns (Optional[List[Pattern]]): List of regexes + to match files to ignore. + ignore_dir_patterns (Optional[List[Pattern]]): List of regexes + to match directories to ignore. + ignore_subdirs (Optional[Iterable[Tuple[str]]]): List of + subdirectories to ignore. 
+ + Returns: + List[Tuple[str, str]]: List of tuples with path to file and parent + directories relative to 'src_path'. + """ + + if ignore_file_patterns is None: + ignore_file_patterns = IGNORE_FILE_PATTERNS + + if ignore_dir_patterns is None: + ignore_dir_patterns = IGNORE_DIR_PATTERNS + output: list[tuple[str, str]] = [] + + hierarchy_queue = collections.deque() + hierarchy_queue.append((src_path, [])) + while hierarchy_queue: + item: tuple[str, str] = hierarchy_queue.popleft() + dirpath, parents = item + if ignore_subdirs and parents in ignore_subdirs: + continue + for name in os.listdir(dirpath): + path = os.path.join(dirpath, name) + if os.path.isfile(path): + if not _value_match_regexes(name, ignore_file_patterns): + items = list(parents) + items.append(name) + output.append((path, os.path.sep.join(items))) + continue + + if not _value_match_regexes(name, ignore_dir_patterns): + items = list(parents) + items.append(name) + hierarchy_queue.append((path, items)) + + return output + + +def read_addon_version(version_path: Path) -> str: + # Read version + version_content: dict[str, Any] = {} + with open(str(version_path), "r") as stream: + exec(stream.read(), version_content) + return version_content["__version__"] + + +def get_addon_version(addon_dir: Path) -> str: + return read_addon_version(addon_dir / "server" / "version.py") + + +def create_addon_zip( + output_dir: Path, + addon_name: str, + addon_version: str, + keep_source: bool +): + zip_filepath = output_dir / f"{addon_name}-{addon_version}.zip" + addon_output_dir = output_dir / addon_name / addon_version + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + zipf.writestr( + "manifest.json", + json.dumps({ + "addon_name": addon_name, + "addon_version": addon_version + }) + ) + # Add client code content to zip + src_root = os.path.normpath(str(addon_output_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(addon_output_dir)): + rel_root = "" + if root != src_root: + rel_root = root[src_root_offset:] + + for filename in filenames: + src_path = os.path.join(root, filename) + if rel_root: + dst_path = os.path.join("addon", rel_root, filename) + else: + dst_path = os.path.join("addon", filename) + zipf.write(src_path, dst_path) + + if not keep_source: + shutil.rmtree(str(output_dir / addon_name)) + + +def create_openpype_package( + addon_dir: Path, + output_dir: Path, + root_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + pyproject_path = addon_dir / "client" / "pyproject.toml" + + openpype_dir = root_dir / "openpype" + version_path = openpype_dir / "version.py" + addon_version = read_addon_version(version_path) + + addon_output_dir = output_dir / "openpype" / addon_version + private_dir = addon_output_dir / "private" + # Make sure dir exists + addon_output_dir.mkdir(parents=True) + private_dir.mkdir(parents=True) + + # Copy version + shutil.copy(str(version_path), str(addon_output_dir)) + for subitem in server_dir.iterdir(): + shutil.copy(str(subitem), str(addon_output_dir / subitem.name)) + + # Copy pyproject.toml + shutil.copy( + str(pyproject_path), + (private_dir / pyproject_path.name) + ) + + ignored_hosts = [] + ignored_modules = [ + "ftrack", + "shotgrid", + "sync_server", + "example_addons", + "slack" + ] + # Subdirs that won't be added to output zip file + ignored_subpaths = [ + ["addons"], + ["vendor", "common", "ayon_api"], + ] + ignored_subpaths.extend( + ["hosts", host_name] + for host_name in ignored_hosts + 
) + ignored_subpaths.extend( + ["modules", module_name] + for module_name in ignored_modules + ) + + # Zip client + zip_filepath = private_dir / "client.zip" + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + # Add client code content to zip + for path, sub_path in find_files_in_subdir( + str(openpype_dir), ignore_subdirs=ignored_subpaths + ): + zipf.write(path, f"{openpype_dir.name}/{sub_path}") + + if create_zip: + create_addon_zip(output_dir, "openpype", addon_version, keep_source) + + +def create_addon_package( + addon_dir: Path, + output_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + addon_version = get_addon_version(addon_dir) + + addon_output_dir = output_dir / addon_dir.name / addon_version + if addon_output_dir.exists(): + shutil.rmtree(str(addon_output_dir)) + addon_output_dir.mkdir(parents=True) + + # Copy server content + src_root = os.path.normpath(str(server_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(server_dir)): + dst_root = addon_output_dir + if root != src_root: + rel_root = root[src_root_offset:] + dst_root = dst_root / rel_root + + dst_root.mkdir(parents=True, exist_ok=True) + for filename in filenames: + src_path = os.path.join(root, filename) + shutil.copy(src_path, str(dst_root)) + + if create_zip: + create_addon_zip( + output_dir, addon_dir.name, addon_version, keep_source + ) + + +def main(create_zip=True, keep_source=False): + current_dir = Path(os.path.dirname(os.path.abspath(__file__))) + root_dir = current_dir.parent + output_dir = current_dir / "packages" + print("Package creation started...") + + # Make sure package dir is empty + if output_dir.exists(): + shutil.rmtree(str(output_dir)) + # Make sure output dir is created + output_dir.mkdir(parents=True) + + for addon_dir in current_dir.iterdir(): + if not addon_dir.is_dir(): + continue + + server_dir = addon_dir / "server" + if not server_dir.exists(): + continue + + if addon_dir.name == "openpype": + create_openpype_package( + addon_dir, output_dir, root_dir, create_zip, keep_source + ) + + else: + create_addon_package( + addon_dir, output_dir, create_zip, keep_source + ) + + print(f"- package '{addon_dir.name}' created") + print(f"Package creation finished. 
Output directory: {output_dir}") + + +if __name__ == "__main__": + create_zip = "--skip-zip" not in sys.argv + keep_sources = "--keep-sources" in sys.argv + main(create_zip, keep_sources) diff --git a/server_addon/deadline/server/__init__.py b/server_addon/deadline/server/__init__.py new file mode 100644 index 0000000000..36d04189a9 --- /dev/null +++ b/server_addon/deadline/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import DeadlineSettings, DEFAULT_VALUES + + +class Deadline(BaseServerAddon): + name = "deadline" + title = "Deadline" + version = __version__ + settings_model: Type[DeadlineSettings] = DeadlineSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/deadline/server/settings/__init__.py b/server_addon/deadline/server/settings/__init__.py new file mode 100644 index 0000000000..0307862afa --- /dev/null +++ b/server_addon/deadline/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + DeadlineSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "DeadlineSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/deadline/server/settings/main.py b/server_addon/deadline/server/settings/main.py new file mode 100644 index 0000000000..f158b7464d --- /dev/null +++ b/server_addon/deadline/server/settings/main.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + +from .publish_plugins import ( + PublishPluginsModel, + DEFAULT_DEADLINE_PLUGINS_SETTINGS +) + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class DeadlineSettings(BaseSettingsModel): + deadline_urls: list[ServerListSubmodel] = Field( + default_factory=list, + title="System Deadline Webservice URLs", + scope=["studio"], + ) + deadline_servers: list[str] = Field( + title="Project deadline servers", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish Plugins", + ) + + @validator("deadline_urls") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "deadline_urls": [ + { + "name": "default", + "value": "http://127.0.0.1:8082" + } + ], + # TODO: this needs to be dynamic from "deadline_urls" + "deadline_servers": [], + "publish": DEFAULT_DEADLINE_PLUGINS_SETTINGS +} diff --git a/server_addon/deadline/server/settings/publish_plugins.py b/server_addon/deadline/server/settings/publish_plugins.py new file mode 100644 index 0000000000..8d48695a9c --- /dev/null +++ b/server_addon/deadline/server/settings/publish_plugins.py @@ -0,0 +1,483 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class CollectDefaultDeadlineServerModel(BaseSettingsModel): + """Settings for event handlers running in ftrack service.""" + + pass_mongo_url: bool = Field(title="Pass Mongo url to job") + + +class CollectDeadlinePoolsModel(BaseSettingsModel): + """Settings Deadline default pools.""" + + primary_pool: str = Field(title="Primary Pool") + + secondary_pool: str = Field(title="Secondary Pool") + + +class ValidateExpectedFilesModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active: bool = Field(True, title="Active") + 
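
The `deadline_urls` validator above, like several validators in this diff, delegates to `ayon_server.settings.ensure_unique_names` to reject duplicate entries. Its implementation is not part of this diff; a plausible equivalent is sketched below, assuming entries are either plain strings or submodels exposing a `name` attribute:

def ensure_unique_names(items):
    """Raise when two entries in a list share the same name.

    Sketch only: the real ayon_server helper may differ in details
    such as the exception type, message, and supported item shapes.
    """
    seen = set()
    for item in items:
        # Entries are either plain strings or submodels with a 'name' field
        name = item if isinstance(item, str) else getattr(item, "name", None)
        if name is None:
            continue
        if name in seen:
            raise ValueError(f"Name '{name}' is not unique")
        seen.add(name)
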
allow_user_override: bool = Field( + True, title="Allow user change frame range" + ) + families: list[str] = Field( + default_factory=list, title="Trigger on families" + ) + targets: list[str] = Field( + default_factory=list, title="Trigger for plugins" + ) + + +def tile_assembler_enum(): + """Return a list of value/label dicts for the enumerator. + + Returning a list of dicts is used to allow for a custom label to be + displayed in the UI. + """ + return [ + { + "value": "DraftTileAssembler", + "label": "Draft Tile Assembler" + }, + { + "value": "OpenPypeTileAssembler", + "label": "Open Image IO" + } + ] + + +class ScenePatchesSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Patch name") + regex: str = Field(title="Patch regex") + line: str = Field(title="Patch line") + + +class MayaSubmitDeadlineModel(BaseSettingsModel): + """Maya deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + import_reference: bool = Field(title="Use Scene with Imported Reference") + asset_dependencies: bool = Field(title="Use Asset dependencies") + priority: int = Field(title="Priority") + tile_priority: int = Field(title="Tile Priority") + group: str = Field(title="Group") + limit: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + tile_assembler_plugin: str = Field( + title="Tile Assembler Plugin", + enum_resolver=tile_assembler_enum, + ) + jobInfo: str = Field( + title="Additional JobInfo data", + widget="textarea", + ) + pluginInfo: str = Field( + title="Additional PluginInfo data", + widget="textarea", + ) + + scene_patches: list[ScenePatchesSubmodel] = Field( + default_factory=list, + title="Scene patches", + ) + strict_error_checking: bool = Field( + title="Disable Strict Error Check profiles" + ) + + @validator("limit", "scene_patches") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class MaxSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Frame per Task") + group: str = Field("", title="Group Name") + + +class EnvSearchReplaceSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class LimitGroupsSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Name") + value: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + + +def fusion_deadline_plugin_enum(): + """Return a list of value/label dicts for the enumerator. + + Returning a list of dicts is used to allow for a custom label to be + displayed in the UI. 
+ """ + return [ + { + "value": "Fusion", + "label": "Fusion" + }, + { + "value": "FusionCmd", + "label": "FusionCmd" + } + ] + + +class FusionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + priority: int = Field(50, title="Priority") + chunk_size: int = Field(10, title="Frame per Task") + concurrent_tasks: int = Field(1, title="Number of concurrent tasks") + group: str = Field("", title="Group Name") + plugin: str = Field("Fusion", + enum_resolver=fusion_deadline_plugin_enum, + title="Deadline Plugin") + + +class NukeSubmitDeadlineModel(BaseSettingsModel): + """Nuke deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + concurrent_tasks: int = Field(title="Number of concurrent tasks") + group: str = Field(title="Group") + department: str = Field(title="Department") + use_gpu: bool = Field(title="Use GPU") + + env_allowed_keys: list[str] = Field( + default_factory=list, + title="Allowed environment keys" + ) + + env_search_replace_values: list[EnvSearchReplaceSubmodel] = Field( + default_factory=list, + title="Search & replace in environment values", + ) + + limit_groups: list[LimitGroupsSubmodel] = Field( + default_factory=list, + title="Limit Groups", + ) + + @validator("limit_groups", "env_allowed_keys", "env_search_replace_values") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class HarmonySubmitDeadlineModel(BaseSettingsModel): + """Harmony deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + + +class AfterEffectsSubmitDeadlineModel(BaseSettingsModel): + """After Effects deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + multiprocess: bool = Field(title="Optional") + + +class CelactionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + deadline_department: str = Field("", title="Deadline apartment") + deadline_priority: int = Field(50, title="Deadline priority") + deadline_pool: str = Field("", title="Deadline pool") + deadline_pool_secondary: str = Field("", title="Deadline pool (secondary)") + deadline_group: str = Field("", title="Deadline Group") + deadline_chunk_size: int = Field(10, title="Deadline Chunk size") + deadline_job_delay: str = Field( + "", title="Delay job (timecode dd:hh:mm:ss)" + ) + + +class BlenderSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Frame per Task") + group: 
str = Field("", title="Group Name") + + +class AOVFilterSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Host") + value: list[str] = Field( + default_factory=list, + title="AOV regex" + ) + + +class ProcessSubmittedJobOnFarmModel(BaseSettingsModel): + """Process submitted job on farm.""" + + enabled: bool = Field(title="Enabled") + deadline_department: str = Field(title="Department") + deadline_pool: str = Field(title="Pool") + deadline_group: str = Field(title="Group") + deadline_chunk_size: int = Field(title="Chunk Size") + deadline_priority: int = Field(title="Priority") + publishing_script: str = Field(title="Publishing script path") + skip_integration_repre_list: list[str] = Field( + default_factory=list, + title="Skip integration of representation with ext" + ) + aov_filter: list[AOVFilterSubmodel] = Field( + default_factory=list, + title="Reviewable products filter", + ) + + @validator("aov_filter", "skip_integration_repre_list") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class PublishPluginsModel(BaseSettingsModel): + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDeadlinePools: CollectDeadlinePoolsModel = Field( + default_factory=CollectDeadlinePoolsModel, + title="Default Pools") + ValidateExpectedFiles: ValidateExpectedFilesModel = Field( + default_factory=ValidateExpectedFilesModel, + title="Validate Expected Files" + ) + MayaSubmitDeadline: MayaSubmitDeadlineModel = Field( + default_factory=MayaSubmitDeadlineModel, + title="Maya Submit to deadline") + MaxSubmitDeadline: MaxSubmitDeadlineModel = Field( + default_factory=MaxSubmitDeadlineModel, + title="Max Submit to deadline") + FusionSubmitDeadline: FusionSubmitDeadlineModel = Field( + default_factory=FusionSubmitDeadlineModel, + title="Fusion submit to Deadline") + NukeSubmitDeadline: NukeSubmitDeadlineModel = Field( + default_factory=NukeSubmitDeadlineModel, + title="Nuke Submit to deadline") + HarmonySubmitDeadline: HarmonySubmitDeadlineModel = Field( + default_factory=HarmonySubmitDeadlineModel, + title="Harmony Submit to deadline") + AfterEffectsSubmitDeadline: AfterEffectsSubmitDeadlineModel = Field( + default_factory=AfterEffectsSubmitDeadlineModel, + title="After Effects to deadline") + CelactionSubmitDeadline: CelactionSubmitDeadlineModel = Field( + default_factory=CelactionSubmitDeadlineModel, + title="Celaction Submit Deadline") + BlenderSubmitDeadline: BlenderSubmitDeadlineModel = Field( + default_factory=BlenderSubmitDeadlineModel, + title="Blender Submit Deadline") + ProcessSubmittedJobOnFarm: ProcessSubmittedJobOnFarmModel = Field( + default_factory=ProcessSubmittedJobOnFarmModel, + title="Process submitted job on farm.") + + +DEFAULT_DEADLINE_PLUGINS_SETTINGS = { + "CollectDefaultDeadlineServer": { + "pass_mongo_url": True + }, + "CollectDeadlinePools": { + "primary_pool": "", + "secondary_pool": "" + }, + "ValidateExpectedFiles": { + "enabled": True, + "active": True, + "allow_user_override": True, + "families": [ + "render" + ], + "targets": [ + "deadline" + ] + }, + "MayaSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "tile_assembler_plugin": "DraftTileAssembler", + "use_published": True, + "import_reference": 
False, + "asset_dependencies": True, + "strict_error_checking": True, + "priority": 50, + "tile_priority": 50, + "group": "none", + "limit": [], + # this used to be empty dict + "jobInfo": "", + # this used to be empty dict + "pluginInfo": "", + "scene_patches": [] + }, + "MaxSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, + "FusionSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "" + }, + "NukeSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "", + "department": "", + "use_gpu": True, + "env_allowed_keys": [], + "env_search_replace_values": [], + "limit_groups": [] + }, + "HarmonySubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "" + }, + "AfterEffectsSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "", + "multiprocess": True + }, + "CelactionSubmitDeadline": { + "enabled": True, + "deadline_department": "", + "deadline_priority": 50, + "deadline_pool": "", + "deadline_pool_secondary": "", + "deadline_group": "", + "deadline_chunk_size": 10, + "deadline_job_delay": "00:00:00:00" + }, + "BlenderSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, + "ProcessSubmittedJobOnFarm": { + "enabled": True, + "deadline_department": "", + "deadline_pool": "", + "deadline_group": "", + "deadline_chunk_size": 1, + "deadline_priority": 50, + "publishing_script": "", + "skip_integration_repre_list": [], + "aov_filter": [ + { + "name": "maya", + "value": [ + ".*([Bb]eauty).*" + ] + }, + { + "name": "blender", + "value": [ + ".*([Bb]eauty).*" + ] + }, + { + "name": "aftereffects", + "value": [ + ".*" + ] + }, + { + "name": "celaction", + "value": [ + ".*" + ] + }, + { + "name": "harmony", + "value": [ + ".*" + ] + }, + { + "name": "max", + "value": [ + ".*" + ] + }, + { + "name": "fusion", + "value": [ + ".*" + ] + } + ] + } +} diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py new file mode 100644 index 0000000000..b3f4756216 --- /dev/null +++ b/server_addon/deadline/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.2" diff --git a/server_addon/flame/server/__init__.py b/server_addon/flame/server/__init__.py new file mode 100644 index 0000000000..7d5eb3960f --- /dev/null +++ b/server_addon/flame/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FlameSettings, DEFAULT_VALUES + + +class FlameAddon(BaseServerAddon): + name = "flame" + title = "Flame" + version = __version__ + settings_model: Type[FlameSettings] = FlameSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/flame/server/settings/__init__.py b/server_addon/flame/server/settings/__init__.py new file mode 100644 index 0000000000..39b8220d40 --- /dev/null +++ 
b/server_addon/flame/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    FlameSettings,
+    DEFAULT_VALUES,
+)
+
+
+__all__ = (
+    "FlameSettings",
+    "DEFAULT_VALUES",
+)
diff --git a/server_addon/flame/server/settings/create_plugins.py b/server_addon/flame/server/settings/create_plugins.py
new file mode 100644
index 0000000000..374a7368d2
--- /dev/null
+++ b/server_addon/flame/server/settings/create_plugins.py
@@ -0,0 +1,120 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateShotClipModel(BaseSettingsModel):
+    hierarchy: str = Field(
+        "shot",
+        title="Shot parent hierarchy",
+        section="Shot Hierarchy And Rename Settings"
+    )
+    useShotName: bool = Field(
+        True,
+        title="Use Shot Name",
+    )
+    clipRename: bool = Field(
+        False,
+        title="Rename clips",
+    )
+    clipName: str = Field(
+        "{sequence}{shot}",
+        title="Clip name template"
+    )
+    segmentIndex: bool = Field(
+        True,
+        title="Accept segment order"
+    )
+    countFrom: int = Field(
+        10,
+        title="Count sequence from"
+    )
+    countSteps: int = Field(
+        10,
+        title="Stepping number"
+    )
+
+    folder: str = Field(
+        "shots",
+        title="{folder}",
+        section="Shot Template Keywords"
+    )
+    episode: str = Field(
+        "ep01",
+        title="{episode}"
+    )
+    sequence: str = Field(
+        "a",
+        title="{sequence}"
+    )
+    track: str = Field(
+        "{_track_}",
+        title="{track}"
+    )
+    shot: str = Field(
+        "####",
+        title="{shot}"
+    )
+
+    vSyncOn: bool = Field(
+        False,
+        title="Enable Vertical Sync",
+        section="Vertical Synchronization Of Attributes"
+    )
+
+    workfileFrameStart: int = Field(
+        1001,
+        title="Workfiles Start Frame",
+        section="Shot Attributes"
+    )
+    handleStart: int = Field(
+        10,
+        title="Handle start (head)"
+    )
+    handleEnd: int = Field(
+        10,
+        title="Handle end (tail)"
+    )
+    includeHandles: bool = Field(
+        False,
+        title="Enable handles including"
+    )
+    retimedHandles: bool = Field(
+        True,
+        title="Enable retimed handles"
+    )
+    retimedFramerange: bool = Field(
+        True,
+        title="Enable retimed shot frameranges"
+    )
+
+
+class CreatePluginsModel(BaseSettingsModel):
+    CreateShotClip: CreateShotClipModel = Field(
+        default_factory=CreateShotClipModel,
+        title="Create Shot Clip"
+    )
+
+
+DEFAULT_CREATE_SETTINGS = {
+    "CreateShotClip": {
+        "hierarchy": "{folder}/{sequence}",
+        "useShotName": True,
+        "clipRename": False,
+        "clipName": "{sequence}{shot}",
+        "segmentIndex": True,
+        "countFrom": 10,
+        "countSteps": 10,
+        "folder": "shots",
+        "episode": "ep01",
+        "sequence": "a",
+        "track": "{_track_}",
+        "shot": "####",
+        "vSyncOn": False,
+        "workfileFrameStart": 1001,
+        "handleStart": 5,
+        "handleEnd": 5,
+        "includeHandles": False,
+        "retimedHandles": True,
+        "retimedFramerange": True
+    }
+}
diff --git a/server_addon/flame/server/settings/imageio.py b/server_addon/flame/server/settings/imageio.py
new file mode 100644
index 0000000000..ef1e4721d1
--- /dev/null
+++ b/server_addon/flame/server/settings/imageio.py
@@ -0,0 +1,130 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel, ensure_unique_names
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ImageIORemappingRulesModel(BaseSettingsModel):
+    host_native_name: str = Field(
+        title="Application native colorspace name"
+    )
+    ocio_name: str = Field(title="OCIO colorspace name")
+
+
+class ImageIORemappingModel(BaseSettingsModel):
+    rules: list[ImageIORemappingRulesModel] = Field(
+        default_factory=list
+    )
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ProfileNamesMappingInputsModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    flameName: str = Field("", title="Flame name")
+    ocioName: str = Field("", title="OCIO name")
+
+
+class ProfileNamesMappingModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    inputs: list[ProfileNamesMappingInputsModel] = Field(
+        default_factory=list,
+        title="Profile names mapping"
+    )
+
+
+class ImageIOProjectModel(BaseSettingsModel):
+    colourPolicy: str = Field(
+        "ACES 1.1",
+        title="Colour Policy (name or path)",
+        section="Project"
+    )
+    frameDepth: str = Field(
+        "16-bit fp",
+        title="Image Depth"
+    )
+    fieldDominance: str = Field(
+        "PROGRESSIVE",
+        title="Field Dominance"
+    )
+
+
+class FlameImageIOModel(BaseSettingsModel):
+    _isGroup = True
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    remapping: ImageIORemappingModel = Field(
+        title="Remapping colorspace names",
+        default_factory=ImageIORemappingModel
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
+    # NOTE 'project' attribute was expanded to this model but that caused
+    # inconsistency with v3 settings and harder conversion handling
+    # - it can be moved back but keep in mind that it must be handled in v3
+    # conversion script too
+    project: ImageIOProjectModel = Field(
+        default_factory=ImageIOProjectModel,
+        title="Project"
+    )
+    profilesMapping: ProfileNamesMappingModel = Field(
+        default_factory=ProfileNamesMappingModel,
+        title="Profile names mapping"
+    )
+
+
+DEFAULT_IMAGEIO_SETTINGS = {
+    "project": {
+        "colourPolicy": "ACES 1.1",
+        "frameDepth": "16-bit fp",
+        "fieldDominance": "PROGRESSIVE"
+    },
+    "profilesMapping": {
+        "inputs": [
+            {
+                "flameName": "ACEScg",
+                "ocioName": "ACES - ACEScg"
+            },
+            {
+                "flameName": "Rec.709 video",
+                "ocioName": "Output - Rec.709"
+            }
+        ]
+    }
+}
diff --git a/server_addon/flame/server/settings/loader_plugins.py b/server_addon/flame/server/settings/loader_plugins.py
new file mode 100644
index 0000000000..6c27b926c2
--- /dev/null
+++ b/server_addon/flame/server/settings/loader_plugins.py
@@ -0,0 +1,99 @@
+from ayon_server.settings import Field, BaseSettingsModel
+
+
+class LoadClipModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_group_name: str = Field(
+        "OpenPype_Reels",
+        title="Reel group name"
+    )
+    reel_name: str = Field(
+        "Loaded",
+        title="Reel name"
+    )
+
+    clip_name_template: str = Field(
+        "{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoadClipBatchModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_name: str = Field(
+        "OP_LoadedReel",
+        title="Reel name"
+    )
+    clip_name_template: str = Field(
+        "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoaderPluginsModel(BaseSettingsModel):
+    LoadClip: LoadClipModel = Field(
+        default_factory=LoadClipModel,
+        title="Load Clip"
+    )
+    LoadClipBatch: LoadClipBatchModel = Field(
+        default_factory=LoadClipBatchModel,
+        title="Load as clip to current batch"
+    )
+
+
+DEFAULT_LOADER_SETTINGS = {
+    "LoadClip": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_group_name": "OpenPype_Reels",
+        "reel_name": "Loaded",
+        "clip_name_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    },
+    "LoadClipBatch": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_name": "OP_LoadedReel",
+        "clip_name_template": "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    }
+}
diff --git a/server_addon/flame/server/settings/main.py b/server_addon/flame/server/settings/main.py
new file mode 100644
index 0000000000..f28de6641b
--- /dev/null
+++ b/server_addon/flame/server/settings/main.py
@@ -0,0 +1,33 @@
+from ayon_server.settings import Field, BaseSettingsModel
+
+from .imageio import FlameImageIOModel, DEFAULT_IMAGEIO_SETTINGS
+from .create_plugins import CreatePluginsModel, DEFAULT_CREATE_SETTINGS
+from .publish_plugins import PublishPluginsModel, DEFAULT_PUBLISH_SETTINGS
+from .loader_plugins import LoaderPluginsModel, DEFAULT_LOADER_SETTINGS
+
+
+class FlameSettings(BaseSettingsModel):
+    imageio: FlameImageIOModel = Field(
+        default_factory=FlameImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    create: CreatePluginsModel = Field(
+        default_factory=CreatePluginsModel,
+        title="Create plugins"
+    )
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish plugins"
+    )
+    load: LoaderPluginsModel = Field(
+        default_factory=LoaderPluginsModel,
+        title="Loader plugins"
+    )
+
+
+DEFAULT_VALUES = {
+    "imageio": DEFAULT_IMAGEIO_SETTINGS,
+    "create": DEFAULT_CREATE_SETTINGS,
+    "publish": DEFAULT_PUBLISH_SETTINGS,
+    "load": DEFAULT_LOADER_SETTINGS
+}
diff --git a/server_addon/flame/server/settings/publish_plugins.py b/server_addon/flame/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..ea7f109f73
--- /dev/null
+++ b/server_addon/flame/server/settings/publish_plugins.py
@@ -0,0 +1,190 @@
+from ayon_server.settings import Field, BaseSettingsModel, task_types_enum
+
+
+class XMLPresetAttrsFromCommentsModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Attribute name")
+    type: str = Field(
+        default_factory=str,
+        title="Attribute type",
+        enum_resolver=lambda: ["number", "float", "string"]
+    )
+
+
+class AddTasksModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Task name")
+    type: str = Field(
+        default_factory=str,
+        title="Task type",
+        enum_resolver=task_types_enum
+    )
+    create_batch_group: bool = Field(
+        True,
+        title="Create batch group"
+    )
+
+
+class CollectTimelineInstancesModel(BaseSettingsModel):
+    _isGroup = True
+
+    xml_preset_attrs_from_comments: list[XMLPresetAttrsFromCommentsModel] = Field(
+        default_factory=list,
+        title="XML presets attributes parsable from segment comments"
+    )
+    add_tasks: list[AddTasksModel] = Field(
+        default_factory=list,
+        title="Add tasks"
+    )
+
+
+class ExportPresetsMappingModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    name: str = Field(
+        ...,
+        title="Name"
+    )
+    active: bool = Field(True, title="Is active")
+    export_type: str = Field(
+        "File Sequence",
+        title="Export clip type",
+        enum_resolver=lambda: ["Movie", "File Sequence", "Sequence Publish"]
+    )
+    ext: str = Field("exr", title="Output extension")
+    xml_preset_file: str = Field(
+        "OpenEXR (16-bit fp DWAA).xml",
+        title="XML preset file (with ext)"
+    )
+    colorspace_out: str = Field(
+        "ACES - ACEScg",
+        title="Output color (imageio)"
+    )
+    # TODO remove when resolved or v3 is not a thing anymore
+    # NOTE next 4 attributes were grouped under 'other_parameters' but that
+    # created inconsistency with v3 settings and harder conversion handling
+    # - it can be moved back but keep in mind that it must be handled in v3
+    # conversion script too
+    xml_preset_dir: str = Field(
+        "",
+        title="XML preset directory"
+    )
+    parsed_comment_attrs: bool = Field(
+        True,
+        title="Parsed comment attributes"
+    )
+    representation_add_range: bool = Field(
+        True,
+        title="Add range to representation name"
+    )
+    representation_tags: list[str] = Field(
+        default_factory=list,
+        title="Representation tags"
+    )
+    load_to_batch_group: bool = Field(
+        True,
+        title="Load to batch group reel"
+    )
+    batch_group_loader_name: str = Field(
+        "LoadClipBatch",
+        title="Use loader name"
+    )
+    filter_path_regex: str = Field(
+        ".*",
+        title="Regex in clip path"
+    )
+
+
+class ExtractProductResourcesModel(BaseSettingsModel):
+    _isGroup = True
+
+    keep_original_representation: bool = Field(
+        False,
+        title="Publish clip's original media"
+    )
+    export_presets_mapping: list[ExportPresetsMappingModel] = Field(
+        default_factory=list,
+        title="Export presets mapping"
+    )
+
+
+class IntegrateBatchGroupModel(BaseSettingsModel):
+    enabled: bool = Field(
+        False,
+        title="Enabled"
+    )
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectTimelineInstances: CollectTimelineInstancesModel = Field(
+        default_factory=CollectTimelineInstancesModel,
+        title="Collect Timeline Instances"
+    )
+
+    ExtractProductResources: ExtractProductResourcesModel = Field(
+        default_factory=ExtractProductResourcesModel,
+        title="Extract Product Resources"
+    )
+
+    IntegrateBatchGroup: IntegrateBatchGroupModel = Field(
+        default_factory=IntegrateBatchGroupModel,
+        title="Integrate Batch Group"
+    )
+
+
+DEFAULT_PUBLISH_SETTINGS = {
+    "CollectTimelineInstances": {
+        "xml_preset_attrs_from_comments": [
+            {
+                "name": "width",
+                "type": "number"
+            },
+            {
+                "name": "height",
+                "type": "number"
+            },
+            {
+                "name": "pixelRatio",
+                "type": "float"
+            },
+            {
+                "name": "resizeType",
+                "type": "string"
+            },
+            {
+                "name": "resizeFilter",
+                "type": "string"
+            }
+        ],
+        "add_tasks": [
+            {
+                "name": "compositing",
+                "type": "Compositing",
+                "create_batch_group": True
+            }
+        ]
+    },
+    "ExtractProductResources": {
+        "keep_original_representation": False,
+        "export_presets_mapping": [
+            {
+                "name": "exr16fpdwaa",
+                "active": True,
+                "export_type": "File Sequence",
+ "ext": "exr", + "xml_preset_file": "OpenEXR (16-bit fp DWAA).xml", + "colorspace_out": "ACES - ACEScg", + "xml_preset_dir": "", + "parsed_comment_attrs": True, + "representation_add_range": True, + "representation_tags": [], + "load_to_batch_group": True, + "batch_group_loader_name": "LoadClipBatch", + "filter_path_regex": ".*" + } + ] + }, + "IntegrateBatchGroup": { + "enabled": False + } +} diff --git a/server_addon/flame/server/version.py b/server_addon/flame/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/flame/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/fusion/server/__init__.py b/server_addon/fusion/server/__init__.py new file mode 100644 index 0000000000..4d43f28812 --- /dev/null +++ b/server_addon/fusion/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FusionSettings, DEFAULT_VALUES + + +class FusionAddon(BaseServerAddon): + name = "fusion" + title = "Fusion" + version = __version__ + settings_model: Type[FusionSettings] = FusionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/fusion/server/imageio.py b/server_addon/fusion/server/imageio.py new file mode 100644 index 0000000000..fe867af424 --- /dev/null +++ b/server_addon/fusion/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class FusionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/fusion/server/settings.py b/server_addon/fusion/server/settings.py new file mode 100644 index 0000000000..92fb362c66 --- /dev/null +++ b/server_addon/fusion/server/settings.py @@ -0,0 +1,95 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, +) + +from .imageio import FusionImageIOModel + + +class CopyFusionSettingsModel(BaseSettingsModel): + copy_path: str = Field("", title="Local Fusion profile directory") + copy_status: bool = Field(title="Copy profile on first launch") + force_sync: bool = Field(title="Resync profile on each launch") + + +def _create_saver_instance_attributes_enum(): + return [ + { + "value": "reviewable", + "label": 
"Reviewable" + }, + { + "value": "farm_rendering", + "label": "Farm rendering" + } + ] + + +class CreateSaverPluginModel(BaseSettingsModel): + _isGroup = True + temp_rendering_path_template: str = Field( + "", title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default variants" + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=_create_saver_instance_attributes_enum, + title="Instance attributes" + ) + + +class CreatPluginsModel(BaseSettingsModel): + CreateSaver: CreateSaverPluginModel = Field( + default_factory=CreateSaverPluginModel, + title="Create Saver" + ) + + +class FusionSettings(BaseSettingsModel): + imageio: FusionImageIOModel = Field( + default_factory=FusionImageIOModel, + title="Color Management (ImageIO)" + ) + copy_fusion_settings: CopyFusionSettingsModel = Field( + default_factory=CopyFusionSettingsModel, + title="Local Fusion profile settings" + ) + create: CreatPluginsModel = Field( + default_factory=CreatPluginsModel, + title="Creator plugins" + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "copy_fusion_settings": { + "copy_path": "~/.openpype/hosts/fusion/profiles", + "copy_status": False, + "force_sync": False + }, + "create": { + "CreateSaver": { + "temp_rendering_path_template": "{workdir}/renders/fusion/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ] + } + } +} diff --git a/server_addon/fusion/server/version.py b/server_addon/fusion/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/fusion/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/harmony/LICENSE b/server_addon/harmony/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/harmony/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/harmony/README.md b/server_addon/harmony/README.md new file mode 100644 index 0000000000..d971fa39f9 --- /dev/null +++ b/server_addon/harmony/README.md @@ -0,0 +1,4 @@ +ToonBoom Harmony Addon +=============== + +Integration with ToonBoom Harmony. 
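Note on the recurring `ImageIOFileRuleModel` / `ImageIOFileRulesModel` pair: the same rule shape (rule name, regex pattern, colorspace, file extension) is declared for Flame and Fusion above and again for Harmony, Hiero, and Houdini below. As a minimal sketch of how a host integration could evaluate such rules against a file path — the `resolve_colorspace` helper and its exact matching semantics are illustrative assumptions, not the actual OpenPype/AYON implementation:

```python
import re


def resolve_colorspace(filepath, rules):
    """Return the colorspace of the first matching file rule, else None.

    ``rules`` mirrors the ImageIOFileRuleModel fields used by these
    settings: dicts with "name", "pattern", "colorspace" and "ext" keys.
    """
    for rule in rules:
        # Tolerate a leading dot in the configured extension.
        ext = rule["ext"].lstrip(".").lower()
        if not filepath.lower().endswith("." + ext):
            continue
        # The regex pattern may match anywhere in the path.
        if re.search(rule["pattern"], filepath):
            return rule["colorspace"]
    return None


rules = [{
    "name": "beauty_exr",
    "pattern": r"beauty",
    "colorspace": "ACES - ACEScg",
    "ext": "exr",
}]
print(resolve_colorspace("/work/sh010/render_beauty.0001.exr", rules))
# -> ACES - ACEScg
```

The `ensure_unique_names` validator on these models only guarantees distinct rule names, so under a first-match scheme like this sketch, the order of the configured rules still matters.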
diff --git a/server_addon/harmony/server/__init__.py b/server_addon/harmony/server/__init__.py new file mode 100644 index 0000000000..4ecda1989e --- /dev/null +++ b/server_addon/harmony/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import HarmonySettings, DEFAULT_HARMONY_SETTING +from .version import __version__ + + +class Harmony(BaseServerAddon): + name = "harmony" + title = "Harmony" + version = __version__ + + settings_model = HarmonySettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_HARMONY_SETTING) diff --git a/server_addon/harmony/server/settings/__init__.py b/server_addon/harmony/server/settings/__init__.py new file mode 100644 index 0000000000..4a8118d4da --- /dev/null +++ b/server_addon/harmony/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HarmonySettings, + DEFAULT_HARMONY_SETTING, +) + + +__all__ = ( + "HarmonySettings", + "DEFAULT_HARMONY_SETTING", +) diff --git a/server_addon/harmony/server/settings/imageio.py b/server_addon/harmony/server/settings/imageio.py new file mode 100644 index 0000000000..4e01fae3d4 --- /dev/null +++ b/server_addon/harmony/server/settings/imageio.py @@ -0,0 +1,55 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class HarmonyImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/harmony/server/settings/main.py b/server_addon/harmony/server/settings/main.py new file mode 100644 index 0000000000..0936bc1fc7 --- /dev/null +++ b/server_addon/harmony/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import HarmonyImageIOModel +from .publish_plugins import HarmonyPublishPlugins + + +class HarmonySettings(BaseSettingsModel): + """Harmony Project Settings.""" + + imageio: HarmonyImageIOModel = Field( + default_factory=HarmonyImageIOModel, + title="OCIO config" + ) + publish: HarmonyPublishPlugins = Field( + default_factory=HarmonyPublishPlugins, + title="Publish plugins" + ) + + +DEFAULT_HARMONY_SETTING = { + "load": { + "ImageSequenceLoader": { + 
"family": [ + "shot", + "render", + "image", + "plate", + "reference" + ], + "representations": [ + "jpeg", + "png", + "jpg" + ] + } + }, + "publish": { + "CollectPalettes": { + "allowed_tasks": [ + ".*" + ] + }, + "ValidateAudio": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "frame_check_filter": [], + "skip_resolution_check": [], + "skip_timelines_check": [] + } + } +} diff --git a/server_addon/harmony/server/settings/publish_plugins.py b/server_addon/harmony/server/settings/publish_plugins.py new file mode 100644 index 0000000000..bdaec2bbd4 --- /dev/null +++ b/server_addon/harmony/server/settings/publish_plugins.py @@ -0,0 +1,76 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectPalettesPlugin(BaseSettingsModel): + """Set regular expressions to filter triggering on specific task names. '.*' means on all.""" # noqa + + allowed_tasks: list[str] = Field( + default_factory=list, + title="Allowed tasks" + ) + + +class ValidateAudioPlugin(BaseSettingsModel): + """Check if scene contains audio track.""" # + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check if loaded container is scene are latest versions.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateSceneSettingsPlugin(BaseSettingsModel): + """Validate if FrameStart, FrameEnd and Resolution match shot data in DB. 
+ Use regular expressions to limit validations only on particular asset + or task names.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + frame_check_filter: list[str] = Field( + default_factory=list, + title="Skip Frame check for Assets with name containing" + ) + + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks" + ) + + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks" + ) + + +class HarmonyPublishPlugins(BaseSettingsModel): + + CollectPalettes: CollectPalettesPlugin = Field( + title="Collect Palettes", + default_factory=CollectPalettesPlugin, + ) + + ValidateAudio: ValidateAudioPlugin = Field( + title="Validate Audio", + default_factory=ValidateAudioPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateSceneSettings: ValidateSceneSettingsPlugin = Field( + title="Validate Scene Settings", + default_factory=ValidateSceneSettingsPlugin, + ) diff --git a/server_addon/harmony/server/version.py b/server_addon/harmony/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/harmony/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/hiero/server/__init__.py b/server_addon/hiero/server/__init__.py new file mode 100644 index 0000000000..d0f9bcefc3 --- /dev/null +++ b/server_addon/hiero/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HieroSettings, DEFAULT_VALUES + + +class HieroAddon(BaseServerAddon): + name = "hiero" + title = "Hiero" + version = __version__ + settings_model: Type[HieroSettings] = HieroSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/hiero/server/settings/__init__.py b/server_addon/hiero/server/settings/__init__.py new file mode 100644 index 0000000000..246c8203e9 --- /dev/null +++ b/server_addon/hiero/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HieroSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HieroSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/hiero/server/settings/common.py b/server_addon/hiero/server/settings/common.py new file mode 100644 index 0000000000..eb4791f93e --- /dev/null +++ b/server_addon/hiero/server/settings/common.py @@ -0,0 +1,98 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = 
"compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"} +] + + +class KnobModel(BaseSettingsModel): + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Value" + ) diff --git a/server_addon/hiero/server/settings/create_plugins.py b/server_addon/hiero/server/settings/create_plugins.py new file mode 100644 index 0000000000..daec4a7cea --- /dev/null +++ b/server_addon/hiero/server/settings/create_plugins.py @@ -0,0 +1,97 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + 
"workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/hiero/server/settings/filters.py b/server_addon/hiero/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/hiero/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/hiero/server/settings/imageio.py b/server_addon/hiero/server/settings/imageio.py new file mode 100644 index 0000000000..f2c2728057 --- /dev/null +++ b/server_addon/hiero/server/settings/imageio.py @@ -0,0 +1,169 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Hiero workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. 
+ Hiero is excpecting camel case key names, + but for better code consistency we are using snake_case: + + ocio_config = ocioConfigName + working_space_name = workingSpace + int_16_name = sixteenBitLut + int_8_name = eightBitLut + float_name = floatLut + log_name = logLut + viewer_name = viewerLut + thumbnail_name = thumbnailLut + """ + + ocioConfigName: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + workingSpace: str = Field( + title="Working Space" + ) + viewerLut: str = Field( + title="Viewer" + ) + eightBitLut: str = Field( + title="8-bit files" + ) + sixteenBitLut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + thumbnailLut: str = Field( + title="Thumnails" + ) + monitorOutLut: str = Field( + title="Monitor" + ) + + +class ClipColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ClipColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + """Hiero color management project settings. """ + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part. 
It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to clips via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "workfile": { + "ocioConfigName": "nuke-default", + "workingSpace": "linear", + "viewerLut": "sRGB", + "eightBitLut": "sRGB", + "sixteenBitLut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear", + "thumbnailLut": "sRGB", + "monitorOutLut": "sRGB" + }, + "regexInputs": { + "inputs": [ + { + "regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)", + "colorspace": "sRGB" + } + ] + } +} diff --git a/server_addon/hiero/server/settings/loader_plugins.py b/server_addon/hiero/server/settings/loader_plugins.py new file mode 100644 index 0000000000..83b3564c2a --- /dev/null +++ b/server_addon/hiero/server/settings/loader_plugins.py @@ -0,0 +1,38 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + clip_name_template: str = Field( + title="Clip name template" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadClip": { + "enabled": True, + "product_types": [ + "render2d", + "source", + "plate", + "render", + "review" + ], + "clip_name_template": "{folder[name]}_{product[name]}_{representation}" + } +} diff --git a/server_addon/hiero/server/settings/main.py b/server_addon/hiero/server/settings/main.py new file mode 100644 index 0000000000..47f8110c22 --- /dev/null +++ b/server_addon/hiero/server/settings/main.py @@ -0,0 +1,64 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .filters import PublishGUIFilterItemModel + + +class HieroSettings(BaseSettingsModel): + """Nuke addon settings.""" + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader plugins" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + +DEFAULT_VALUES = { + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "create": DEFAULT_CREATE_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "filters": [], +} diff --git a/server_addon/hiero/server/settings/publish_plugins.py b/server_addon/hiero/server/settings/publish_plugins.py new file mode 100644 index 0000000000..a85e62724b --- /dev/null +++ 
b/server_addon/hiero/server/settings/publish_plugins.py @@ -0,0 +1,48 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CollectInstanceVersionModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + + +class ExtractReviewCutUpVideoModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + tags_addition: list[str] = Field( + default_factory=list, + title="Additional tags" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceVersion: CollectInstanceVersionModel = Field( + default_factory=CollectInstanceVersionModel, + title="Collect Instance Version" + ) + """# TODO: enhance settings with host api: + Rename class name and plugin name + to match title (it makes more sense) + """ + ExtractReviewCutUpVideo: ExtractReviewCutUpVideoModel = Field( + default_factory=ExtractReviewCutUpVideoModel, + title="Exctract Review Trim" + ) + + +DEFAULT_PUBLISH_PLUGIN_SETTINGS = { + "CollectInstanceVersion": { + "enabled": False, + }, + "ExtractReviewCutUpVideo": { + "enabled": True, + "tags_addition": [ + "review" + ] + } +} diff --git a/server_addon/hiero/server/settings/scriptsmenu.py b/server_addon/hiero/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..51cb088298 --- /dev/null +++ b/server_addon/hiero/server/settings/scriptsmenu.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Nuke script menu project settings.""" + _isGroup = True + + """# TODO: enhance settings with host api: + - in api rename key `name` to `menu_name` + """ + name: str = Field(title="Menu name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptmenu Items Definition") + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_hiero')", + "tooltip": "Open the OpenPype Hiero user doc page" + } + ] +} diff --git a/server_addon/hiero/server/version.py b/server_addon/hiero/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/hiero/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/houdini/server/__init__.py b/server_addon/houdini/server/__init__.py new file mode 100644 index 0000000000..870ec2d0b7 --- /dev/null +++ b/server_addon/houdini/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HoudiniSettings, DEFAULT_VALUES + + +class Houdini(BaseServerAddon): + name = "houdini" + title = "Houdini" + version = __version__ + settings_model: Type[HoudiniSettings] = HoudiniSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/houdini/server/settings/__init__.py b/server_addon/houdini/server/settings/__init__.py new file mode 100644 index 0000000000..9fd2678925 
--- /dev/null +++ b/server_addon/houdini/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HoudiniSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HoudiniSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/houdini/server/settings/general.py b/server_addon/houdini/server/settings/general.py new file mode 100644 index 0000000000..21cc4c452c --- /dev/null +++ b/server_addon/houdini/server/settings/general.py @@ -0,0 +1,45 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class HoudiniVarModel(BaseSettingsModel): + _layout = "expanded" + var: str = Field("", title="Var") + value: str = Field("", title="Value") + is_directory: bool = Field(False, title="Treat as directory") + + +class UpdateHoudiniVarcontextModel(BaseSettingsModel): + """Sync vars with context changes. + + If a value is treated as a directory on update + it will be ensured the folder exists. + """ + + enabled: bool = Field(title="Enabled") + # TODO this was dynamic dictionary '{var: path}' + houdini_vars: list[HoudiniVarModel] = Field( + default_factory=list, + title="Houdini Vars" + ) + + +class GeneralSettingsModel(BaseSettingsModel): + update_houdini_var_context: UpdateHoudiniVarcontextModel = Field( + default_factory=UpdateHoudiniVarcontextModel, + title="Update Houdini Vars on context change" + ) + + +DEFAULT_GENERAL_SETTINGS = { + "update_houdini_var_context": { + "enabled": True, + "houdini_vars": [ + { + "var": "JOB", + "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}", # noqa + "is_directory": True + } + ] + } +} diff --git a/server_addon/houdini/server/settings/imageio.py b/server_addon/houdini/server/settings/imageio.py new file mode 100644 index 0000000000..88aa40ecd6 --- /dev/null +++ b/server_addon/houdini/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class HoudiniImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/houdini/server/settings/main.py b/server_addon/houdini/server/settings/main.py new file mode 100644 index 0000000000..0c2e160c87 --- /dev/null +++ b/server_addon/houdini/server/settings/main.py @@ -0,0 +1,87 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) +from .general import ( + 
+    GeneralSettingsModel,
+    DEFAULT_GENERAL_SETTINGS
+)
+from .imageio import HoudiniImageIOModel
+from .publish_plugins import (
+    PublishPluginsModel,
+    CreatePluginsModel,
+    DEFAULT_HOUDINI_PUBLISH_SETTINGS,
+    DEFAULT_HOUDINI_CREATE_SETTINGS
+)
+
+
+class ShelfToolsModel(BaseSettingsModel):
+    name: str = Field(title="Name")
+    help: str = Field(title="Help text")
+    script: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Script Path"
+    )
+    icon: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Icon Path"
+    )
+
+
+class ShelfDefinitionModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_name: str = Field(title="Shelf name")
+    tools_list: list[ShelfToolsModel] = Field(
+        default_factory=list,
+        title="Shelf Tools"
+    )
+
+
+class ShelvesModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_set_name: str = Field(title="Shelf set name")
+
+    shelf_set_source_path: MultiplatformPathListModel = Field(
+        default_factory=MultiplatformPathListModel,
+        title="Shelf Set Path (optional)"
+    )
+
+    shelf_definition: list[ShelfDefinitionModel] = Field(
+        default_factory=list,
+        title="Shelf Definitions"
+    )
+
+
+class HoudiniSettings(BaseSettingsModel):
+    general: GeneralSettingsModel = Field(
+        default_factory=GeneralSettingsModel,
+        title="General"
+    )
+    imageio: HoudiniImageIOModel = Field(
+        default_factory=HoudiniImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    shelves: list[ShelvesModel] = Field(
+        default_factory=list,
+        title="Houdini Scripts Shelves",
+    )
+
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish Plugins",
+    )
+
+    create: CreatePluginsModel = Field(
+        default_factory=CreatePluginsModel,
+        title="Creator Plugins",
+    )
+
+
+DEFAULT_VALUES = {
+    "general": DEFAULT_GENERAL_SETTINGS,
+    "shelves": [],
+    "create": DEFAULT_HOUDINI_CREATE_SETTINGS,
+    "publish": DEFAULT_HOUDINI_PUBLISH_SETTINGS
+}
diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..58240b0205
--- /dev/null
+++ b/server_addon/houdini/server/settings/publish_plugins.py
@@ -0,0 +1,219 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+# Creator Plugins
+class CreatorModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        title="Default Products",
+        default_factory=list,
+    )
+
+
+class CreateArnoldAssModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        title="Default Products",
+        default_factory=list,
+    )
+    ext: str = Field(title="Extension")
+
+
+class CreateStaticMeshModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+    static_mesh_prefixes: str = Field("S", title="Static Mesh Prefix")
+    collision_prefixes: list[str] = Field(
+        default_factory=list,
+        title="Collision Prefixes"
+    )
+
+
+class CreatePluginsModel(BaseSettingsModel):
+    CreateArnoldAss: CreateArnoldAssModel = Field(
+        default_factory=CreateArnoldAssModel,
+        title="Create Arnold Ass")
+    # "-" is not compatible in the new model
+    CreateStaticMesh: CreateStaticMeshModel = Field(
+        default_factory=CreateStaticMeshModel,
+        title="Create Static Mesh"
+    )
+    CreateAlembicCamera: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Alembic Camera")
+    CreateCompositeSequence: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Composite Sequence")
+    CreatePointCache: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Point Cache")
+    CreateRedshiftROP: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create RedshiftROP")
+    CreateRemotePublish: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Remote Publish")
+    CreateVDBCache: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create VDB Cache")
+    CreateUSD: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD")
+    CreateUSDModel: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD model")
+    USDCreateShadingWorkspace: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD shading workspace")
+    CreateUSDRender: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD render")
+
+
+DEFAULT_HOUDINI_CREATE_SETTINGS = {
+    "CreateArnoldAss": {
+        "enabled": True,
+        "default_variants": ["Main"],
+        "ext": ".ass"
+    },
+    "CreateStaticMesh": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ],
+        "static_mesh_prefixes": "S",
+        "collision_prefixes": [
+            "UBX",
+            "UCP",
+            "USP",
+            "UCX"
+        ]
+    },
+    "CreateAlembicCamera": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateCompositeSequence": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreatePointCache": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateRedshiftROP": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateRemotePublish": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateVDBCache": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
+    "CreateUSD": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "CreateUSDModel": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "USDCreateShadingWorkspace": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+    "CreateUSDRender": {
+        "enabled": False,
+        "default_variants": ["Main"]
+    },
+}
+
+
+# Publish Plugins
+class ValidateWorkfilePathsModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    node_types: list[str] = Field(
+        default_factory=list,
+        title="Node Types"
+    )
+    prohibited_vars: list[str] = Field(
+        default_factory=list,
+        title="Prohibited Variables"
+    )
+
+
+class BasicValidateModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field(
+        default_factory=ValidateWorkfilePathsModel,
+        title="Validate workfile paths settings.")
+    ValidateReviewColorspace: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Review Colorspace.")
+    ValidateContainers: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Latest Containers.")
+    ValidateSubsetName: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Subset Name.")
+    ValidateMeshIsStatic: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Mesh is Static.")
+    ValidateUnrealStaticMeshName: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Unreal Static Mesh Name.")
+
+
+DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
+    "ValidateWorkfilePaths": {
+        "enabled": True,
+        "optional": True,
"node_types": [ + "file", + "alembic" + ], + "prohibited_vars": [ + "$HIP", + "$JOB" + ] + }, + "ValidateReviewColorspace": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSubsetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshIsStatic": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateUnrealStaticMeshName": { + "enabled": False, + "optional": True, + "active": True + } +} diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py new file mode 100644 index 0000000000..bbab0242f6 --- /dev/null +++ b/server_addon/houdini/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.4" diff --git a/server_addon/kitsu/server/__init__.py b/server_addon/kitsu/server/__init__.py new file mode 100644 index 0000000000..69cf812dea --- /dev/null +++ b/server_addon/kitsu/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import KitsuSettings, DEFAULT_VALUES + + +class KitsuAddon(BaseServerAddon): + name = "kitsu" + title = "Kitsu" + version = __version__ + settings_model: Type[KitsuSettings] = KitsuSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/kitsu/server/settings.py b/server_addon/kitsu/server/settings.py new file mode 100644 index 0000000000..a4d10d889d --- /dev/null +++ b/server_addon/kitsu/server/settings.py @@ -0,0 +1,112 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class EntityPattern(BaseSettingsModel): + episode: str = Field(title="Episode") + sequence: str = Field(title="Sequence") + shot: str = Field(title="Shot") + + +def _status_change_cond_enum(): + return [ + {"value": "equal", "label": "Equal"}, + {"value": "not_equal", "label": "Not equal"} + ] + + +class StatusChangeCondition(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + short_name: str = Field("", title="Short name") + + +class StatusChangeProductTypeRequirementModel(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + product_type: str = Field("", title="Product type") + + +class StatusChangeConditionsModel(BaseSettingsModel): + status_conditions: list[StatusChangeCondition] = Field( + default_factory=list, + title="Status conditions" + ) + product_type_requirements: list[StatusChangeProductTypeRequirementModel] = Field( + default_factory=list, + title="Product type requirements") + + +class CustomCommentTemplateModel(BaseSettingsModel): + enabled: bool = Field(True) + comment_template: str = Field("", title="Custom comment") + + +class IntegrateKitsuNotes(BaseSettingsModel): + """Kitsu supports markdown and here you can create a custom comment template. + + You can use data from your publishing instance's data. 
+ """ + + set_status_note: bool = Field(title="Set status on note") + note_status_shortname: str = Field(title="Note shortname") + status_change_conditions: StatusChangeConditionsModel = Field( + default_factory=StatusChangeConditionsModel, + title="Status change conditions" + ) + custom_comment_template: CustomCommentTemplateModel = Field( + default_factory=CustomCommentTemplateModel, + title="Custom Comment Template", + ) + + +class PublishPlugins(BaseSettingsModel): + IntegrateKitsuNote: IntegrateKitsuNotes = Field( + default_factory=IntegrateKitsuNotes, + title="Integrate Kitsu Note" + ) + + +class KitsuSettings(BaseSettingsModel): + server: str = Field( + "", + title="Kitsu Server", + scope=["studio"], + ) + entities_naming_pattern: EntityPattern = Field( + default_factory=EntityPattern, + title="Entities naming pattern", + ) + publish: PublishPlugins = Field( + default_factory=PublishPlugins, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "entities_naming_pattern": { + "episode": "E##", + "sequence": "SQ##", + "shot": "SH##" + }, + "publish": { + "IntegrateKitsuNote": { + "set_status_note": False, + "note_status_shortname": "wfa", + "status_change_conditions": { + "status_conditions": [], + "product_type_requirements": [] + }, + "custom_comment_template": { + "enabled": False, + "comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| product type | `{product[type]}` |\n| name | `{name}` |" + } + } + } +} diff --git a/server_addon/kitsu/server/version.py b/server_addon/kitsu/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/kitsu/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/max/server/__init__.py b/server_addon/max/server/__init__.py new file mode 100644 index 0000000000..31c694a084 --- /dev/null +++ b/server_addon/max/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MaxSettings, DEFAULT_VALUES + + +class MaxAddon(BaseServerAddon): + name = "max" + title = "Max" + version = __version__ + settings_model: Type[MaxSettings] = MaxSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/max/server/settings/__init__.py b/server_addon/max/server/settings/__init__.py new file mode 100644 index 0000000000..986b1903a5 --- /dev/null +++ b/server_addon/max/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + MaxSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "MaxSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/max/server/settings/imageio.py b/server_addon/max/server/settings/imageio.py new file mode 100644 index 0000000000..5e46104fa7 --- /dev/null +++ b/server_addon/max/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File 
extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/max/server/settings/main.py b/server_addon/max/server/settings/main.py new file mode 100644 index 0000000000..7f4561cbb1 --- /dev/null +++ b/server_addon/max/server/settings/main.py @@ -0,0 +1,60 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import ImageIOSettings +from .render_settings import ( + RenderSettingsModel, DEFAULT_RENDER_SETTINGS +) +from .publishers import ( + PublishersModel, DEFAULT_PUBLISH_SETTINGS +) + + +class PRTAttributesModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Attribute") + + +class PointCloudSettings(BaseSettingsModel): + attribute: list[PRTAttributesModel] = Field( + default_factory=list, title="Channel Attribute") + + +class MaxSettings(BaseSettingsModel): + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (ImageIO)" + ) + RenderSettings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, + title="Render Settings" + ) + PointCloud: PointCloudSettings = Field( + default_factory=PointCloudSettings, + title="Point Cloud" + ) + publish: PublishersModel = Field( + default_factory=PublishersModel, + title="Publish Plugins") + + +DEFAULT_VALUES = { + "RenderSettings": DEFAULT_RENDER_SETTINGS, + "PointCloud": { + "attribute": [ + {"name": "Age", "value": "age"}, + {"name": "Radius", "value": "radius"}, + {"name": "Position", "value": "position"}, + {"name": "Rotation", "value": "rotation"}, + {"name": "Scale", "value": "scale"}, + {"name": "Velocity", "value": "velocity"}, + {"name": "Color", "value": "color"}, + {"name": "TextureCoordinate", "value": "texcoord"}, + {"name": "MaterialID", "value": "matid"}, + {"name": "custFloats", "value": "custFloats"}, + {"name": "custVecs", "value": "custVecs"}, + ] + }, + "publish": DEFAULT_PUBLISH_SETTINGS + +} diff --git a/server_addon/max/server/settings/publishers.py b/server_addon/max/server/settings/publishers.py new file mode 100644 index 0000000000..a695b85e89 --- /dev/null +++ b/server_addon/max/server/settings/publishers.py @@ -0,0 +1,26 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishersModel(BaseSettingsModel): + ValidateFrameRange: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Frame Range", + section="Validators" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/max/server/settings/render_settings.py b/server_addon/max/server/settings/render_settings.py new file mode 100644 index 0000000000..c00cb5e436 
--- /dev/null
+++ b/server_addon/max/server/settings/render_settings.py
@@ -0,0 +1,49 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+def aov_separators_enum():
+    return [
+        {"value": "dash", "label": "- (dash)"},
+        {"value": "underscore", "label": "_ (underscore)"},
+        {"value": "dot", "label": ". (dot)"}
+    ]
+
+
+def image_format_enum():
+    """Return enumerator for image output formats."""
+    return [
+        {"label": "bmp", "value": "bmp"},
+        {"label": "exr", "value": "exr"},
+        {"label": "tif", "value": "tif"},
+        {"label": "tiff", "value": "tiff"},
+        {"label": "jpg", "value": "jpg"},
+        {"label": "png", "value": "png"},
+        {"label": "tga", "value": "tga"},
+        {"label": "dds", "value": "dds"}
+    ]
+
+
+class RenderSettingsModel(BaseSettingsModel):
+    default_render_image_folder: str = Field(
+        title="Default render image folder"
+    )
+    aov_separator: str = Field(
+        "underscore",
+        title="AOV Separator character",
+        enum_resolver=aov_separators_enum
+    )
+    image_format: str = Field(
+        enum_resolver=image_format_enum,
+        title="Output Image Format"
+    )
+    multipass: bool = Field(title="multipass")
+
+
+DEFAULT_RENDER_SETTINGS = {
+    "default_render_image_folder": "renders/3dsmax",
+    "aov_separator": "underscore",
+    "image_format": "exr",
+    "multipass": True
+}
diff --git a/server_addon/max/server/version.py b/server_addon/max/server/version.py
new file mode 100644
index 0000000000..3dc1f76bc6
--- /dev/null
+++ b/server_addon/max/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.0"
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/LICENSE b/server_addon/maya/LICENCE
similarity index 99%
rename from openpype/modules/ftrack/python2_vendor/arrow/LICENSE
rename to server_addon/maya/LICENCE
index 2bef500de7..261eeb9e9f 100644
--- a/openpype/modules/ftrack/python2_vendor/arrow/LICENSE
+++ b/server_addon/maya/LICENCE
@@ -186,7 +186,7 @@
       same "printed page" as the copyright notice for easier
       identification within third-party archives.
 
-   Copyright 2019 Chris Smith
+   Copyright [yyyy] [name of copyright owner]
 
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
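Every addon in this changeset wires its settings the same way: a pydantic model describes the schema and a DEFAULT_* dict supplies the values, which get_default_settings() feeds back through settings_model_cls(**DEFAULT_VALUES). Below is a minimal sketch of that round trip, using plain pydantic in place of ayon_server's BaseSettingsModel; ExampleCreatorModel and its values are illustrative only and not part of this PR.

# Minimal sketch of the settings round trip used by the addons above.
# Plain pydantic stands in for ayon_server's BaseSettingsModel, which
# layers titles, layouts and scopes on top of the same mechanism.
from pydantic import BaseModel, Field


class ExampleCreatorModel(BaseModel):
    enabled: bool = Field(True, title="Enabled")
    default_variants: list[str] = Field(
        default_factory=list, title="Default Products"
    )


DEFAULT_VALUES = {
    "enabled": True,
    "default_variants": ["Main"],
}

# Mirrors BaseServerAddon.get_default_settings() in the __init__.py files.
settings = ExampleCreatorModel(**DEFAULT_VALUES)
print(settings.default_variants)  # ['Main']

Because the dict is validated against the model, a default key that matches no field is at best silently ignored (or rejected, depending on the model's extra-field policy), so the intended value never reaches the settings UI. That is why default keys such as static_mesh_prefixes in the Houdini and Maya creator settings have to track the field names exactly.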
diff --git a/server_addon/maya/README.md b/server_addon/maya/README.md
new file mode 100644
index 0000000000..c65c09fba0
--- /dev/null
+++ b/server_addon/maya/README.md
@@ -0,0 +1,4 @@
+Maya Integration Addon
+======================
+
+WIP
diff --git a/server_addon/maya/server/__init__.py b/server_addon/maya/server/__init__.py
new file mode 100644
index 0000000000..8784427dcf
--- /dev/null
+++ b/server_addon/maya/server/__init__.py
@@ -0,0 +1,16 @@
+"""Maya Addon Module"""
+from ayon_server.addons import BaseServerAddon
+
+from .settings.main import MayaSettings, DEFAULT_MAYA_SETTING
+from .version import __version__
+
+
+class MayaAddon(BaseServerAddon):
+    name = "maya"
+    title = "Maya"
+    version = __version__
+    settings_model = MayaSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_MAYA_SETTING)
diff --git a/server_addon/maya/server/settings/__init__.py b/server_addon/maya/server/settings/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py
new file mode 100644
index 0000000000..84e873589d
--- /dev/null
+++ b/server_addon/maya/server/settings/creators.py
@@ -0,0 +1,421 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings import task_types_enum
+
+
+class CreateLookModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    make_tx: bool = Field(title="Make tx files")
+    rs_tex: bool = Field(title="Make Redshift texture files")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default Products"
+    )
+
+
+class BasicCreatorModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+
+
+class CreateUnrealStaticMeshModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+    static_mesh_prefixes: str = Field("S", title="Static Mesh Prefix")
+    collision_prefixes: list[str] = Field(
+        default_factory=list,
+        title="Collision Prefixes"
+    )
+
+
+class CreateUnrealSkeletalMeshModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default Products")
+    joint_hints: str = Field("jnt_org", title="Joint root hint")
+
+
+class CreateMultiverseLookModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    publish_mip_map: bool = Field(title="publish_mip_map")
+
+
+class BasicExportMeshModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    write_color_sets: bool = Field(title="Write Color Sets")
+    write_face_sets: bool = Field(title="Write Face Sets")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+
+
+class CreateAnimationModel(BaseSettingsModel):
+    write_color_sets: bool = Field(title="Write Color Sets")
+    write_face_sets: bool = Field(title="Write Face Sets")
+    include_parent_hierarchy: bool = Field(
+        title="Include Parent Hierarchy")
+    include_user_defined_attributes: bool = Field(
+        title="Include User Defined Attributes")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+
+
+class CreatePointCacheModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    write_color_sets: bool = Field(title="Write Color Sets")
+    write_face_sets: bool = Field(title="Write Face Sets")
+    include_user_defined_attributes: bool = Field(
+        title="Include User Defined Attributes"
+    )
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+
+
+class CreateProxyAlembicModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    write_color_sets: bool = Field(title="Write Color Sets")
+    write_face_sets: bool = Field(title="Write Face Sets")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Products"
+    )
+
+
+class CreateAssModel(BasicCreatorModel):
+    expandProcedurals: bool = Field(title="Expand Procedurals")
+    motionBlur: bool = Field(title="Motion Blur")
+    motionBlurKeys: int = Field(2, title="Motion Blur Keys")
+    motionBlurLength: float = Field(0.5, title="Motion Blur Length")
+    maskOptions: bool = Field(title="Mask Options")
+    maskCamera: bool = Field(title="Mask Camera")
+    maskLight: bool = Field(title="Mask Light")
+    maskShape: bool = Field(title="Mask Shape")
+    maskShader: bool = Field(title="Mask Shader")
+    maskOverride: bool = Field(title="Mask Override")
+    maskDriver: bool = Field(title="Mask Driver")
+    maskFilter: bool = Field(title="Mask Filter")
+    maskColor_manager: bool = Field(title="Mask Color Manager")
+    maskOperator: bool = Field(title="Mask Operator")
+
+
+class CreateReviewModel(BasicCreatorModel):
+    useMayaTimeline: bool = Field(title="Use Maya Timeline for Frame Range.")
+
+
+class CreateVrayProxyModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    vrmesh: bool = Field(title="VrMesh")
+    alembic: bool = Field(title="Alembic")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default Products")
+
+
+class CreateMultishotLayout(BasicCreatorModel):
+    shotParent: str = Field(title="Shot Parent Folder")
+    groupLoadedAssets: bool = Field(title="Group Loaded Assets")
+    task_type: list[str] = Field(
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_name: str = Field(title="Task name (regex)")
+
+
+class CreatorsModel(BaseSettingsModel):
+    CreateLook: CreateLookModel = Field(
+        default_factory=CreateLookModel,
+        title="Create Look"
+    )
+    CreateRender: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Render"
+    )
+    # "-" is not compatible in the new model
+    CreateUnrealStaticMesh: CreateUnrealStaticMeshModel = Field(
+        default_factory=CreateUnrealStaticMeshModel,
+        title="Create Unreal_Static Mesh"
+    )
+    # "-" is not compatible in the new model
+    CreateUnrealSkeletalMesh: CreateUnrealSkeletalMeshModel = Field(
+        default_factory=CreateUnrealSkeletalMeshModel,
+        title="Create Unreal_Skeletal Mesh"
+    )
+    CreateMultiverseLook: CreateMultiverseLookModel = Field(
+        default_factory=CreateMultiverseLookModel,
+        title="Create Multiverse Look"
+    )
+    CreateAnimation: CreateAnimationModel = Field(
+        default_factory=CreateAnimationModel,
+        title="Create Animation"
+    )
+    CreateModel: BasicExportMeshModel = Field(
+        default_factory=BasicExportMeshModel,
+        title="Create Model"
+    )
+    CreatePointCache: CreatePointCacheModel = Field(
+        default_factory=CreatePointCacheModel,
+        title="Create Point Cache"
+    )
+    CreateProxyAlembic: CreateProxyAlembicModel = Field(
+        default_factory=CreateProxyAlembicModel,
+        title="Create Proxy Alembic"
+    )
+    CreateMultiverseUsd: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
+        title="Create Multiverse USD"
+    )
+    CreateMultiverseUsdComp: BasicCreatorModel = Field(
+        default_factory=BasicCreatorModel,
Composition" + ) + CreateMultiverseUsdOver: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Override" + ) + CreateAss: CreateAssModel = Field( + default_factory=CreateAssModel, + title="Create Ass" + ) + CreateAssembly: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Assembly" + ) + CreateCamera: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Camera" + ) + CreateLayout: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Layout" + ) + CreateMayaScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Maya Scene" + ) + CreateRenderSetup: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render Setup" + ) + CreateReview: CreateReviewModel = Field( + default_factory=CreateReviewModel, + title="Create Review" + ) + CreateRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Rig" + ) + CreateSetDress: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Set Dress" + ) + CreateVrayProxy: CreateVrayProxyModel = Field( + default_factory=CreateVrayProxyModel, + title="Create VRay Proxy" + ) + CreateVRayScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create VRay Scene" + ) + CreateYetiRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Yeti Rig" + ) + + +DEFAULT_CREATORS_SETTINGS = { + "CreateLook": { + "enabled": True, + "make_tx": True, + "rs_tex": False, + "default_variants": [ + "Main" + ] + }, + "CreateRender": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateUnrealStaticMesh": { + "enabled": True, + "default_variants": [ + "", + "_Main" + ], + "static_mesh_prefix": "S", + "collision_prefixes": [ + "UBX", + "UCP", + "USP", + "UCX" + ] + }, + "CreateUnrealSkeletalMesh": { + "enabled": True, + "default_variants": [ + "Main", + ], + "joint_hints": "jnt_org" + }, + "CreateMultiverseLook": { + "enabled": True, + "publish_mip_map": True + }, + "CreateAnimation": { + "write_color_sets": False, + "write_face_sets": False, + "include_parent_hierarchy": False, + "include_user_defined_attributes": False, + "default_variants": [ + "Main" + ] + }, + "CreateModel": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "default_variants": [ + "Main", + "Proxy", + "Sculpt" + ] + }, + "CreatePointCache": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "include_user_defined_attributes": False, + "default_variants": [ + "Main" + ] + }, + "CreateProxyAlembic": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsd": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsdComp": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsdOver": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateAss": { + "enabled": True, + "default_variants": [ + "Main" + ], + "expandProcedurals": False, + "motionBlur": True, + "motionBlurKeys": 2, + "motionBlurLength": 0.5, + "maskOptions": False, + "maskCamera": False, + "maskLight": False, + "maskShape": False, + "maskShader": False, + "maskOverride": False, + "maskDriver": False, + "maskFilter": False, + "maskColor_manager": False, + "maskOperator": False + }, + "CreateAssembly": { + "enabled": True, + "default_variants": 
+            "Main"
+        ]
+    },
+    "CreateCamera": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateLayout": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateMayaScene": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateRenderSetup": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateReview": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ],
+        "useMayaTimeline": True
+    },
+    "CreateRig": {
+        "enabled": True,
+        "default_variants": [
+            "Main",
+            "Sim",
+            "Cloth"
+        ]
+    },
+    "CreateSetDress": {
+        "enabled": True,
+        "default_variants": [
+            "Main",
+            "Anim"
+        ]
+    },
+    "CreateVrayProxy": {
+        "enabled": True,
+        "vrmesh": True,
+        "alembic": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateVRayScene": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "CreateYetiRig": {
+        "enabled": True,
+        "default_variants": [
+            "Main"
+        ]
+    }
+}
diff --git a/server_addon/maya/server/settings/explicit_plugins_loading.py b/server_addon/maya/server/settings/explicit_plugins_loading.py
new file mode 100644
index 0000000000..394adb728f
--- /dev/null
+++ b/server_addon/maya/server/settings/explicit_plugins_loading.py
@@ -0,0 +1,429 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class PluginsModel(BaseSettingsModel):
+    _layout = "expanded"
+    enabled: bool = Field(title="Enabled")
+    name: str = Field("", title="Name")
+
+
+class ExplicitPluginsLoadingModel(BaseSettingsModel):
+    """Maya Explicit Plugins Loading."""
+    _isGroup: bool = True
+    enabled: bool = Field(title="enabled")
+    plugins_to_load: list[PluginsModel] = Field(
+        default_factory=list, title="Plugins To Load"
+    )
+
+
+DEFAULT_EXPLICIT_PLUGINS_LOADING_SETTINGS = {
+    "enabled": False,
+    "plugins_to_load": [
+        {
+            "enabled": False,
+            "name": "AbcBullet"
+        },
+        {
+            "enabled": True,
+            "name": "AbcExport"
+        },
+        {
+            "enabled": True,
+            "name": "AbcImport"
+        },
+        {
+            "enabled": False,
+            "name": "animImportExport"
+        },
+        {
+            "enabled": False,
+            "name": "ArubaTessellator"
+        },
+        {
+            "enabled": False,
+            "name": "ATFPlugin"
+        },
+        {
+            "enabled": False,
+            "name": "atomImportExport"
+        },
+        {
+            "enabled": False,
+            "name": "AutodeskPacketFile"
+        },
+        {
+            "enabled": False,
+            "name": "autoLoader"
+        },
+        {
+            "enabled": False,
+            "name": "bifmeshio"
+        },
+        {
+            "enabled": False,
+            "name": "bifrostGraph"
+        },
+        {
+            "enabled": False,
+            "name": "bifrostshellnode"
+        },
+        {
+            "enabled": False,
+            "name": "bifrostvisplugin"
+        },
+        {
+            "enabled": False,
+            "name": "blast2Cmd"
+        },
+        {
+            "enabled": False,
+            "name": "bluePencil"
+        },
+        {
+            "enabled": False,
+            "name": "Boss"
+        },
+        {
+            "enabled": False,
+            "name": "bullet"
+        },
+        {
+            "enabled": True,
+            "name": "cacheEvaluator"
+        },
+        {
+            "enabled": False,
+            "name": "cgfxShader"
+        },
+        {
+            "enabled": False,
+            "name": "cleanPerFaceAssignment"
+        },
+        {
+            "enabled": False,
+            "name": "clearcoat"
+        },
+        {
+            "enabled": False,
+            "name": "convertToComponentTags"
+        },
+        {
+            "enabled": False,
+            "name": "curveWarp"
+        },
+        {
+            "enabled": False,
+            "name": "ddsFloatReader"
+        },
+        {
+            "enabled": True,
+            "name": "deformerEvaluator"
+        },
+        {
+            "enabled": False,
+            "name": "dgProfiler"
+        },
+        {
+            "enabled": False,
+            "name": "drawUfe"
+        },
+        {
+            "enabled": False,
+            "name": "dx11Shader"
+        },
+        {
+            "enabled": False,
+            "name": "fbxmaya"
+        },
+        {
+            "enabled": False,
+            "name": "fltTranslator"
+        },
+        {
+            "enabled": False,
+            "name": "freeze"
+        },
+        {
+            "enabled": False,
+            "name": "Fur"
+        },
+        {
+            "enabled": False,
"gameFbxExporter" + }, + { + "enabled": False, + "name": "gameInputDevice" + }, + { + "enabled": False, + "name": "GamePipeline" + }, + { + "enabled": False, + "name": "gameVertexCount" + }, + { + "enabled": False, + "name": "geometryReport" + }, + { + "enabled": False, + "name": "geometryTools" + }, + { + "enabled": False, + "name": "glslShader" + }, + { + "enabled": True, + "name": "GPUBuiltInDeformer" + }, + { + "enabled": False, + "name": "gpuCache" + }, + { + "enabled": False, + "name": "hairPhysicalShader" + }, + { + "enabled": False, + "name": "ik2Bsolver" + }, + { + "enabled": False, + "name": "ikSpringSolver" + }, + { + "enabled": False, + "name": "invertShape" + }, + { + "enabled": False, + "name": "lges" + }, + { + "enabled": False, + "name": "lookdevKit" + }, + { + "enabled": False, + "name": "MASH" + }, + { + "enabled": False, + "name": "matrixNodes" + }, + { + "enabled": False, + "name": "mayaCharacterization" + }, + { + "enabled": False, + "name": "mayaHIK" + }, + { + "enabled": False, + "name": "MayaMuscle" + }, + { + "enabled": False, + "name": "mayaUsdPlugin" + }, + { + "enabled": False, + "name": "mayaVnnPlugin" + }, + { + "enabled": False, + "name": "melProfiler" + }, + { + "enabled": False, + "name": "meshReorder" + }, + { + "enabled": True, + "name": "modelingToolkit" + }, + { + "enabled": False, + "name": "mtoa" + }, + { + "enabled": False, + "name": "mtoh" + }, + { + "enabled": False, + "name": "nearestPointOnMesh" + }, + { + "enabled": True, + "name": "objExport" + }, + { + "enabled": False, + "name": "OneClick" + }, + { + "enabled": False, + "name": "OpenEXRLoader" + }, + { + "enabled": False, + "name": "pgYetiMaya" + }, + { + "enabled": False, + "name": "pgyetiVrayMaya" + }, + { + "enabled": False, + "name": "polyBoolean" + }, + { + "enabled": False, + "name": "poseInterpolator" + }, + { + "enabled": False, + "name": "quatNodes" + }, + { + "enabled": False, + "name": "randomizerDevice" + }, + { + "enabled": False, + "name": "redshift4maya" + }, + { + "enabled": True, + "name": "renderSetup" + }, + { + "enabled": False, + "name": "retargeterNodes" + }, + { + "enabled": False, + "name": "RokokoMotionLibrary" + }, + { + "enabled": False, + "name": "rotateHelper" + }, + { + "enabled": False, + "name": "sceneAssembly" + }, + { + "enabled": False, + "name": "shaderFXPlugin" + }, + { + "enabled": False, + "name": "shotCamera" + }, + { + "enabled": False, + "name": "snapTransform" + }, + { + "enabled": False, + "name": "stage" + }, + { + "enabled": True, + "name": "stereoCamera" + }, + { + "enabled": False, + "name": "stlTranslator" + }, + { + "enabled": False, + "name": "studioImport" + }, + { + "enabled": False, + "name": "Substance" + }, + { + "enabled": False, + "name": "substancelink" + }, + { + "enabled": False, + "name": "substancemaya" + }, + { + "enabled": False, + "name": "substanceworkflow" + }, + { + "enabled": False, + "name": "svgFileTranslator" + }, + { + "enabled": False, + "name": "sweep" + }, + { + "enabled": False, + "name": "testify" + }, + { + "enabled": False, + "name": "tiffFloatReader" + }, + { + "enabled": False, + "name": "timeSliderBookmark" + }, + { + "enabled": False, + "name": "Turtle" + }, + { + "enabled": False, + "name": "Type" + }, + { + "enabled": False, + "name": "udpDevice" + }, + { + "enabled": False, + "name": "ufeSupport" + }, + { + "enabled": False, + "name": "Unfold3D" + }, + { + "enabled": False, + "name": "VectorRender" + }, + { + "enabled": False, + "name": "vrayformaya" + }, + { + "enabled": False, + "name": "vrayvolumegrid" 
+        },
+        {
+            "enabled": False,
+            "name": "xgenToolkit"
+        },
+        {
+            "enabled": False,
+            "name": "xgenVray"
+        }
+    ]
+}
diff --git a/server_addon/maya/server/settings/imageio.py b/server_addon/maya/server/settings/imageio.py
new file mode 100644
index 0000000000..946a14c866
--- /dev/null
+++ b/server_addon/maya/server/settings/imageio.py
@@ -0,0 +1,127 @@
+"""Providing models and setting values for image IO in Maya.
+
+Note: Names were changed to get rid of the versions in class names.
+"""
+from pydantic import Field, validator
+
+from ayon_server.settings import BaseSettingsModel, ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ColorManagementPreferenceV2Model(BaseSettingsModel):
+    """Color Management Preference v2 (Maya 2022+).
+
+    Please migrate all to 'imageio/workfile' and enable it.
+    """
+
+    enabled: bool = Field(True, title="Use Color Management Preference v2")
+
+    renderSpace: str = Field(title="Rendering Space")
+    displayName: str = Field(title="Display")
+    viewName: str = Field(title="View")
+
+
+class ColorManagementPreferenceModel(BaseSettingsModel):
+    """Color Management Preference (legacy)."""
+
+    renderSpace: str = Field(title="Rendering Space")
+    viewTransform: str = Field(title="Viewer Transform")
+
+
+class WorkfileImageIOModel(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    renderSpace: str = Field(title="Rendering Space")
+    displayName: str = Field(title="Display")
+    viewName: str = Field(title="View")
+
+
+class ImageIOSettings(BaseSettingsModel):
+    """Maya color management project settings.
+
+    Todo: What to do with color management preferences version?
+ """ + + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileImageIOModel = Field( + default_factory=WorkfileImageIOModel, + title="Workfile" + ) + # Deprecated + colorManagementPreference_v2: ColorManagementPreferenceV2Model = Field( + default_factory=ColorManagementPreferenceV2Model, + title="DEPRECATED: Color Management Preference v2 (Maya 2022+)" + ) + colorManagementPreference: ColorManagementPreferenceModel = Field( + default_factory=ColorManagementPreferenceModel, + title="DEPRECATED: Color Management Preference (legacy)" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "activate_host_color_management": True, + "ocio_config": { + "override_global_config": False, + "filepath": [] + }, + "file_rules": { + "activate_host_rules": False, + "rules": [] + }, + "workfile": { + "enabled": False, + "renderSpace": "ACES - ACEScg", + "displayName": "ACES", + "viewName": "sRGB" + }, + "colorManagementPreference_v2": { + "enabled": True, + "renderSpace": "ACEScg", + "displayName": "sRGB", + "viewName": "ACES 1.0 SDR-video" + }, + "colorManagementPreference": { + "renderSpace": "scene-linear Rec 709/sRGB", + "viewTransform": "sRGB gamma" + } +} diff --git a/server_addon/maya/server/settings/include_handles.py b/server_addon/maya/server/settings/include_handles.py new file mode 100644 index 0000000000..3ba6aca66b --- /dev/null +++ b/server_addon/maya/server/settings/include_handles.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class IncludeByTaskTypeModel(BaseSettingsModel): + task_type: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + include_handles: bool = Field(True, title="Include handles") + + +class IncludeHandlesModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + include_handles_default: bool = Field( + True, title="Include handles by default" + ) + per_task_type: list[IncludeByTaskTypeModel] = Field( + default_factory=list, + title="Include/exclude handles by task type" + ) + + +DEFAULT_INCLUDE_HANDLES = { + "include_handles_default": False, + "per_task_type": [] +} diff --git a/server_addon/maya/server/settings/loaders.py b/server_addon/maya/server/settings/loaders.py new file mode 100644 index 0000000000..ed6b6fd2ac --- /dev/null +++ b/server_addon/maya/server/settings/loaders.py @@ -0,0 +1,129 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class ColorsSetting(BaseSettingsModel): + model: ColorRGBA_uint8 = Field( + (209, 132, 30, 1.0), title="Model:") + rig: ColorRGBA_uint8 = Field( + (59, 226, 235, 1.0), title="Rig:") + pointcache: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Pointcache:") + animation: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Animation:") + ass: ColorRGBA_uint8 = Field( + (249, 135, 53, 1.0), title="Arnold StandIn:") + camera: ColorRGBA_uint8 = Field( + (136, 114, 244, 1.0), title="Camera:") + fbx: ColorRGBA_uint8 = Field( + (215, 166, 255, 1.0), title="FBX:") + mayaAscii: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Ascii:") + mayaScene: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya 
Scene:") + setdress: ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Set Dress:") + layout: ColorRGBA_uint8 = Field(( + 255, 250, 90, 1.0), title="Layout:") + vdbcache: ColorRGBA_uint8 = Field( + (249, 54, 0, 1.0), title="VDB Cache:") + vrayproxy: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Proxy:") + vrayscene_layer: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Scene:") + yeticache: ColorRGBA_uint8 = Field( + (99, 206, 220, 1.0), title="Yeti Cache:") + yetiRig: ColorRGBA_uint8 = Field( + (0, 205, 125, 1.0), title="Yeti Rig:") + + +class ReferenceLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + display_handle: bool = Field(title="Display Handle On Load References") + + +class ImportLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + + +class LoadersModel(BaseSettingsModel): + colors: ColorsSetting = Field( + default_factory=ColorsSetting, + title="Loaded Products Outliner Colors") + + reference_loader: ReferenceLoaderModel = Field( + default_factory=ReferenceLoaderModel, + title="Reference Loader" + ) + + import_loader: ImportLoaderModel = Field( + default_factory=ImportLoaderModel, + title="Import Loader" + ) + +DEFAULT_LOADERS_SETTING = { + "colors": { + "model": [ + 209, 132, 30, 1.0 + ], + "rig": [ + 59, 226, 235, 1.0 + ], + "pointcache": [ + 94, 209, 30, 1.0 + ], + "animation": [ + 94, 209, 30, 1.0 + ], + "ass": [ + 249, 135, 53, 1.0 + ], + "camera": [ + 136, 114, 244, 1.0 + ], + "fbx": [ + 215, 166, 255, 1.0 + ], + "mayaAscii": [ + 67, 174, 255, 1.0 + ], + "mayaScene": [ + 67, 174, 255, 1.0 + ], + "setdress": [ + 255, 250, 90, 1.0 + ], + "layout": [ + 255, 250, 90, 1.0 + ], + "vdbcache": [ + 249, 54, 0, 1.0 + ], + "vrayproxy": [ + 255, 150, 12, 1.0 + ], + "vrayscene_layer": [ + 255, 150, 12, 1.0 + ], + "yeticache": [ + 99, 206, 220, 1.0 + ], + "yetiRig": [ + 0, 205, 125, 1.0 + ] + }, + "reference_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP", + "display_handle": True + }, + "import_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP", + "display_handle": True + } +} diff --git a/server_addon/maya/server/settings/main.py b/server_addon/maya/server/settings/main.py new file mode 100644 index 0000000000..c8021614be --- /dev/null +++ b/server_addon/maya/server/settings/main.py @@ -0,0 +1,141 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from .imageio import ImageIOSettings, DEFAULT_IMAGEIO_SETTINGS +from .maya_dirmap import MayaDirmapModel, DEFAULT_MAYA_DIRMAP_SETTINGS +from .include_handles import IncludeHandlesModel, DEFAULT_INCLUDE_HANDLES +from .explicit_plugins_loading import ( + ExplicitPluginsLoadingModel, DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS +) +from .scriptsmenu import ScriptsmenuModel, DEFAULT_SCRIPTSMENU_SETTINGS +from .render_settings import RenderSettingsModel, DEFAULT_RENDER_SETTINGS +from .creators import CreatorsModel, DEFAULT_CREATORS_SETTINGS +from .publishers import PublishersModel, DEFAULT_PUBLISH_SETTINGS +from .loaders import LoadersModel, DEFAULT_LOADERS_SETTING +from .workfile_build_settings import ProfilesModel, DEFAULT_WORKFILE_SETTING +from .templated_workfile_settings import ( + TemplatedProfilesModel, DEFAULT_TEMPLATED_WORKFILE_SETTINGS +) + + +class ExtMappingItemModel(BaseSettingsModel): + _layout = "compact" + name: str = 
Field(title="Product type") + value: str = Field(title="Extension") + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class MayaSettings(BaseSettingsModel): + """Maya Project Settings.""" + + open_workfile_post_initialization: bool = Field( + True, title="Open Workfile Post Initialization") + explicit_plugins_loading: ExplicitPluginsLoadingModel = Field( + default_factory=ExplicitPluginsLoadingModel, + title="Explicit Plugins Loading") + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, title="Color Management (imageio)") + mel_workspace: str = Field(title="Maya MEL Workspace", widget="textarea") + ext_mapping: list[ExtMappingItemModel] = Field( + default_factory=list, title="Extension Mapping") + maya_dirmap: MayaDirmapModel = Field( + default_factory=MayaDirmapModel, title="Maya dirmap Settings") + include_handles: IncludeHandlesModel = Field( + default_factory=IncludeHandlesModel, + title="Include/Exclude Handles in default playback & render range" + ) + scriptsmenu: ScriptsmenuModel = Field( + default_factory=ScriptsmenuModel, + title="Scriptsmenu Settings" + ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") + create: CreatorsModel = Field( + default_factory=CreatorsModel, title="Creators") + publish: PublishersModel = Field( + default_factory=PublishersModel, title="Publishers") + load: LoadersModel = Field( + default_factory=LoadersModel, title="Loaders") + workfile_build: ProfilesModel = Field( + default_factory=ProfilesModel, title="Workfile Build Settings") + templated_workfile_build: TemplatedProfilesModel = Field( + default_factory=TemplatedProfilesModel, + title="Templated Workfile Build Settings") + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters", "ext_mapping") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join(( + 'workspace -fr "shaders" "renderData/shaders";', + 'workspace -fr "images" "renders/maya";', + 'workspace -fr "particles" "particles";', + 'workspace -fr "mayaAscii" "";', + 'workspace -fr "mayaBinary" "";', + 'workspace -fr "scene" "";', + 'workspace -fr "alembicCache" "cache/alembic";', + 'workspace -fr "renderData" "renderData";', + 'workspace -fr "sourceImages" "sourceimages";', + 'workspace -fr "fileCache" "cache/nCache";', + '', +)) + +DEFAULT_MAYA_SETTING = { + "open_workfile_post_initialization": False, + "explicit_plugins_loading": DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "mel_workspace": DEFAULT_MEL_WORKSPACE_SETTINGS, + "ext_mapping": [ + {"name": "model", "value": "ma"}, + {"name": "mayaAscii", "value": "ma"}, + {"name": "camera", "value": "ma"}, + {"name": "rig", "value": "ma"}, + {"name": "workfile", "value": "ma"}, + {"name": "yetiRig", "value": "ma"} + ], + # `maya_dirmap` was originally with dash - `maya-dirmap` + "maya_dirmap": DEFAULT_MAYA_DIRMAP_SETTINGS, + "include_handles": DEFAULT_INCLUDE_HANDLES, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + 
"render_settings": DEFAULT_RENDER_SETTINGS, + "create": DEFAULT_CREATORS_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADERS_SETTING, + "workfile_build": DEFAULT_WORKFILE_SETTING, + "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS, + "filters": [ + { + "name": "preset 1", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + {"name": "ValidateShapeDefaultNames", "value": False}, + ] + }, + { + "name": "preset 2", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + ] + }, + ] +} diff --git a/server_addon/maya/server/settings/maya_dirmap.py b/server_addon/maya/server/settings/maya_dirmap.py new file mode 100644 index 0000000000..243261dc87 --- /dev/null +++ b/server_addon/maya/server/settings/maya_dirmap.py @@ -0,0 +1,40 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class MayaDirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, title="Destination Paths" + ) + + +class MayaDirmapModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + # Use ${} placeholder instead of absolute value of a root in + # referenced filepaths. + use_env_var_as_root: bool = Field( + title="Use env var placeholder in referenced paths" + ) + paths: MayaDirmapPathsSubmodel = Field( + default_factory=MayaDirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +DEFAULT_MAYA_DIRMAP_SETTINGS = { + "use_env_var_as_root": False, + "enabled": False, + "paths": { + "source-path": [], + "destination-path": [] + } +} diff --git a/server_addon/maya/server/settings/publish_playblast.py b/server_addon/maya/server/settings/publish_playblast.py new file mode 100644 index 0000000000..acfcaf5988 --- /dev/null +++ b/server_addon/maya/server/settings/publish_playblast.py @@ -0,0 +1,382 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.types import ColorRGBA_uint8 + + +def hardware_falloff_enum(): + return [ + {"label": "Linear", "value": "0"}, + {"label": "Exponential", "value": "1"}, + {"label": "Exponential Squared", "value": "2"} + ] + + +def renderer_enum(): + return [ + {"label": "Viewport 2.0", "value": "vp2Renderer"} + ] + + +def displayLights_enum(): + return [ + {"label": "Default Lighting", "value": "default"}, + {"label": "All Lights", "value": "all"}, + {"label": "Selected Lights", "value": "selected"}, + {"label": "Flat Lighting", "value": "flat"}, + {"label": "No Lights", "value": "nolights"} + ] + + +def plugin_objects_default(): + return [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + + +class CodecSetting(BaseSettingsModel): + _layout = "expanded" + compression: str = Field("png", title="Encoding") + format: str = Field("image", title="Format") + quality: int = Field(95, title="Quality", ge=0, le=100) + + +class DisplayOptionsSetting(BaseSettingsModel): + _layout = "expanded" + override_display: bool = Field(True, title="Override display options") + background: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Color" + ) + displayGradient: bool = Field(True, title="Display background gradient") + backgroundTop: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Top" + ) + backgroundBottom: ColorRGBA_uint8 = 
+        (125, 125, 125, 1.0), title="Background Bottom"
+    )
+
+
+class GenericSetting(BaseSettingsModel):
+    _layout = "expanded"
+    isolate_view: bool = Field(True, title="Isolate View")
+    off_screen: bool = Field(True, title="Off Screen")
+    pan_zoom: bool = Field(False, title="2D Pan/Zoom")
+
+
+class RendererSetting(BaseSettingsModel):
+    _layout = "expanded"
+    rendererName: str = Field(
+        "vp2Renderer",
+        enum_resolver=renderer_enum,
+        title="Renderer name"
+    )
+
+
+class ResolutionSetting(BaseSettingsModel):
+    _layout = "expanded"
+    width: int = Field(0, title="Width")
+    height: int = Field(0, title="Height")
+
+
+class PluginObjectsModel(BaseSettingsModel):
+    name: str = Field("", title="Name")
+    value: bool = Field(True, title="Enabled")
+
+
+class ViewportOptionsSetting(BaseSettingsModel):
+    override_viewport_options: bool = Field(
+        True, title="Override viewport options"
+    )
+    displayLights: str = Field(
+        "default", enum_resolver=displayLights_enum, title="Display Lights"
+    )
+    displayTextures: bool = Field(True, title="Display Textures")
+    textureMaxResolution: int = Field(1024, title="Texture Clamp Resolution")
+    renderDepthOfField: bool = Field(
+        True, title="Depth of Field", section="Depth of Field"
+    )
+    shadows: bool = Field(True, title="Display Shadows")
+    twoSidedLighting: bool = Field(True, title="Two Sided Lighting")
+    lineAAEnable: bool = Field(
+        True, title="Enable Anti-Aliasing", section="Anti-Aliasing"
+    )
+    multiSample: int = Field(8, title="Anti Aliasing Samples")
+    useDefaultMaterial: bool = Field(False, title="Use Default Material")
+    wireframeOnShaded: bool = Field(False, title="Wireframe On Shaded")
+    xray: bool = Field(False, title="X-Ray")
+    jointXray: bool = Field(False, title="X-Ray Joints")
+    backfaceCulling: bool = Field(False, title="Backface Culling")
+    ssaoEnable: bool = Field(
+        False, title="Screen Space Ambient Occlusion", section="SSAO"
+    )
+    ssaoAmount: int = Field(1, title="SSAO Amount")
+    ssaoRadius: int = Field(16, title="SSAO Radius")
+    ssaoFilterRadius: int = Field(16, title="SSAO Filter Radius")
+    ssaoSamples: int = Field(16, title="SSAO Samples")
+    fogging: bool = Field(False, title="Enable Hardware Fog", section="Fog")
+    hwFogFalloff: str = Field(
+        "0", enum_resolver=hardware_falloff_enum, title="Hardware Falloff"
+    )
+    hwFogDensity: float = Field(0.0, title="Fog Density")
+    hwFogStart: int = Field(0, title="Fog Start")
+    hwFogEnd: int = Field(100, title="Fog End")
+    hwFogAlpha: int = Field(0, title="Fog Alpha")
+    hwFogColorR: float = Field(1.0, title="Fog Color R")
+    hwFogColorG: float = Field(1.0, title="Fog Color G")
+    hwFogColorB: float = Field(1.0, title="Fog Color B")
+    motionBlurEnable: bool = Field(
+        False, title="Enable Motion Blur", section="Motion Blur"
+    )
+    motionBlurSampleCount: int = Field(8, title="Motion Blur Sample Count")
+    motionBlurShutterOpenFraction: float = Field(
+        0.2, title="Shutter Open Fraction"
+    )
+    cameras: bool = Field(False, title="Cameras", section="Show")
+    clipGhosts: bool = Field(False, title="Clip Ghosts")
+    deformers: bool = Field(False, title="Deformers")
+    dimensions: bool = Field(False, title="Dimensions")
+    dynamicConstraints: bool = Field(False, title="Dynamic Constraints")
+    dynamics: bool = Field(False, title="Dynamics")
+    fluids: bool = Field(False, title="Fluids")
+    follicles: bool = Field(False, title="Follicles")
+    greasePencils: bool = Field(False, title="Grease Pencils")
+    grid: bool = Field(False, title="Grid")
+    hairSystems: bool = Field(True, title="Hair Systems")
= Field(False, title="Handles")
+    headsUpDisplay: bool = Field(False, title="HUD")
+    ikHandles: bool = Field(False, title="IK Handles")
+    imagePlane: bool = Field(True, title="Image Plane")
+    joints: bool = Field(False, title="Joints")
+    lights: bool = Field(False, title="Lights")
+    locators: bool = Field(False, title="Locators")
+    manipulators: bool = Field(False, title="Manipulators")
+    motionTrails: bool = Field(False, title="Motion Trails")
+    nCloths: bool = Field(False, title="nCloths")
+    nParticles: bool = Field(False, title="nParticles")
+    nRigids: bool = Field(False, title="nRigids")
+    controlVertices: bool = Field(False, title="NURBS CVs")
+    nurbsCurves: bool = Field(False, title="NURBS Curves")
+    hulls: bool = Field(False, title="NURBS Hulls")
+    nurbsSurfaces: bool = Field(False, title="NURBS Surfaces")
+    particleInstancers: bool = Field(False, title="Particle Instancers")
+    pivots: bool = Field(False, title="Pivots")
+    planes: bool = Field(False, title="Planes")
+    pluginShapes: bool = Field(False, title="Plugin Shapes")
+    polymeshes: bool = Field(True, title="Polygons")
+    strokes: bool = Field(False, title="Strokes")
+    subdivSurfaces: bool = Field(False, title="Subdiv Surfaces")
+    textures: bool = Field(False, title="Texture Placements")
+    pluginObjects: list[PluginObjectsModel] = Field(
+        default_factory=plugin_objects_default,
+        title="Plugin Objects"
+    )
+
+    @validator("pluginObjects")
+    def validate_unique_plugin_objects(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class CameraOptionsSetting(BaseSettingsModel):
+    displayGateMask: bool = Field(False, title="Display Gate Mask")
+    displayResolution: bool = Field(False, title="Display Resolution")
+    displayFilmGate: bool = Field(False, title="Display Film Gate")
+    displayFieldChart: bool = Field(False, title="Display Field Chart")
+    displaySafeAction: bool = Field(False, title="Display Safe Action")
+    displaySafeTitle: bool = Field(False, title="Display Safe Title")
+    displayFilmPivot: bool = Field(False, title="Display Film Pivot")
+    displayFilmOrigin: bool = Field(False, title="Display Film Origin")
+    overscan: float = Field(1.0, title="Overscan")
+
+
+class CapturePresetSetting(BaseSettingsModel):
+    Codec: CodecSetting = Field(
+        default_factory=CodecSetting,
+        title="Codec",
+        section="Codec")
+    DisplayOptions: DisplayOptionsSetting = Field(
+        default_factory=DisplayOptionsSetting,
+        title="Display Options",
+        section="Display Options")
+    Generic: GenericSetting = Field(
+        default_factory=GenericSetting,
+        title="Generic",
+        section="Generic")
+    Renderer: RendererSetting = Field(
+        default_factory=RendererSetting,
+        title="Renderer",
+        section="Renderer")
+    Resolution: ResolutionSetting = Field(
+        default_factory=ResolutionSetting,
+        title="Resolution",
+        section="Resolution")
+    ViewportOptions: ViewportOptionsSetting = Field(
+        default_factory=ViewportOptionsSetting,
+        title="Viewport Options")
+    CameraOptions: CameraOptionsSetting = Field(
+        default_factory=CameraOptionsSetting,
+        title="Camera Options")
+
+
+class ProfilesModel(BaseSettingsModel):
+    _layout = "expanded"
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(default_factory=list, title="Task names")
+    product_names: list[str] = Field(default_factory=list, title="Product names")
+    capture_preset: CapturePresetSetting = Field(
+        default_factory=CapturePresetSetting,
+        title="Capture Preset"
+    )
+
+
+class ExtractPlayblastSetting(BaseSettingsModel):
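+    # A hedged sketch of how the "profiles" list is meant to be used
+    # (the actual matching logic lives in the host-side publish plugin,
+    # not in this settings model): an entry whose "task_types",
+    # "task_names" and "product_names" lists are empty is typically
+    # treated as match-all, so e.g.
+    #     {"task_types": [], "task_names": ["animation"],
+    #      "product_names": [], "capture_preset": {...}}
+    # would apply its capture preset to anything published from an
+    # "animation" task. The flat "capture_preset" below is kept only
+    # for backwards compatibility, as its deprecated title states.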
capture_preset: CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="DEPRECATED! Please use \"Profiles\" below. Capture Preset" + ) + profiles: list[ProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_PLAYBLAST_SETTING = { + "capture_preset": { + "Codec": { + "compression": "png", + "format": "image", + "quality": 95 + }, + "DisplayOptions": { + "override_display": True, + "background": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundBottom": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundTop": [ + 125, + 125, + 125, + 1.0 + ], + "displayGradient": True + }, + "Generic": { + "isolate_view": True, + "off_screen": True, + "pan_zoom": False + }, + "Renderer": { + "rendererName": "vp2Renderer" + }, + "Resolution": { + "width": 1920, + "height": 1080 + }, + "ViewportOptions": { + "override_viewport_options": True, + "displayLights": "default", + "displayTextures": True, + "textureMaxResolution": 1024, + "renderDepthOfField": True, + "shadows": True, + "twoSidedLighting": True, + "lineAAEnable": True, + "multiSample": 8, + "useDefaultMaterial": False, + "wireframeOnShaded": False, + "xray": False, + "jointXray": False, + "backfaceCulling": False, + "ssaoEnable": False, + "ssaoAmount": 1, + "ssaoRadius": 16, + "ssaoFilterRadius": 16, + "ssaoSamples": 16, + "fogging": False, + "hwFogFalloff": "0", + "hwFogDensity": 0.0, + "hwFogStart": 0, + "hwFogEnd": 100, + "hwFogAlpha": 0, + "hwFogColorR": 1.0, + "hwFogColorG": 1.0, + "hwFogColorB": 1.0, + "motionBlurEnable": False, + "motionBlurSampleCount": 8, + "motionBlurShutterOpenFraction": 0.2, + "cameras": False, + "clipGhosts": False, + "deformers": False, + "dimensions": False, + "dynamicConstraints": False, + "dynamics": False, + "fluids": False, + "follicles": False, + "greasePencils": False, + "grid": False, + "hairSystems": True, + "handles": False, + "headsUpDisplay": False, + "ikHandles": False, + "imagePlane": True, + "joints": False, + "lights": False, + "locators": False, + "manipulators": False, + "motionTrails": False, + "nCloths": False, + "nParticles": False, + "nRigids": False, + "controlVertices": False, + "nurbsCurves": False, + "hulls": False, + "nurbsSurfaces": False, + "particleInstancers": False, + "pivots": False, + "planes": False, + "pluginShapes": False, + "polymeshes": True, + "strokes": False, + "subdivSurfaces": False, + "textures": False, + "pluginObjects": [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + }, + "CameraOptions": { + "displayGateMask": False, + "displayResolution": False, + "displayFilmGate": False, + "displayFieldChart": False, + "displaySafeAction": False, + "displaySafeTitle": False, + "displayFilmPivot": False, + "displayFilmOrigin": False, + "overscan": 1.0 + } + }, + "profiles": [] +} diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py new file mode 100644 index 0000000000..dd8d4a0a37 --- /dev/null +++ b/server_addon/maya/server/settings/publishers.py @@ -0,0 +1,1336 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException +from .publish_playblast import ( + ExtractPlayblastSetting, + DEFAULT_PLAYBLAST_SETTING, +) + + +def linear_unit_enum(): + """Get linear units enumerator.""" + return [ + {"label": "mm", "value": "millimeter"}, + {"label": "cm", "value": "centimeter"}, + {"label": "m", "value": "meter"}, + 
{"label": "km", "value": "kilometer"}, + {"label": "in", "value": "inch"}, + {"label": "ft", "value": "foot"}, + {"label": "yd", "value": "yard"}, + {"label": "mi", "value": "mile"} + ] + + +def angular_unit_enum(): + """Get angular units enumerator.""" + return [ + {"label": "deg", "value": "degree"}, + {"label": "rad", "value": "radian"}, + ] + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateMeshUVSetMap1Model(BasicValidateModel): + """Validate model's default uv set exists and is named 'map1'.""" + pass + + +class ValidateNoAnimationModel(BasicValidateModel): + """Ensure no keyframes on nodes in the Instance.""" + pass + + +class ValidateRigOutSetNodeIdsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateSkinclusterDeformerSet") + optional: bool = Field(title="Optional") + allow_history_only: bool = Field(title="Allow history only") + + +class ValidateModelNameModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + database: bool = Field(title="Use database shader name definitions") + material_file: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Material File", + description=( + "Path to material file defining list of material names to check." + ) + ) + regex: str = Field( + "(.*)_(\\d)*_(?P.*)_(GEO)", + title="Validation regex", + description=( + "Regex for validating name of top level group name. You can use" + " named capturing groups:(?P.*) for Asset name" + ) + ) + top_level_regex: str = Field( + ".*_GRP", + title="Top level group name regex", + description=( + "To check for asset in name so *_some_asset_name_GRP" + " is valid, use:.*?_(?P.*)_GEO" + ) + ) + + +class ValidateModelContentModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_top_group: bool = Field(title="Validate one top group") + + +class ValidateTransformNamingSuffixModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + SUFFIX_NAMING_TABLE: str = Field( + "{}", + title="Suffix Naming Tables", + widget="textarea", + description=( + "Validates transform suffix based on" + " the type of its children shapes." 
+ ) + ) + + @validator("SUFFIX_NAMING_TABLE") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + ALLOW_IF_NOT_IN_SUFFIX_TABLE: bool = Field( + title="Allow if suffix not in table" + ) + + +class CollectMayaRenderModel(BaseSettingsModel): + sync_workfile_version: bool = Field( + title="Sync render version with workfile" + ) + + +class CollectFbxAnimationModel(BaseSettingsModel): + enabled: bool = Field(title="Collect Fbx Animation") + + +class CollectFbxCameraModel(BaseSettingsModel): + enabled: bool = Field(title="CollectFbxCamera") + + +class CollectGLTFModel(BaseSettingsModel): + enabled: bool = Field(title="CollectGLTF") + + +class ValidateFrameRangeModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFrameRange") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_product_types: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + +class ValidateShaderNameModel(BaseSettingsModel): + """ + Shader name regex can use named capture group asset to validate against current asset name. + """ + enabled: bool = Field(title="ValidateShaderName") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + regex: str = Field("(?P.*)_(.*)_SHD", title="Validation regex") + + +class ValidateAttributesModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateAttributes") + attributes: str = Field( + "{}", title="Attributes", widget="textarea") + + @validator("attributes") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The attibutes can't be parsed as json object" + ) + return value + + +class ValidateLoadedPluginModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateLoadedPlugin") + optional: bool = Field(title="Optional") + whitelist_native_plugins: bool = Field( + title="Whitelist Maya Native Plugins" + ) + authorized_plugins: list[str] = Field( + default_factory=list, title="Authorized plugins" + ) + + +class ValidateMayaUnitsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateMayaUnits") + optional: bool = Field(title="Optional") + validate_linear_units: bool = Field(title="Validate linear units") + linear_units: str = Field( + enum_resolver=linear_unit_enum, title="Linear Units" + ) + validate_angular_units: bool = Field(title="Validate angular units") + angular_units: str = Field( + enum_resolver=angular_unit_enum, title="Angular units" + ) + validate_fps: bool = Field(title="Validate fps") + + +class ValidateUnrealStaticMeshNameModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateUnrealStaticMeshName") + optional: bool = Field(title="Optional") + validate_mesh: bool = Field(title="Validate mesh names") + validate_collision: bool = Field(title="Validate collison names") + + +class ValidateCycleErrorModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateCycleError") + optional: bool = Field(title="Optional") + families: list[str] = Field(default_factory=list, title="Families") + + +class ValidatePluginPathAttributesAttrModel(BaseSettingsModel): + name: str = 
Field(title="Node type") + value: str = Field(title="Attribute") + + +class ValidatePluginPathAttributesModel(BaseSettingsModel): + """Fill in the node types and attributes you want to validate. + +

e.g. AlembicNode.abc_file, the node type is AlembicNode
+    and the node attribute is abc_file
+    """
+
+    enabled: bool = True
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+    attribute: list[ValidatePluginPathAttributesAttrModel] = Field(
+        default_factory=list,
+        title="File Attribute"
+    )
+
+    @validator("attribute")
+    def validate_unique_attributes(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+# Validate Render Setting
+class RendererAttributesModel(BaseSettingsModel):
+    _layout = "compact"
+    type: str = Field(title="Type")
+    value: str = Field(title="Value")
+
+
+class ValidateRenderSettingsModel(BaseSettingsModel):
+    arnold_render_attributes: list[RendererAttributesModel] = Field(
+        default_factory=list, title="Arnold Render Attributes")
+    vray_render_attributes: list[RendererAttributesModel] = Field(
+        default_factory=list, title="VRay Render Attributes")
+    redshift_render_attributes: list[RendererAttributesModel] = Field(
+        default_factory=list, title="Redshift Render Attributes")
+    renderman_render_attributes: list[RendererAttributesModel] = Field(
+        default_factory=list, title="Renderman Render Attributes")
+
+
+class ValidateCameraContentsModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    validate_shapes: bool = Field(title="Validate presence of shapes")
+
+
+class ExtractProxyAlembicModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    families: list[str] = Field(
+        default_factory=list,
+        title="Families")
+
+
+class ExtractAlembicModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    families: list[str] = Field(
+        default_factory=list,
+        title="Families")
+
+
+class ExtractObjModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+
+
+class ExtractMayaSceneRawModel(BaseSettingsModel):
+    """Add loaded instances to these published families:"""
+    enabled: bool = Field(title="ExtractMayaSceneRaw")
+    add_for_families: list[str] = Field(default_factory=list, title="Families")
+
+
+class ExtractCameraAlembicModel(BaseSettingsModel):
+    """List of attributes that will be added to the baked alembic camera.
+
+    Needs to be written in python list syntax.
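+
+    For example (purely illustrative attribute names from a standard
+    Maya camera): ["focalLength", "fStop", "focusDistance"]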
+ """ + enabled: bool = Field(title="ExtractCameraAlembic") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + bake_attributes: str = Field( + "[]", title="Base Attributes", widget="textarea" + ) + + @validator("bake_attributes") + def validate_json_list(cls, value): + if not value.strip(): + return "[]" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, list) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + + +class ExtractGLBModel(BaseSettingsModel): + enabled: bool = True + active: bool = Field(title="Active") + ogsfx_path: str = Field(title="GLSL Shader Directory") + + +class ExtractLookArgsModel(BaseSettingsModel): + argument: str = Field(title="Argument") + parameters: list[str] = Field(default_factory=list, title="Parameters") + + +class ExtractLookModel(BaseSettingsModel): + maketx_arguments: list[ExtractLookArgsModel] = Field( + default_factory=list, + title="Extra arguments for maketx command line" + ) + + +class ExtractGPUCacheModel(BaseSettingsModel): + enabled: bool = True + families: list[str] = Field(default_factory=list, title="Families") + step: float = Field(1.0, ge=1.0, title="Step") + stepSave: int = Field(1, ge=1, title="Step Save") + optimize: bool = Field(title="Optimize Hierarchy") + optimizationThreshold: int = Field(1, ge=1, title="Optimization Threshold") + optimizeAnimationsForMotionBlur: bool = Field( + title="Optimize Animations For Motion Blur" + ) + writeMaterials: bool = Field(title="Write Materials") + useBaseTessellation: bool = Field(title="User Base Tesselation") + + +class PublishersModel(BaseSettingsModel): + CollectMayaRender: CollectMayaRenderModel = Field( + default_factory=CollectMayaRenderModel, + title="Collect Render Layers", + section="Collectors" + ) + CollectFbxAnimation: CollectFbxAnimationModel = Field( + default_factory=CollectFbxAnimationModel, + title="Collect FBX Animation", + ) + CollectFbxCamera: CollectFbxCameraModel = Field( + default_factory=CollectFbxCameraModel, + title="Collect Camera for FBX export", + ) + CollectGLTF: CollectGLTFModel = Field( + default_factory=CollectGLTFModel, + title="Collect Assets for GLB/GLTF export" + ) + ValidateInstanceInContext: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instance In Context", + section="Validators" + ) + ValidateContainers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Containers" + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + default_factory=ValidateFrameRangeModel, + title="Validate Frame Range" + ) + ValidateShaderName: ValidateShaderNameModel = Field( + default_factory=ValidateShaderNameModel, + title="Validate Shader Name" + ) + ValidateShadingEngine: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Look Shading Engine Naming" + ) + ValidateMayaColorSpace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Colorspace" + ) + ValidateAttributes: ValidateAttributesModel = Field( + default_factory=ValidateAttributesModel, + title="Validate Attributes" + ) + ValidateLoadedPlugin: ValidateLoadedPluginModel = Field( + default_factory=ValidateLoadedPluginModel, + title="Validate Loaded Plugin" + ) + ValidateMayaUnits: ValidateMayaUnitsModel = Field( + default_factory=ValidateMayaUnitsModel, + title="Validate Maya Units" + ) + 
ValidateUnrealStaticMeshName: ValidateUnrealStaticMeshNameModel = Field( + default_factory=ValidateUnrealStaticMeshNameModel, + title="Validate Unreal Static Mesh Name" + ) + ValidateCycleError: ValidateCycleErrorModel = Field( + default_factory=ValidateCycleErrorModel, + title="Validate Cycle Error" + ) + ValidatePluginPathAttributes: ValidatePluginPathAttributesModel = Field( + default_factory=ValidatePluginPathAttributesModel, + title="Plug-in Path Attributes" + ) + ValidateRenderSettings: ValidateRenderSettingsModel = Field( + default_factory=ValidateRenderSettingsModel, + title="Validate Render Settings" + ) + ValidateResolution: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Resolution Setting" + ) + ValidateCurrentRenderLayerIsRenderable: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Current Render Layer Has Renderable Camera" + ) + ValidateGLSLMaterial: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Material" + ) + ValidateGLSLPlugin: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Plugin" + ) + ValidateRenderImageRule: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Image Rule (Workspace)" + ) + ValidateRenderNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras Renderable" + ) + ValidateRenderSingleCamera: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Single Camera " + ) + ValidateRenderLayerAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Passes/AOVs Are Registered" + ) + ValidateStepSize: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Step Size" + ) + ValidateVRayDistributedRendering: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Distributed Rendering" + ) + ValidateVrayReferencedAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Referenced AOVs" + ) + ValidateVRayTranslatorEnabled: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Translator Settings" + ) + ValidateVrayProxy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Settings" + ) + ValidateVrayProxyMembers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Members" + ) + ValidateYetiRenderScriptCallbacks: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Render Script Callbacks" + ) + ValidateYetiRigCacheState: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Cache State" + ) + ValidateYetiRigInputShapesInInstance: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Input Shapes In Instance" + ) + ValidateYetiRigSettings: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Settings" + ) + # Model - START + ValidateModelName: ValidateModelNameModel = Field( + default_factory=ValidateModelNameModel, + title="Validate Model Name", + section="Model", + ) + ValidateModelContent: ValidateModelContentModel = Field( + default_factory=ValidateModelContentModel, + title="Validate Model Content", + ) + ValidateTransformNamingSuffix: ValidateTransformNamingSuffixModel = Field( + default_factory=ValidateTransformNamingSuffixModel, + 
title="Validate Transform Naming Suffix", + ) + ValidateColorSets: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Color Sets", + ) + ValidateMeshHasOverlappingUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has Overlapping UVs", + ) + ValidateMeshArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Arnold Attributes", + ) + ValidateMeshShaderConnections: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Shader Connections", + ) + ValidateMeshSingleUVSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Single UV Set", + ) + ValidateMeshHasUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has UVs", + ) + ValidateMeshLaminaFaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Lamina Faces", + ) + ValidateMeshNgons: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Ngons", + ) + ValidateMeshNonManifold: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Non-Manifold", + ) + ValidateMeshNoNegativeScale: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh No Negative Scale", + ) + ValidateMeshNonZeroEdgeLength: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Edge Length Non Zero", + ) + ValidateMeshNormalsUnlocked: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Normals Unlocked", + ) + ValidateMeshUVSetMap1: ValidateMeshUVSetMap1Model = Field( + default_factory=ValidateMeshUVSetMap1Model, + title="Validate Mesh UV Set Map 1", + ) + ValidateMeshVerticesHaveEdges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Vertices Have Edges", + ) + ValidateNoAnimation: ValidateNoAnimationModel = Field( + default_factory=ValidateNoAnimationModel, + title="Validate No Animation", + ) + ValidateNoNamespace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Namespace", + ) + ValidateNoNullTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Null Transforms", + ) + ValidateNoUnknownNodes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Unknown Nodes", + ) + ValidateNodeNoGhosting: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Node No Ghosting", + ) + ValidateShapeDefaultNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Default Names", + ) + ValidateShapeRenderStats: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Render Stats", + ) + ValidateShapeZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Zero", + ) + ValidateTransformZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Transform Zero", + ) + ValidateUniqueNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unique Names", + ) + ValidateNoVRayMesh: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No V-Ray Proxies (VRayMesh)", + ) + ValidateUnrealMeshTriangulated: BasicValidateModel = Field( + default_factory=BasicValidateModel, 
+ title="Validate if Mesh is Triangulated", + ) + ValidateAlembicVisibleOnly: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Alembic Visible Node", + ) + ExtractProxyAlembic: ExtractProxyAlembicModel = Field( + default_factory=ExtractProxyAlembicModel, + title="Extract Proxy Alembic", + section="Model Extractors", + ) + ExtractAlembic: ExtractAlembicModel = Field( + default_factory=ExtractAlembicModel, + title="Extract Alembic", + ) + ExtractObj: ExtractObjModel = Field( + default_factory=ExtractObjModel, + title="Extract OBJ" + ) + # Model - END + + # Rig - START + ValidateRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Contents", + section="Rig", + ) + ValidateRigJointsHidden: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Joints Hidden", + ) + ValidateRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers", + ) + ValidateAnimatedReferenceRig: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animated Reference Rig", + ) + ValidateAnimationContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Content", + ) + ValidateOutRelatedNodeIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Out Set Related Node Ids", + ) + ValidateRigControllersArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers (Arnold Attributes)", + ) + ValidateSkeletalMeshHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeletal Mesh Top Node", + ) + ValidateSkeletonRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Contents" + ) + ValidateSkeletonRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Controllers" + ) + ValidateSkinclusterDeformerSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skincluster Deformer Relationships", + ) + ValidateSkeletonRigOutputIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Output Ids" + ) + ValidateSkeletonTopGroupHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Top Group Hierarchy", + ) + ValidateRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Rig Out Set Node Ids", + ) + ValidateSkeletonRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Skeleton Rig Out Set Node Ids", + ) + # Rig - END + ValidateCameraAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Camera Attributes" + ) + ValidateAssemblyName: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Name" + ) + ValidateAssemblyNamespaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Namespaces" + ) + ValidateAssemblyModelTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Model Transforms" + ) + ValidateAssRelativePaths: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Ass Relative Paths" + 
)
+    ValidateInstancerContent: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Instancer Content"
+    )
+    ValidateInstancerFrameRanges: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Instancer Cache Frame Ranges"
+    )
+    ValidateNoDefaultCameras: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate No Default Cameras"
+    )
+    ValidateUnrealUpAxis: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Unreal Up-Axis Check"
+    )
+    ValidateCameraContents: ValidateCameraContentsModel = Field(
+        default_factory=ValidateCameraContentsModel,
+        title="Validate Camera Content"
+    )
+    ExtractPlayblast: ExtractPlayblastSetting = Field(
+        default_factory=ExtractPlayblastSetting,
+        title="Extract Playblast Settings",
+        section="Extractors"
+    )
+    ExtractMayaSceneRaw: ExtractMayaSceneRawModel = Field(
+        default_factory=ExtractMayaSceneRawModel,
+        title="Maya Scene(Raw)"
+    )
+    ExtractCameraAlembic: ExtractCameraAlembicModel = Field(
+        default_factory=ExtractCameraAlembicModel,
+        title="Extract Camera Alembic"
+    )
+    ExtractGLB: ExtractGLBModel = Field(
+        default_factory=ExtractGLBModel,
+        title="Extract GLB"
+    )
+    ExtractLook: ExtractLookModel = Field(
+        default_factory=ExtractLookModel,
+        title="Extract Look"
+    )
+    ExtractGPUCache: ExtractGPUCacheModel = Field(
+        default_factory=ExtractGPUCacheModel,
+        title="Extract GPU Cache",
+    )
+
+
+DEFAULT_SUFFIX_NAMING = {
+    "mesh": ["_GEO", "_GES", "_GEP", "_OSD"],
+    "nurbsCurve": ["_CRV"],
+    "nurbsSurface": ["_NRB"],
+    "locator": ["_LOC"],
+    "group": ["_GRP"]
+}
+
+DEFAULT_PUBLISH_SETTINGS = {
+    "CollectMayaRender": {
+        "sync_workfile_version": False
+    },
+    "CollectFbxAnimation": {
+        "enabled": True
+    },
+    "CollectFbxCamera": {
+        "enabled": False
+    },
+    "CollectGLTF": {
+        "enabled": False
+    },
+    "ValidateInstanceInContext": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateFrameRange": {
+        "enabled": True,
+        "optional": True,
+        "active": True,
+        "exclude_product_types": [
+            "model",
+            "rig",
+            "staticMesh"
+        ]
+    },
+    "ValidateShaderName": {
+        "enabled": False,
+        "optional": True,
+        "active": True,
+        "regex": "(?P<asset>.*)_(.*)_SHD"
+    },
+    "ValidateShadingEngine": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateMayaColorSpace": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateAttributes": {
+        "enabled": False,
+        "attributes": "{}"
+    },
+    "ValidateLoadedPlugin": {
+        "enabled": False,
+        "optional": True,
+        "whitelist_native_plugins": False,
+        "authorized_plugins": []
+    },
+    "ValidateMayaUnits": {
+        "enabled": True,
+        "optional": False,
+        "validate_linear_units": True,
+        "linear_units": "cm",
+        "validate_angular_units": True,
+        "angular_units": "deg",
+        "validate_fps": True
+    },
+    "ValidateUnrealStaticMeshName": {
+        "enabled": True,
+        "optional": True,
+        "validate_mesh": False,
+        "validate_collision": True
+    },
+    "ValidateCycleError": {
+        "enabled": True,
+        "optional": False,
+        "families": [
+            "rig"
+        ]
+    },
+    "ValidatePluginPathAttributes": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "attribute": [
+            {"name": "AlembicNode", "value": "abc_File"},
+            {"name": "VRayProxy", "value": "fileName"},
+            {"name": "RenderManArchive", "value": "filename"},
+            {"name": "pgYetiMaya", "value": "cacheFileName"},
+            {"name": "aiStandIn", "value": "dso"},
+            {"name": "RedshiftSprite", "value": "tex0"},
+            {"name": "RedshiftBokeh", "value": "dofBokehImage"},
+            {"name": "RedshiftCameraMap", "value": "tex0"},
+            {"name": "RedshiftEnvironment", "value": "tex2"},
+            {"name": "RedshiftDomeLight", "value": "tex1"},
+            {"name": "RedshiftIESLight", "value": "profile"},
+            {"name": "RedshiftLightGobo", "value": "tex0"},
+            {"name": "RedshiftNormalMap", "value": "tex0"},
+            {"name": "RedshiftProxyMesh", "value": "fileName"},
+            {"name": "RedshiftVolumeShape", "value": "fileName"},
+            {"name": "VRayTexGLSL", "value": "fileName"},
+            {"name": "VRayMtlGLSL", "value": "fileName"},
+            {"name": "VRayVRmatMtl", "value": "fileName"},
+            {"name": "VRayPtex", "value": "ptexFile"},
+            {"name": "VRayLightIESShape", "value": "iesFile"},
+            {"name": "VRayMesh", "value": "materialAssignmentsFile"},
+            {"name": "VRayMtlOSL", "value": "fileName"},
+            {"name": "VRayTexOSL", "value": "fileName"},
+            {"name": "VRayTexOCIO", "value": "ocioConfigFile"},
+            {"name": "VRaySettingsNode", "value": "pmap_autoSaveFile2"},
+            {"name": "VRayScannedMtl", "value": "file"},
+            {"name": "VRayScene", "value": "parameterOverrideFilePath"},
+            {"name": "VRayMtlMDL", "value": "filename"},
+            {"name": "VRaySimbiont", "value": "file"},
+            {"name": "dlOpenVDBShape", "value": "filename"},
+            {"name": "pgYetiMayaShape", "value": "liveABCFilename"},
+            {"name": "gpuCache", "value": "cacheFileName"},
+        ]
+    },
+    "ValidateRenderSettings": {
+        "arnold_render_attributes": [],
+        "vray_render_attributes": [],
+        "redshift_render_attributes": [],
+        "renderman_render_attributes": []
+    },
+    "ValidateResolution": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateCurrentRenderLayerIsRenderable": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateGLSLMaterial": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateGLSLPlugin": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateRenderImageRule": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateRenderNoDefaultCameras": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateRenderSingleCamera": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateRenderLayerAOVs": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateStepSize": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateVRayDistributedRendering": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateVrayReferencedAOVs": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateVRayTranslatorEnabled": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateVrayProxy": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateVrayProxyMembers": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateYetiRenderScriptCallbacks": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateYetiRigCacheState": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateYetiRigInputShapesInInstance": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateYetiRigSettings": {
+        "enabled": True,
+        "optional": False,
+        "active": True
+    },
+    "ValidateModelName": {
+        "enabled": False,
+        "database": True,
+        "material_file": {
+            "windows": "",
+            "darwin": "",
+            "linux": ""
+        },
+        "regex": "(.*)_(\\d)*_(?P<shader>.*)_(GEO)",
+        "top_level_regex": ".*_GRP"
+    },
+    "ValidateModelContent": {
+        "enabled": True,
+        "optional": False,
"validate_top_group": True + }, + "ValidateTransformNamingSuffix": { + "enabled": True, + "optional": True, + "SUFFIX_NAMING_TABLE": json.dumps(DEFAULT_SUFFIX_NAMING, indent=4), + "ALLOW_IF_NOT_IN_SUFFIX_TABLE": True + }, + "ValidateColorSets": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasOverlappingUVs": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshArnoldAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshShaderConnections": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshSingleUVSet": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshHasUVs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshLaminaFaces": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNgons": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNonManifold": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateMeshNonZeroEdgeLength": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNormalsUnlocked": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshUVSetMap1": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshVerticesHaveEdges": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNoAnimation": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoNamespace": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoNullTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoUnknownNodes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNodeNoGhosting": { + "enabled": False, + "optional": False, + "active": True + }, + "ValidateShapeDefaultNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeRenderStats": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateTransformZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateUniqueNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoVRayMesh": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealMeshTriangulated": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAlembicVisibleOnly": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractProxyAlembic": { + "enabled": True, + "families": [ + "proxyAbc" + ] + }, + "ExtractAlembic": { + "enabled": True, + "families": [ + "pointcache", + "model", + "vrayproxy.alembic" + ] + }, + "ExtractObj": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigContents": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigJointsHidden": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAnimatedReferenceRig": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAnimationContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateOutRelatedNodeIds": { + "enabled": True, + "optional": False, + 
"active": True + }, + "ValidateRigControllersArnoldAttributes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkeletalMeshHierarchy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkeletonRigContents": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSkeletonRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateSkinclusterDeformerSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigOutSetNodeIds": { + "enabled": True, + "optional": False, + "allow_history_only": False + }, + "ValidateSkeletonRigOutSetNodeIds": { + "enabled": False, + "optional": False, + "allow_history_only": False + }, + "ValidateSkeletonRigOutputIds": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateSkeletonTopGroupHierarchy": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateCameraAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssemblyName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAssemblyNamespaces": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssemblyModelTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssRelativePaths": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerFrameRanges": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealUpAxis": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateCameraContents": { + "enabled": True, + "optional": False, + "validate_shapes": True + }, + "ExtractPlayblast": DEFAULT_PLAYBLAST_SETTING, + "ExtractMayaSceneRaw": { + "enabled": True, + "add_for_families": [ + "layout" + ] + }, + "ExtractCameraAlembic": { + "enabled": True, + "optional": True, + "active": True, + "bake_attributes": "[]" + }, + "ExtractGLB": { + "enabled": True, + "active": True, + "ogsfx_path": "/maya2glTF/PBR/shaders/glTF_PBR.ogsfx" + }, + "ExtractLook": { + "maketx_arguments": [] + }, + "ExtractGPUCache": { + "enabled": False, + "families": [ + "model", + "animation", + "pointcache" + ], + "step": 1.0, + "stepSave": 1, + "optimize": True, + "optimizationThreshold": 40000, + "optimizeAnimationsForMotionBlur": True, + "writeMaterials": True, + "useBaseTessellation": True + } +} diff --git a/server_addon/maya/server/settings/render_settings.py b/server_addon/maya/server/settings/render_settings.py new file mode 100644 index 0000000000..b6163a04ce --- /dev/null +++ b/server_addon/maya/server/settings/render_settings.py @@ -0,0 +1,500 @@ +"""Providing models and values for Maya Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". 
(dot)"} + ] + + +def arnold_image_format_enum(): + """Return enumerator for Arnold output formats.""" + return [ + {"label": "jpeg", "value": "jpeg"}, + {"label": "png", "value": "png"}, + {"label": "deepexr", "value": "deep exr"}, + {"label": "tif", "value": "tif"}, + {"label": "exr", "value": "exr"}, + {"label": "maya", "value": "maya"}, + {"label": "mtoa_shaders", "value": "mtoa_shaders"} + ] + + +def arnold_aov_list_enum(): + """Return enumerator for Arnold AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "ID", "label": "ID"}, + {"value": "N", "label": "N"}, + {"value": "P", "label": "P"}, + {"value": "Pref", "label": "Pref"}, + {"value": "RGBA", "label": "RGBA"}, + {"value": "Z", "label": "Z"}, + {"value": "albedo", "label": "albedo"}, + {"value": "background", "label": "background"}, + {"value": "coat", "label": "coat"}, + {"value": "coat_albedo", "label": "coat_albedo"}, + {"value": "coat_direct", "label": "coat_direct"}, + {"value": "coat_indirect", "label": "coat_indirect"}, + {"value": "cputime", "label": "cputime"}, + {"value": "crypto_asset", "label": "crypto_asset"}, + {"value": "crypto_material", "label": "cypto_material"}, + {"value": "crypto_object", "label": "crypto_object"}, + {"value": "diffuse", "label": "diffuse"}, + {"value": "diffuse_albedo", "label": "diffuse_albedo"}, + {"value": "diffuse_direct", "label": "diffuse_direct"}, + {"value": "diffuse_indirect", "label": "diffuse_indirect"}, + {"value": "direct", "label": "direct"}, + {"value": "emission", "label": "emission"}, + {"value": "highlight", "label": "highlight"}, + {"value": "indirect", "label": "indirect"}, + {"value": "motionvector", "label": "motionvector"}, + {"value": "opacity", "label": "opacity"}, + {"value": "raycount", "label": "raycount"}, + {"value": "rim_light", "label": "rim_light"}, + {"value": "shadow", "label": "shadow"}, + {"value": "shadow_diff", "label": "shadow_diff"}, + {"value": "shadow_mask", "label": "shadow_mask"}, + {"value": "shadow_matte", "label": "shadow_matte"}, + {"value": "sheen", "label": "sheen"}, + {"value": "sheen_albedo", "label": "sheen_albedo"}, + {"value": "sheen_direct", "label": "sheen_direct"}, + {"value": "sheen_indirect", "label": "sheen_indirect"}, + {"value": "specular", "label": "specular"}, + {"value": "specular_albedo", "label": "specular_albedo"}, + {"value": "specular_direct", "label": "specular_direct"}, + {"value": "specular_indirect", "label": "specular_indirect"}, + {"value": "sss", "label": "sss"}, + {"value": "sss_albedo", "label": "sss_albedo"}, + {"value": "sss_direct", "label": "sss_direct"}, + {"value": "sss_indirect", "label": "sss_indirect"}, + {"value": "transmission", "label": "transmission"}, + {"value": "transmission_albedo", "label": "transmission_albedo"}, + {"value": "transmission_direct", "label": "transmission_direct"}, + {"value": "transmission_indirect", "label": "transmission_indirect"}, + {"value": "volume", "label": "volume"}, + {"value": "volume_Z", "label": "volume_Z"}, + {"value": "volume_albedo", "label": "volume_albedo"}, + {"value": "volume_direct", "label": "volume_direct"}, + {"value": "volume_indirect", "label": "volume_indirect"}, + {"value": "volume_opacity", "label": "volume_opacity"}, + ] + + +def vray_image_output_enum(): + """Return output format for Vray enumerator.""" + return [ + {"label": "png", "value": "png"}, + {"label": "jpg", "value": "jpg"}, + {"label": "vrimg", "value": "vrimg"}, + 
{"label": "hdr", "value": "hdr"}, + {"label": "exr", "value": "exr"}, + {"label": "exr (multichannel)", "value": "exr (multichannel)"}, + {"label": "exr (deep)", "value": "exr (deep)"}, + {"label": "tga", "value": "tga"}, + {"label": "bmp", "value": "bmp"}, + {"label": "sgi", "value": "sgi"} + ] + + +def vray_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "atmosphereChannel", "label": "atmosphere"}, + {"value": "backgroundChannel", "label": "background"}, + {"value": "bumpNormalsChannel", "label": "bumpnormals"}, + {"value": "causticsChannel", "label": "caustics"}, + {"value": "coatFilterChannel", "label": "coat_filter"}, + {"value": "coatGlossinessChannel", "label": "coatGloss"}, + {"value": "coatReflectionChannel", "label": "coat_reflection"}, + {"value": "vrayCoatChannel", "label": "coat_specular"}, + {"value": "CoverageChannel", "label": "coverage"}, + {"value": "cryptomatteChannel", "label": "cryptomatte"}, + {"value": "customColor", "label": "custom_color"}, + {"value": "drBucketChannel", "label": "DR"}, + {"value": "denoiserChannel", "label": "denoiser"}, + {"value": "diffuseChannel", "label": "diffuse"}, + {"value": "ExtraTexElement", "label": "extraTex"}, + {"value": "giChannel", "label": "GI"}, + {"value": "LightMixElement", "label": "None"}, + {"value": "lightingChannel", "label": "lighting"}, + {"value": "LightingAnalysisChannel", "label": "LightingAnalysis"}, + {"value": "materialIDChannel", "label": "materialID"}, + {"value": "MaterialSelectElement", "label": "materialSelect"}, + {"value": "matteShadowChannel", "label": "matteShadow"}, + {"value": "MultiMatteElement", "label": "multimatte"}, + {"value": "multimatteIDChannel", "label": "multimatteID"}, + {"value": "normalsChannel", "label": "normals"}, + {"value": "nodeIDChannel", "label": "objectId"}, + {"value": "objectSelectChannel", "label": "objectSelect"}, + {"value": "rawCoatFilterChannel", "label": "raw_coat_filter"}, + {"value": "rawCoatReflectionChannel", "label": "raw_coat_reflection"}, + {"value": "rawDiffuseFilterChannel", "label": "rawDiffuseFilter"}, + {"value": "rawGiChannel", "label": "rawGI"}, + {"value": "rawLightChannel", "label": "rawLight"}, + {"value": "rawReflectionChannel", "label": "rawReflection"}, + { + "value": "rawReflectionFilterChannel", + "label": "rawReflectionFilter" + }, + {"value": "rawRefractionChannel", "label": "rawRefraction"}, + { + "value": "rawRefractionFilterChannel", + "label": "rawRefractionFilter" + }, + {"value": "rawShadowChannel", "label": "rawShadow"}, + {"value": "rawSheenFilterChannel", "label": "raw_sheen_filter"}, + { + "value": "rawSheenReflectionChannel", + "label": "raw_sheen_reflection" + }, + {"value": "rawTotalLightChannel", "label": "rawTotalLight"}, + {"value": "reflectIORChannel", "label": "reflIOR"}, + {"value": "reflectChannel", "label": "reflect"}, + {"value": "reflectionFilterChannel", "label": "reflectionFilter"}, + {"value": "reflectGlossinessChannel", "label": "reflGloss"}, + {"value": "refractChannel", "label": "refract"}, + {"value": "refractionFilterChannel", "label": "refractionFilter"}, + {"value": "refractGlossinessChannel", "label": "refrGloss"}, + {"value": "renderIDChannel", "label": "renderId"}, + {"value": "FastSSS2Channel", "label": "SSS"}, + {"value": "sampleRateChannel", "label": "sampleRate"}, + {"value": "samplerInfo", "label": "samplerInfo"}, + {"value": 
"selfIllumChannel", "label": "selfIllum"}, + {"value": "shadowChannel", "label": "shadow"}, + {"value": "sheenFilterChannel", "label": "sheen_filter"}, + {"value": "sheenGlossinessChannel", "label": "sheenGloss"}, + {"value": "sheenReflectionChannel", "label": "sheen_reflection"}, + {"value": "vraySheenChannel", "label": "sheen_specular"}, + {"value": "specularChannel", "label": "specular"}, + {"value": "Toon", "label": "Toon"}, + {"value": "toonLightingChannel", "label": "toonLighting"}, + {"value": "toonSpecularChannel", "label": "toonSpecular"}, + {"value": "totalLightChannel", "label": "totalLight"}, + {"value": "unclampedColorChannel", "label": "unclampedColor"}, + {"value": "VRScansPaintMaskChannel", "label": "VRScansPaintMask"}, + {"value": "VRScansZoneMaskChannel", "label": "VRScansZoneMask"}, + {"value": "velocityChannel", "label": "velocity"}, + {"value": "zdepthChannel", "label": "zDepth"}, + {"value": "LightSelectElement", "label": "lightselect"}, + ] + + +def redshift_engine_enum(): + """Get Redshift engine type enumerator.""" + return [ + {"value": "0", "label": "None"}, + {"value": "1", "label": "Photon Map"}, + {"value": "2", "label": "Irradiance Cache"}, + {"value": "3", "label": "Brute Force"} + ] + + +def redshift_image_output_enum(): + """Return output format for Redshift enumerator.""" + return [ + {"value": "iff", "label": "Maya IFF"}, + {"value": "exr", "label": "OpenEXR"}, + {"value": "tif", "label": "TIFF"}, + {"value": "png", "label": "PNG"}, + {"value": "tga", "label": "Targa"}, + {"value": "jpg", "label": "JPEG"} + ] + + +def redshift_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + return [ + {"value": "empty", "label": "< none >"}, + {"value": "AO", "label": "Ambient Occlusion"}, + {"value": "Background", "label": "Background"}, + {"value": "Beauty", "label": "Beauty"}, + {"value": "BumpNormals", "label": "Bump Normals"}, + {"value": "Caustics", "label": "Caustics"}, + {"value": "CausticsRaw", "label": "Caustics Raw"}, + {"value": "Cryptomatte", "label": "Cryptomatte"}, + {"value": "Custom", "label": "Custom"}, + {"value": "Z", "label": "Depth"}, + {"value": "DiffuseFilter", "label": "Diffuse Filter"}, + {"value": "DiffuseLighting", "label": "Diffuse Lighting"}, + {"value": "DiffuseLightingRaw", "label": "Diffuse Lighting Raw"}, + {"value": "Emission", "label": "Emission"}, + {"value": "GI", "label": "Global Illumination"}, + {"value": "GIRaw", "label": "Global Illumination Raw"}, + {"value": "Matte", "label": "Matte"}, + {"value": "MotionVectors", "label": "Ambient Occlusion"}, + {"value": "N", "label": "Normals"}, + {"value": "ID", "label": "ObjectID"}, + {"value": "ObjectBumpNormal", "label": "Object-Space Bump Normals"}, + {"value": "ObjectPosition", "label": "Object-Space Positions"}, + {"value": "PuzzleMatte", "label": "Puzzle Matte"}, + {"value": "Reflections", "label": "Reflections"}, + {"value": "ReflectionsFilter", "label": "Reflections Filter"}, + {"value": "ReflectionsRaw", "label": "Reflections Raw"}, + {"value": "Refractions", "label": "Refractions"}, + {"value": "RefractionsFilter", "label": "Refractions Filter"}, + {"value": "RefractionsRaw", "label": "Refractions Filter"}, + {"value": "Shadows", "label": "Shadows"}, + {"value": "SpecularLighting", "label": "Specular Lighting"}, + {"value": "SSS", "label": "Sub Surface Scatter"}, + {"value": "SSSRaw", "label": "Sub Surface Scatter Raw"}, + { + "value": "TotalDiffuseLightingRaw", + "label": 
"Total Diffuse Lighting Raw" + }, + { + "value": "TotalTransLightingRaw", + "label": "Total Translucency Filter" + }, + {"value": "TransTint", "label": "Translucency Filter"}, + {"value": "TransGIRaw", "label": "Translucency Lighting Raw"}, + {"value": "VolumeFogEmission", "label": "Volume Fog Emission"}, + {"value": "VolumeFogTint", "label": "Volume Fog Tint"}, + {"value": "VolumeLighting", "label": "Volume Lighting"}, + {"value": "P", "label": "World Position"}, + ] + + +class AdditionalOptionsModel(BaseSettingsModel): + """Additional Option""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field("", title="Value") + + +class ArnoldSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + image_format: str = Field( + enum_resolver=arnold_image_format_enum, title="Output Image Format") + multilayer_exr: bool = Field(title="Multilayer (exr)") + tiled: bool = Field(title="Tiled (tif, exr)") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=arnold_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Arnold Options", + description=( + "Add additional options - put attribute and value, like AASamples" + ) + ) + + +class VraySettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # engine was str because of JSON limitation (key must be string) + engine: str = Field( + enum_resolver=lambda: [ + {"label": "V-Ray", "value": "1"}, + {"label": "V-Ray GPU", "value": "2"} + ], + title="Production Engine" + ) + image_format: str = Field( + enum_resolver=vray_image_output_enum, + title="Output Image Format" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=vray_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Vray Options", + description=( + "Add additional options - put attribute and value," + " like aaFilterSize" + ) + ) + + +class RedshiftSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # both engines are using the same enumerator, + # both were originally str because of JSON limitation. 
+    primary_gi_engine: str = Field(
+        enum_resolver=redshift_engine_enum,
+        title="Primary GI Engine"
+    )
+    secondary_gi_engine: str = Field(
+        enum_resolver=redshift_engine_enum,
+        title="Secondary GI Engine"
+    )
+    image_format: str = Field(
+        enum_resolver=redshift_image_output_enum,
+        title="Output Image Format"
+    )
+    multilayer_exr: bool = Field(title="Multilayer (exr)")
+    force_combine: bool = Field(title="Force combine beauty and AOVs")
+    aov_list: list[str] = Field(
+        default_factory=list,
+        enum_resolver=redshift_aov_list_enum,
+        title="AOVs to create"
+    )
+    additional_options: list[AdditionalOptionsModel] = Field(
+        default_factory=list,
+        title="Additional Redshift Options",
+        description=(
+            "Add additional options - put attribute and value,"
+            " like reflectionMaxTraceDepth"
+        )
+    )
+
+
+def renderman_display_filters():
+    return [
+        "PxrBackgroundDisplayFilter",
+        "PxrCopyAOVDisplayFilter",
+        "PxrEdgeDetect",
+        "PxrFilmicTonemapperDisplayFilter",
+        "PxrGradeDisplayFilter",
+        "PxrHalfBufferErrorFilter",
+        "PxrImageDisplayFilter",
+        "PxrLightSaturation",
+        "PxrShadowDisplayFilter",
+        "PxrStylizedHatching",
+        "PxrStylizedLines",
+        "PxrStylizedToon",
+        "PxrWhitePointDisplayFilter"
+    ]
+
+
+def renderman_sample_filters_enum():
+    return [
+        "PxrBackgroundSampleFilter",
+        "PxrCopyAOVSampleFilter",
+        "PxrCryptomatte",
+        "PxrFilmicTonemapperSampleFilter",
+        "PxrGradeSampleFilter",
+        "PxrShadowFilter",
+        "PxrWatermarkFilter",
+        "PxrWhitePointSampleFilter"
+    ]
+
+
+class RendermanSettingsModel(BaseSettingsModel):
+    image_prefix: str = Field(
+        "", title="Image prefix template")
+    image_dir: str = Field(
+        "", title="Image Output Directory")
+    display_filters: list[str] = Field(
+        default_factory=list,
+        title="Display Filters",
+        enum_resolver=renderman_display_filters
+    )
+    imageDisplay_dir: str = Field(
+        "", title="Image Display Filter Directory")
+    sample_filters: list[str] = Field(
+        default_factory=list,
+        title="Sample Filters",
+        enum_resolver=renderman_sample_filters_enum
+    )
+    cryptomatte_dir: str = Field(
+        "", title="Cryptomatte Output Directory")
+    watermark_dir: str = Field(
+        "", title="Watermark Filter Directory")
+    additional_options: list[AdditionalOptionsModel] = Field(
+        default_factory=list,
+        title="Additional Renderer Options"
+    )
+
+
+class RenderSettingsModel(BaseSettingsModel):
+    apply_render_settings: bool = Field(
+        title="Apply Render Settings on creation"
+    )
+    default_render_image_folder: str = Field(
+        title="Default render image folder"
+    )
+    enable_all_lights: bool = Field(
+        title="Include all lights in Render Setup Layers by default"
+    )
+    aov_separator: str = Field(
+        "underscore",
+        title="AOV Separator character",
+        enum_resolver=aov_separators_enum
+    )
+    reset_current_frame: bool = Field(
+        title="Reset Current Frame")
+    remove_aovs: bool = Field(
+        title="Remove existing AOVs")
+    arnold_renderer: ArnoldSettingsModel = Field(
+        default_factory=ArnoldSettingsModel,
+        title="Arnold Renderer")
+    vray_renderer: VraySettingsModel = Field(
+        default_factory=VraySettingsModel,
+        title="Vray Renderer")
+    redshift_renderer: RedshiftSettingsModel = Field(
+        default_factory=RedshiftSettingsModel,
+        title="Redshift Renderer")
+    renderman_renderer: RendermanSettingsModel = Field(
+        default_factory=RendermanSettingsModel,
+        title="Renderman Renderer")
+
+
+DEFAULT_RENDER_SETTINGS = {
+    "apply_render_settings": True,
+    "default_render_image_folder": "renders/maya",
+    "enable_all_lights": True,
+    "aov_separator": "underscore",
+    "reset_current_frame": False,
"remove_aovs": False, + "arnold_renderer": { + "image_prefix": "//_", + "image_format": "exr", + "multilayer_exr": True, + "tiled": True, + "aov_list": [], + "additional_options": [] + }, + "vray_renderer": { + "image_prefix": "//", + "engine": "1", + "image_format": "exr", + "aov_list": [], + "additional_options": [] + }, + "redshift_renderer": { + "image_prefix": "//", + "primary_gi_engine": "0", + "secondary_gi_engine": "0", + "image_format": "exr", + "multilayer_exr": True, + "force_combine": True, + "aov_list": [], + "additional_options": [] + }, + "renderman_renderer": { + "image_prefix": "{aov_separator}..", + "image_dir": "/", + "display_filters": [], + "imageDisplay_dir": "/{aov_separator}imageDisplayFilter..", + "sample_filters": [], + "cryptomatte_dir": "/{aov_separator}cryptomatte..", + "watermark_dir": "/{aov_separator}watermarkFilter..", + "additional_options": [] + } +} diff --git a/server_addon/maya/server/settings/scriptsmenu.py b/server_addon/maya/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..82c1c2e53c --- /dev/null +++ b/server_addon/maya/server/settings/scriptsmenu.py @@ -0,0 +1,43 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + tags: list[str] = Field(default_factory=list, title="A list of tags") + + +class ScriptsmenuModel(BaseSettingsModel): + _isGroup = True + + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Menu Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "command": "import openpype.hosts.maya.api.commands as op_cmds; op_cmds.edit_shader_definitions()", + "sourcetype": "python", + "title": "Edit shader name definitions", + "tooltip": "Edit shader name definitions used in validation and renaming.", + "tags": [ + "pipeline", + "shader" + ] + } + ] +} diff --git a/server_addon/maya/server/settings/templated_workfile_settings.py b/server_addon/maya/server/settings/templated_workfile_settings.py new file mode 100644 index 0000000000..ef81b31a07 --- /dev/null +++ b/server_addon/maya/server/settings/templated_workfile_settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class WorkfileBuildProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + path: str = Field("", title="Path to template") + + +class TemplatedProfilesModel(BaseSettingsModel): + profiles: list[WorkfileBuildProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_TEMPLATED_WORKFILE_SETTINGS = { + "profiles": [] +} diff --git a/server_addon/maya/server/settings/workfile_build_settings.py b/server_addon/maya/server/settings/workfile_build_settings.py new file mode 100644 index 0000000000..dc56d1a320 --- /dev/null +++ b/server_addon/maya/server/settings/workfile_build_settings.py @@ -0,0 +1,131 @@ +from pydantic import Field +from ayon_server.settings import 
+
+
+class ContextItemModel(BaseSettingsModel):
+ _layout = "expanded"
+ product_name_filters: list[str] = Field(
+ default_factory=list, title="Product name Filters")
+ product_types: list[str] = Field(
+ default_factory=list, title="Product types")
+ repre_names: list[str] = Field(
+ default_factory=list, title="Repre Names")
+ loaders: list[str] = Field(
+ default_factory=list, title="Loaders")
+
+
+class WorkfileSettingModel(BaseSettingsModel):
+ _layout = "expanded"
+ task_types: list[str] = Field(
+ default_factory=list,
+ enum_resolver=task_types_enum,
+ title="Task types")
+ tasks: list[str] = Field(
+ default_factory=list,
+ title="Task names")
+ current_context: list[ContextItemModel] = Field(
+ default_factory=list,
+ title="Current Context")
+ linked_assets: list[ContextItemModel] = Field(
+ default_factory=list,
+ title="Linked Assets")
+
+
+class ProfilesModel(BaseSettingsModel):
+ profiles: list[WorkfileSettingModel] = Field(
+ default_factory=list,
+ title="Profiles"
+ )
+
+
+DEFAULT_WORKFILE_SETTING = {
+ "profiles": [
+ {
+ "task_types": [],
+ "tasks": [
+ "Lighting"
+ ],
+ "current_context": [
+ {
+ "product_name_filters": [
+ ".+[Mm]ain"
+ ],
+ "product_types": [
+ "model"
+ ],
+ "repre_names": [
+ "abc",
+ "ma"
+ ],
+ "loaders": [
+ "ReferenceLoader"
+ ]
+ },
+ {
+ "product_name_filters": [],
+ "product_types": [
+ "animation",
+ "pointcache",
+ "proxyAbc"
+ ],
+ "repre_names": [
+ "abc"
+ ],
+ "loaders": [
+ "ReferenceLoader"
+ ]
+ },
+ {
+ "product_name_filters": [],
+ "product_types": [
+ "rendersetup"
+ ],
+ "repre_names": [
+ "json"
+ ],
+ "loaders": [
+ "RenderSetupLoader"
+ ]
+ },
+ {
+ "product_name_filters": [],
+ "product_types": [
+ "camera"
+ ],
+ "repre_names": [
+ "abc"
+ ],
+ "loaders": [
+ "ReferenceLoader"
+ ]
+ }
+ ],
+ "linked_assets": [
+ {
+ "product_name_filters": [],
+ "product_types": [
+ "setdress"
+ ],
+ "repre_names": [
+ "ma"
+ ],
+ "loaders": [
+ "ReferenceLoader"
+ ]
+ },
+ {
+ "product_name_filters": [],
+ "product_types": [
+ "ArnoldStandin"
+ ],
+ "repre_names": [
+ "ass"
+ ],
+ "loaders": [
+ "assLoader"
+ ]
+ }
+ ]
+ }
+ ]
+}
diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py
new file mode 100644
index 0000000000..90ce344d3e
--- /dev/null
+++ b/server_addon/maya/server/version.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+"""Package declaring addon version."""
+__version__ = "0.1.5"
diff --git a/server_addon/muster/server/__init__.py b/server_addon/muster/server/__init__.py
new file mode 100644
index 0000000000..2cb8943554
--- /dev/null
+++ b/server_addon/muster/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import MusterSettings, DEFAULT_VALUES
+
+
+class MusterAddon(BaseServerAddon):
+ name = "muster"
+ version = __version__
+ title = "Muster"
+ settings_model: Type[MusterSettings] = MusterSettings
+
+ async def get_default_settings(self):
+ settings_model_cls = self.get_settings_model()
+ return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/muster/server/settings.py b/server_addon/muster/server/settings.py
new file mode 100644
index 0000000000..e37c762870
--- /dev/null
+++ b/server_addon/muster/server/settings.py
@@ -0,0 +1,41 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class TemplatesMapping(BaseSettingsModel):
+ _layout = "compact"
+ name: str = Field(title="Name")
+ value: int = Field(title="Mapping")
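+ # The name/value pairs link a render engine name to its numeric Muster
+ # template id, as illustrated by DEFAULT_VALUES below (e.g. "vray" -> 37).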
+
+
+class MusterSettings(BaseSettingsModel):
+ enabled: bool = True
+ MUSTER_REST_URL: str = Field(
+ "",
+ title="Muster REST URL",
+ scope=["studio"],
+ )
+
+ templates_mapping: list[TemplatesMapping] = Field(
+ default_factory=list,
+ title="Templates mapping",
+ )
+
+
+DEFAULT_VALUES = {
+ "enabled": False,
+ "MUSTER_REST_URL": "http://127.0.0.1:9890",
+ "templates_mapping": [
+ {"name": "file_layers", "value": 7},
+ {"name": "mentalray", "value": 2},
+ {"name": "mentalray_sf", "value": 6},
+ {"name": "redshift", "value": 55},
+ {"name": "renderman", "value": 29},
+ {"name": "software", "value": 1},
+ {"name": "software_sf", "value": 5},
+ {"name": "turtle", "value": 10},
+ {"name": "vector", "value": 4},
+ {"name": "vray", "value": 37},
+ {"name": "ffmpeg", "value": 48}
+ ]
+}
diff --git a/server_addon/muster/server/version.py b/server_addon/muster/server/version.py
new file mode 100644
index 0000000000..485f44ac21
--- /dev/null
+++ b/server_addon/muster/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.1"
diff --git a/server_addon/nuke/server/__init__.py b/server_addon/nuke/server/__init__.py
new file mode 100644
index 0000000000..032ceea5fb
--- /dev/null
+++ b/server_addon/nuke/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import NukeSettings, DEFAULT_VALUES
+
+
+class NukeAddon(BaseServerAddon):
+ name = "nuke"
+ title = "Nuke"
+ version = __version__
+ settings_model: Type[NukeSettings] = NukeSettings
+
+ async def get_default_settings(self):
+ settings_model_cls = self.get_settings_model()
+ return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/nuke/server/settings/__init__.py b/server_addon/nuke/server/settings/__init__.py
new file mode 100644
index 0000000000..1e58865395
--- /dev/null
+++ b/server_addon/nuke/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+ NukeSettings,
+ DEFAULT_VALUES,
+)
+
+
+__all__ = (
+ "NukeSettings",
+ "DEFAULT_VALUES",
+)
diff --git a/server_addon/nuke/server/settings/common.py b/server_addon/nuke/server/settings/common.py
new file mode 100644
index 0000000000..2bc3c9be81
--- /dev/null
+++ b/server_addon/nuke/server/settings/common.py
@@ -0,0 +1,136 @@
+import json
+from pydantic import Field
+from ayon_server.exceptions import BadRequestException
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.types import (
+ ColorRGBA_float,
+ ColorRGB_uint8
+)
+
+
+def validate_json_dict(value):
+ if not value.strip():
+ return "{}"
+ try:
+ converted_value = json.loads(value)
+ success = isinstance(converted_value, dict)
+ except json.JSONDecodeError:
+ success = False
+
+ if not success:
+ raise BadRequestException(
+ "Value can't be parsed as a JSON object"
+ )
+ return value
+
+
+class Vector2d(BaseSettingsModel):
+ _layout = "compact"
+
+ x: float = Field(1.0, title="X")
+ y: float = Field(1.0, title="Y")
+
+
+class Vector3d(BaseSettingsModel):
+ _layout = "compact"
+
+ x: float = Field(1.0, title="X")
+ y: float = Field(1.0, title="Y")
+ z: float = Field(1.0, title="Z")
+
+
+class Box(BaseSettingsModel):
+ _layout = "compact"
+
+ x: float = Field(1.0, title="X")
+ y: float = Field(1.0, title="Y")
+ r: float = Field(1.0, title="R")
+ t: float = Field(1.0, title="T")
+
+
+def formatable_knob_type_enum():
+ return [
+ {"value": "text", "label": "Text"},
+ {"value": "number", "label": "Number"},
+ {"value": "decimal_number", "label": "Decimal number"},
{"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = "compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"}, + {"value": "box", "label": "Box"}, + {"value": "expression", "label": "Expression"} +] + + +class KnobModel(BaseSettingsModel): + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + box: Box = Field( + default_factory=Box, + title="Value" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Formatable" + ) + expression: str = Field( + "", + title="Expression" + ) diff --git a/server_addon/nuke/server/settings/create_plugins.py b/server_addon/nuke/server/settings/create_plugins.py new file mode 100644 index 0000000000..80aec51ae0 --- /dev/null +++ b/server_addon/nuke/server/settings/create_plugins.py @@ -0,0 +1,208 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) +from .common import KnobModel + + +def instance_attributes_enum(): + """Return create write instance attributes.""" + return [ + {"value": "reviewable", "label": "Reviewable"}, + {"value": "farm_rendering", "label": "Farm rendering"}, + {"value": "use_range_limit", "label": "Use range limit"} + ] + + +class PrenodeModel(BaseSettingsModel): + name: str = Field( + title="Node name" + ) + + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + + knobs: list[KnobModel] = Field( + default_factory=list, + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteRenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + prenodes: list[PrenodeModel] = Field( + default_factory=list, + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, 
value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWritePrerenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + prenodes: list[PrenodeModel] = Field( + default_factory=list, + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteImageModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + prenodes: list[PrenodeModel] = Field( + default_factory=list, + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateWriteRender: CreateWriteRenderModel = Field( + default_factory=CreateWriteRenderModel, + title="Create Write Render" + ) + CreateWritePrerender: CreateWritePrerenderModel = Field( + default_factory=CreateWritePrerenderModel, + title="Create Write Prerender" + ) + CreateWriteImage: CreateWriteImageModel = Field( + default_factory=CreateWriteImageModel, + title="Create Write Image" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateWriteRender": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ], + "prenodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependent": "", + "knobs": [ + { + "type": "text", + "name": "resize", + "text": "none" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + } + ] + } + ] + }, + "CreateWritePrerender": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Key01", + "Bg01", + "Fg01", + "Branch01", + "Part01" + ], + "instance_attributes": [ + "farm_rendering", + "use_range_limit" + ], + "prenodes": [] + }, + "CreateWriteImage": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{ext}", + "default_variants": [ + "StillFrame", + "MPFrame", + "LayoutFrame" + ], + "instance_attributes": [ + "use_range_limit" + ], + "prenodes": [ + { + "name": "FrameHold01", + "nodeclass": "FrameHold", + "dependent": "", + "knobs": [ + { + "type": "expression", + "name": "first_frame", + "expression": "parent.first" + } + ] + } + ] + } +} diff --git a/server_addon/nuke/server/settings/dirmap.py b/server_addon/nuke/server/settings/dirmap.py new file mode 100644 index 0000000000..7e3c443957 --- /dev/null +++ b/server_addon/nuke/server/settings/dirmap.py @@ -0,0 +1,34 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class DirmapPathsSubmodel(BaseSettingsModel): + _layout = 
"compact" + source_path: list[str] = Field( + default_factory=list, + title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, + title="Destination Paths" + ) + + +class DirmapSettings(BaseSettingsModel): + """Nuke color management project settings.""" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + paths: DirmapPathsSubmodel = Field( + default_factory=DirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +DEFAULT_DIRMAP_SETTINGS = { + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +} diff --git a/server_addon/nuke/server/settings/filters.py b/server_addon/nuke/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/nuke/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/nuke/server/settings/general.py b/server_addon/nuke/server/settings/general.py new file mode 100644 index 0000000000..bcbb183952 --- /dev/null +++ b/server_addon/nuke/server/settings/general.py @@ -0,0 +1,42 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class MenuShortcut(BaseSettingsModel): + """Nuke general project settings.""" + + create: str = Field( + title="Create..." + ) + publish: str = Field( + title="Publish..." + ) + load: str = Field( + title="Load..." + ) + manage: str = Field( + title="Manage..." + ) + build_workfile: str = Field( + title="Build Workfile..." 
+ ) + + +class GeneralSettings(BaseSettingsModel): + """Nuke general project settings.""" + + menu: MenuShortcut = Field( + default_factory=MenuShortcut, + title="Menu Shortcuts", + ) + + +DEFAULT_GENERAL_SETTINGS = { + "menu": { + "create": "ctrl+alt+c", + "publish": "ctrl+alt+p", + "load": "ctrl+alt+l", + "manage": "ctrl+alt+m", + "build_workfile": "ctrl+alt+b" + } +} diff --git a/server_addon/nuke/server/settings/gizmo.py b/server_addon/nuke/server/settings/gizmo.py new file mode 100644 index 0000000000..4cdd614da8 --- /dev/null +++ b/server_addon/nuke/server/settings/gizmo.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + + +class SubGizmoItem(BaseSettingsModel): + title: str = Field( + title="Label" + ) + sourcetype: str = Field( + title="Type of usage" + ) + command: str = Field( + title="Python command" + ) + icon: str = Field( + title="Icon Path" + ) + shortcut: str = Field( + title="Hotkey" + ) + + +class GizmoDefinitionItem(BaseSettingsModel): + gizmo_toolbar_path: str = Field( + title="Gizmo Menu" + ) + sub_gizmo_list: list[SubGizmoItem] = Field( + default_factory=list, title="Sub Gizmo List") + + +class GizmoItem(BaseSettingsModel): + """Nuke gizmo item """ + + toolbar_menu_name: str = Field( + title="Toolbar Menu Name" + ) + gizmo_source_dir: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Gizmo Directory Path" + ) + toolbar_icon_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Toolbar Icon Path" + ) + gizmo_definition: list[GizmoDefinitionItem] = Field( + default_factory=list, title="Gizmo Definition") + + +DEFAULT_GIZMO_ITEM = { + "toolbar_menu_name": "OpenPype Gizmo", + "gizmo_source_dir": { + "windows": [], + "darwin": [], + "linux": [] + }, + "toolbar_icon_path": { + "windows": "", + "darwin": "", + "linux": "" + }, + "gizmo_definition": [ + { + "gizmo_toolbar_path": "/path/to/menu", + "sub_gizmo_list": [ + { + "sourcetype": "python", + "title": "Gizmo Note", + "command": "nuke.nodes.StickyNote(label='You can create your own toolbar menu in the Nuke GizmoMenu of OpenPype')", + "icon": "", + "shortcut": "" + } + ] + } + ] +} diff --git a/server_addon/nuke/server/settings/imageio.py b/server_addon/nuke/server/settings/imageio.py new file mode 100644 index 0000000000..811b12104b --- /dev/null +++ b/server_addon/nuke/server/settings/imageio.py @@ -0,0 +1,373 @@ +from typing import Literal +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .common import KnobModel + + +class NodesModel(BaseSettingsModel): + _layout = "expanded" + plugins: list[str] = Field( + default_factory=list, + title="Used in plugins" + ) + nukeNodeClass: str = Field( + title="Nuke Node Class", + ) + + knobs: list[KnobModel] = Field( + default_factory=list, + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class NodesSetting(BaseSettingsModel): + # TODO: rename `requiredNodes` to `required_nodes` + requiredNodes: list[NodesModel] = Field( + title="Plugin required", + default_factory=list + ) + # TODO: rename `overrideNodes` to `override_nodes` + overrideNodes: list[NodesModel] = Field( + title="Plugin's node overrides", + default_factory=list + ) + + +def ocio_configs_switcher_enum(): + 
return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Nuke workfile colorspace preset. """ + + colorManagement: Literal["Nuke", "OCIO"] = Field( + title="Color Management" + ) + + OCIO_config: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + + workingSpaceLUT: str = Field( + title="Working Space" + ) + monitorLut: str = Field( + title="Monitor" + ) + + +class ReadColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ReadColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ViewProcessModel(BaseSettingsModel): + viewerProcess: str = Field( + title="Viewer Process Name" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + """Nuke color management project settings. """ + _isGroup: bool = True + + """# TODO: enhance settings with host api: + to restructure settings for simplification. + + now: nuke/imageio/viewer/viewerProcess + future: nuke/imageio/viewer + """ + activate_host_color_management: bool = Field( + True, title="Enable Color Management") + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + viewer: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Viewer", + description="""Viewer profile is used during + Creation of new viewer node at knob viewerProcess""" + ) + + """# TODO: enhance settings with host api: + to restructure settings for simplification. 
+ + now: nuke/imageio/baking/viewerProcess + future: nuke/imageio/baking + """ + baking: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Baking", + description="""Baking profile is used during + publishing baked colorspace data at knob viewerProcess""" + ) + + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + + nodes: NodesSetting = Field( + default_factory=NodesSetting, + title="Nodes" + ) + """# TODO: enhance settings with host api: + - [ ] old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - [ ] no need for `inputs` middle part. It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to read nodes via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "viewer": { + "viewerProcess": "sRGB" + }, + "baking": { + "viewerProcess": "rec709" + }, + "workfile": { + "colorManagement": "Nuke", + "OCIO_config": "nuke-default", + "workingSpaceLUT": "linear", + "monitorLut": "sRGB", + }, + "nodes": { + "requiredNodes": [ + { + "plugins": [ + "CreateWriteRender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 186, + 35, + 35 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWritePrerender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 171, + 171, + 10 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWriteImage" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "tiff" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit" + }, + { + "type": "text", + "name": "compression", + "text": "Deflate" + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 56, + 162, + 7 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "sRGB" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + } + ], + "overrideNodes": [] + }, + "regexInputs": { + "inputs": [ + { + "regex": "(beauty).*(?=.exr)", + "colorspace": "linear" + } + ] + } +} diff --git a/server_addon/nuke/server/settings/loader_plugins.py b/server_addon/nuke/server/settings/loader_plugins.py new file mode 100644 index 0000000000..51e2c2149b --- /dev/null +++ b/server_addon/nuke/server/settings/loader_plugins.py @@ -0,0 +1,72 @@ +from 
pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadImageModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + + +class LoadClipOptionsModel(BaseSettingsModel): + start_at_workfile: bool = Field( + title="Start at workfile's start frame" + ) + add_retime: bool = Field( + title="Add retime" + ) + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + options_defaults: LoadClipOptionsModel = Field( + default_factory=LoadClipOptionsModel, + title="Loader option defaults" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image" + ) + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadImage": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}" + }, + "LoadClip": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}", + "options_defaults": { + "start_at_workfile": True, + "add_retime": True + } + } +} diff --git a/server_addon/nuke/server/settings/main.py b/server_addon/nuke/server/settings/main.py new file mode 100644 index 0000000000..cdaaa3a9e2 --- /dev/null +++ b/server_addon/nuke/server/settings/main.py @@ -0,0 +1,126 @@ +from pydantic import validator, Field + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) + +from .general import ( + GeneralSettings, + DEFAULT_GENERAL_SETTINGS +) +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .dirmap import ( + DirmapSettings, + DEFAULT_DIRMAP_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .gizmo import ( + GizmoItem, + DEFAULT_GIZMO_ITEM +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .workfile_builder import ( + WorkfileBuilderModel, + DEFAULT_WORKFILE_BUILDER_SETTINGS +) +from .templated_workfile_build import ( + TemplatedWorkfileBuildModel +) +from .filters import PublishGUIFilterItemModel + + +class NukeSettings(BaseSettingsModel): + """Nuke addon settings.""" + + general: GeneralSettings = Field( + default_factory=GeneralSettings, + title="General", + ) + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + + dirmap: DirmapSettings = Field( + default_factory=DirmapSettings, + title="Nuke Directory Mapping", + ) + + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + + gizmo: list[GizmoItem] = Field( + default_factory=list, title="Gizmo Menu") + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins", + ) 
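+ # Each settings submodel ships its own DEFAULT_* dictionary; they are
+ # merged into the addon-wide DEFAULT_VALUES at the end of this file.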
+
+ load: LoaderPuginsModel = Field(
+ default_factory=LoaderPuginsModel,
+ title="Loader Plugins",
+ )
+
+ workfile_builder: WorkfileBuilderModel = Field(
+ default_factory=WorkfileBuilderModel,
+ title="Workfile Builder",
+ )
+
+ templated_workfile_build: TemplatedWorkfileBuildModel = Field(
+ title="Templated Workfile Build",
+ default_factory=TemplatedWorkfileBuildModel
+ )
+
+ filters: list[PublishGUIFilterItemModel] = Field(
+ default_factory=list
+ )
+
+ @validator("filters")
+ def ensure_unique_names(cls, value):
+ """Ensure name fields within the lists have unique names."""
+ ensure_unique_names(value)
+ return value
+
+
+DEFAULT_VALUES = {
+ "general": DEFAULT_GENERAL_SETTINGS,
+ "imageio": DEFAULT_IMAGEIO_SETTINGS,
+ "dirmap": DEFAULT_DIRMAP_SETTINGS,
+ "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS,
+ "gizmo": [DEFAULT_GIZMO_ITEM],
+ "create": DEFAULT_CREATE_SETTINGS,
+ "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS,
+ "load": DEFAULT_LOADER_PLUGINS_SETTINGS,
+ "workfile_builder": DEFAULT_WORKFILE_BUILDER_SETTINGS,
+ "templated_workfile_build": {
+ "profiles": []
+ },
+ "filters": []
+}
diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..692b2bd240
--- /dev/null
+++ b/server_addon/nuke/server/settings/publish_plugins.py
@@ -0,0 +1,561 @@
+from pydantic import validator, Field
+from ayon_server.settings import (
+ BaseSettingsModel,
+ ensure_unique_names,
+ task_types_enum
+)
+from .common import KnobModel, validate_json_dict
+
+
+def nuke_render_publish_types_enum():
+ """Return all nuke render families available in creators."""
+ return [
+ {"value": "render", "label": "Render"},
+ {"value": "prerender", "label": "Prerender"},
+ {"value": "image", "label": "Image"}
+ ]
+
+
+def nuke_product_types_enum():
+ """Return all nuke families available in creators."""
+ return [
+ {"value": "nukenodes", "label": "Nukenodes"},
+ {"value": "model", "label": "Model"},
+ {"value": "camera", "label": "Camera"},
+ {"value": "gizmo", "label": "Gizmo"},
+ {"value": "source", "label": "Source"}
+ ] + nuke_render_publish_types_enum()
+
+
+class NodeModel(BaseSettingsModel):
+ name: str = Field(
+ title="Node name"
+ )
+ nodeclass: str = Field(
+ "",
+ title="Node class"
+ )
+ dependent: str = Field(
+ "",
+ title="Incoming dependency"
+ )
+ knobs: list[KnobModel] = Field(
+ default_factory=list,
+ title="Knobs",
+ )
+
+ @validator("knobs")
+ def ensure_unique_names(cls, value):
+ """Ensure name fields within the lists have unique names."""
+ ensure_unique_names(value)
+ return value
+
+
+class ThumbnailRepositionNodeModel(BaseSettingsModel):
+ node_class: str = Field(title="Node class")
+ knobs: list[KnobModel] = Field(title="Knobs", default_factory=list)
+
+ @validator("knobs")
+ def ensure_unique_names(cls, value):
+ """Ensure name fields within the lists have unique names."""
+ ensure_unique_names(value)
+ return value
+
+
+class CollectInstanceDataModel(BaseSettingsModel):
+ sync_workfile_version_on_product_types: list[str] = Field(
+ default_factory=list,
+ enum_resolver=nuke_product_types_enum,
+ title="Sync workfile versions for families"
+ )
+
+
+class OptionalPluginModel(BaseSettingsModel):
+ enabled: bool = Field(True)
+ optional: bool = Field(title="Optional")
+ active: bool = Field(title="Active")
+
+
+class ValidateKnobsModel(BaseSettingsModel):
+ enabled: bool = Field(title="Enabled")
+ knobs: str = Field(
+ "{}",
+ title="Knobs",
+ widget="textarea",
+ )
+
+ @validator("knobs")
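+ # The knobs value is expected to parse into a JSON object keyed by
+ # product type, e.g. (mirroring the default further below):
+ # {"render": {"review": true}}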
+ def validate_json(cls, value):
+ return validate_json_dict(value)
+
+
+class ExtractThumbnailModel(BaseSettingsModel):
+ enabled: bool = Field(title="Enabled")
+ use_rendered: bool = Field(title="Use rendered images")
+ bake_viewer_process: bool = Field(title="Bake viewer process")
+ bake_viewer_input_process: bool = Field(title="Bake viewer input process")
+
+ nodes: list[NodeModel] = Field(
+ default_factory=list,
+ title="Nodes (deprecated)"
+ )
+ reposition_nodes: list[ThumbnailRepositionNodeModel] = Field(
+ title="Reposition nodes",
+ default_factory=list
+ )
+
+
+class ExtractReviewDataModel(BaseSettingsModel):
+ enabled: bool = Field(title="Enabled")
+
+
+class ExtractReviewDataLutModel(BaseSettingsModel):
+ enabled: bool = Field(title="Enabled")
+
+
+class BakingStreamFilterModel(BaseSettingsModel):
+ task_types: list[str] = Field(
+ default_factory=list,
+ title="Task types",
+ enum_resolver=task_types_enum
+ )
+ product_types: list[str] = Field(
+ default_factory=list,
+ enum_resolver=nuke_render_publish_types_enum,
+ title="Product types"
+ )
+ product_names: list[str] = Field(
+ default_factory=list, title="Product names")
+
+
+class ReformatNodesRepositionNodes(BaseSettingsModel):
+ node_class: str = Field(title="Node class")
+ knobs: list[KnobModel] = Field(
+ default_factory=list,
+ title="Node knobs")
+
+
+class ReformatNodesConfigModel(BaseSettingsModel):
+ """Only reposition nodes supported.
+
+ You can add multiple reformat nodes and set their knobs.
+ Order of reformat nodes is important. First reformat node will
+ be applied first and last reformat node will be applied last.
+ """
+ enabled: bool = Field(False)
+ reposition_nodes: list[ReformatNodesRepositionNodes] = Field(
+ default_factory=list,
+ title="Reposition nodes"
+ )
+
+
+class IntermediateOutputModel(BaseSettingsModel):
+ name: str = Field(title="Output name")
+ filter: BakingStreamFilterModel = Field(
+ title="Filter", default_factory=BakingStreamFilterModel)
+ read_raw: bool = Field(title="Read raw switch")
+ viewer_process_override: str = Field(title="Viewer process override")
+ bake_viewer_process: bool = Field(title="Bake viewer process")
+ bake_viewer_input_process: bool = Field(title="Bake viewer input process")
+ reformat_nodes_config: ReformatNodesConfigModel = Field(
+ default_factory=ReformatNodesConfigModel,
+ title="Reformat Nodes")
+ extension: str = Field(title="File extension")
+ add_custom_tags: list[str] = Field(
+ title="Custom tags", default_factory=list)
+
+
+class ExtractReviewDataMovModel(BaseSettingsModel):
+ """[deprecated] use Extract Review Data Baking
+ Streams instead.
+ """ + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[IntermediateOutputModel] = Field( + default_factory=list, + title="Baking streams" + ) + + +class ExtractReviewIntermediatesModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[IntermediateOutputModel] = Field( + default_factory=list, + title="Baking streams" + ) + + +class FSubmissionNoteModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FSubmistingForModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FVFXScopeOfWorkModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class ExctractSlateFrameParamModel(BaseSettingsModel): + f_submission_note: FSubmissionNoteModel = Field( + title="f_submission_note", + default_factory=FSubmissionNoteModel + ) + f_submitting_for: FSubmistingForModel = Field( + title="f_submitting_for", + default_factory=FSubmistingForModel + ) + f_vfx_scope_of_work: FVFXScopeOfWorkModel = Field( + title="f_vfx_scope_of_work", + default_factory=FVFXScopeOfWorkModel + ) + + +class ExtractSlateFrameModel(BaseSettingsModel): + viewer_lut_raw: bool = Field(title="Viewer lut raw") + key_value_mapping: ExctractSlateFrameParamModel = Field( + title="Key value mapping", + default_factory=ExctractSlateFrameParamModel + ) + + +class IncrementScriptVersionModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceData: CollectInstanceDataModel = Field( + title="Collect Instance Version", + default_factory=CollectInstanceDataModel, + section="Collectors" + ) + ValidateCorrectAssetContext: OptionalPluginModel = Field( + title="Validate Correct Folder Name", + default_factory=OptionalPluginModel, + section="Validators" + ) + ValidateContainers: OptionalPluginModel = Field( + title="Validate Containers", + default_factory=OptionalPluginModel + ) + ValidateKnobs: ValidateKnobsModel = Field( + title="Validate Knobs", + default_factory=ValidateKnobsModel + ) + ValidateOutputResolution: OptionalPluginModel = Field( + title="Validate Output Resolution", + default_factory=OptionalPluginModel + ) + ValidateGizmo: OptionalPluginModel = Field( + title="Validate Gizmo", + default_factory=OptionalPluginModel + ) + ValidateBackdrop: OptionalPluginModel = Field( + title="Validate Backdrop", + default_factory=OptionalPluginModel + ) + ValidateScript: OptionalPluginModel = Field( + title="Validate Script", + default_factory=OptionalPluginModel + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + title="Extract Thumbnail", + default_factory=ExtractThumbnailModel, + section="Extractors" + ) + ExtractReviewData: ExtractReviewDataModel = Field( + title="Extract Review Data", + default_factory=ExtractReviewDataModel + ) + ExtractReviewDataLut: ExtractReviewDataLutModel = Field( + title="Extract Review Data Lut", + default_factory=ExtractReviewDataLutModel + ) + ExtractReviewDataMov: ExtractReviewDataMovModel = Field( + title="Extract Review Data Mov", + default_factory=ExtractReviewDataMovModel + ) + ExtractReviewIntermediates: ExtractReviewIntermediatesModel = Field( + title="Extract Review Intermediates", + default_factory=ExtractReviewIntermediatesModel 
+ )
+ ExtractSlateFrame: ExtractSlateFrameModel = Field(
+ title="Extract Slate Frame",
+ default_factory=ExtractSlateFrameModel
+ )
+ IncrementScriptVersion: IncrementScriptVersionModel = Field(
+ title="Increment Workfile Version",
+ default_factory=IncrementScriptVersionModel,
+ section="Integrators"
+ )
+
+
+DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
+ "CollectInstanceData": {
+ "sync_workfile_version_on_product_types": [
+ "nukenodes",
+ "camera",
+ "gizmo",
+ "source",
+ "render",
+ "write"
+ ]
+ },
+ "ValidateCorrectAssetContext": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ValidateContainers": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ValidateKnobs": {
+ "enabled": False,
+ "knobs": "\n".join([
+ '{',
+ ' "render": {',
+ ' "review": true',
+ ' }',
+ '}'
+ ])
+ },
+ "ValidateOutputResolution": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ValidateGizmo": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ValidateBackdrop": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ValidateScript": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
+ "ExtractThumbnail": {
+ "enabled": True,
+ "use_rendered": True,
+ "bake_viewer_process": True,
+ "bake_viewer_input_process": True,
+ "nodes": [
+ {
+ "name": "Reformat01",
+ "nodeclass": "Reformat",
+ "dependent": "",
+ "knobs": [
+ {
+ "type": "text",
+ "name": "type",
+ "text": "to format"
+ },
+ {
+ "type": "text",
+ "name": "format",
+ "text": "HD_1080"
+ },
+ {
+ "type": "text",
+ "name": "filter",
+ "text": "Lanczos6"
+ },
+ {
+ "type": "boolean",
+ "name": "black_outside",
+ "boolean": True
+ },
+ {
+ "type": "boolean",
+ "name": "pbb",
+ "boolean": False
+ }
+ ]
+ }
+ ],
+ "reposition_nodes": [
+ {
+ "node_class": "Reformat",
+ "knobs": [
+ {
+ "type": "text",
+ "name": "type",
+ "text": "to format"
+ },
+ {
+ "type": "text",
+ "name": "format",
+ "text": "HD_1080"
+ },
+ {
+ "type": "text",
+ "name": "filter",
+ "text": "Lanczos6"
+ },
+ {
+ "type": "boolean",
+ "name": "black_outside",
+ "boolean": True
+ },
+ {
+ "type": "boolean",
+ "name": "pbb",
+ "boolean": False
+ }
+ ]
+ }
+ ]
+ },
+ "ExtractReviewData": {
+ "enabled": False
+ },
+ "ExtractReviewDataLut": {
+ "enabled": False
+ },
+ "ExtractReviewDataMov": {
+ "enabled": True,
+ "viewer_lut_raw": False,
+ "outputs": [
+ {
+ "name": "baking",
+ "filter": {
+ "task_types": [],
+ "product_types": [],
+ "product_names": []
+ },
+ "read_raw": False,
+ "viewer_process_override": "",
+ "bake_viewer_process": True,
+ "bake_viewer_input_process": True,
+ "reformat_nodes_config": {
+ "enabled": False,
+ "reposition_nodes": [
+ {
+ "node_class": "Reformat",
+ "knobs": [
+ {
+ "type": "text",
+ "name": "type",
+ "text": "to format"
+ },
+ {
+ "type": "text",
+ "name": "format",
+ "text": "HD_1080"
+ },
+ {
+ "type": "text",
+ "name": "filter",
+ "text": "Lanczos6"
+ },
+ {
+ "type": "boolean",
+ "name": "black_outside",
+ "boolean": True
+ },
+ {
+ "type": "boolean",
+ "name": "pbb",
+ "boolean": False
+ }
+ ]
+ }
+ ]
+ },
+ "extension": "mov",
+ "add_custom_tags": []
+ }
+ ]
+ },
+ "ExtractReviewIntermediates": {
+ "enabled": True,
+ "viewer_lut_raw": False,
+ "outputs": [
+ {
+ "name": "baking",
+ "filter": {
+ "task_types": [],
+ "product_types": [],
+ "product_names": []
+ },
+ "read_raw": False,
+ "viewer_process_override": "",
+ "bake_viewer_process": True,
+ "bake_viewer_input_process": True,
+ "reformat_nodes_config": {
+ "enabled": False,
+ "reposition_nodes": [
+ {
+ "node_class": "Reformat",
"Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + ] + }, + "ExtractSlateFrame": { + "viewer_lut_raw": False, + "key_value_mapping": { + "f_submission_note": { + "enabled": True, + "template": "{comment}" + }, + "f_submitting_for": { + "enabled": True, + "template": "{intent[value]}" + }, + "f_vfx_scope_of_work": { + "enabled": False, + "template": "" + } + } + }, + "IncrementScriptVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/nuke/server/settings/scriptsmenu.py b/server_addon/nuke/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..0b2d660da5 --- /dev/null +++ b/server_addon/nuke/server/settings/scriptsmenu.py @@ -0,0 +1,53 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Nuke script menu project settings.""" + _isGroup = True + + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')", + "tooltip": "Open the OpenPype Nuke user doc page" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set Frame Start (Read Node)", + "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();", + "tooltip": "Set frame start for read node(s)" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set non publish output for Write Node", + "command": "from openpype.hosts.nuke.startup.custom_write_node import main;main();", + "tooltip": "Open the OpenPype Nuke user doc page" + } + ] +} diff --git a/server_addon/nuke/server/settings/templated_workfile_build.py b/server_addon/nuke/server/settings/templated_workfile_build.py new file mode 100644 index 0000000000..0899be841e --- /dev/null +++ b/server_addon/nuke/server/settings/templated_workfile_build.py @@ -0,0 +1,34 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + """Settings for templated workfile builder.""" + profiles: 
+ default_factory=list
+ )
diff --git a/server_addon/nuke/server/settings/workfile_builder.py b/server_addon/nuke/server/settings/workfile_builder.py
new file mode 100644
index 0000000000..3ae3b08788
--- /dev/null
+++ b/server_addon/nuke/server/settings/workfile_builder.py
@@ -0,0 +1,84 @@
+from pydantic import Field
+from ayon_server.settings import (
+ BaseSettingsModel,
+ task_types_enum,
+ MultiplatformPathModel,
+)
+
+
+class CustomTemplateModel(BaseSettingsModel):
+ task_types: list[str] = Field(
+ default_factory=list,
+ title="Task types",
+ enum_resolver=task_types_enum
+ )
+ path: MultiplatformPathModel = Field(
+ default_factory=MultiplatformPathModel,
+ title="Path to template"
+ )
+
+
+class BuilderProfileItemModel(BaseSettingsModel):
+ product_name_filters: list[str] = Field(
+ default_factory=list,
+ title="Product name"
+ )
+ product_types: list[str] = Field(
+ default_factory=list,
+ title="Product types"
+ )
+ repre_names: list[str] = Field(
+ default_factory=list,
+ title="Representations"
+ )
+ loaders: list[str] = Field(
+ default_factory=list,
+ title="Loader plugins"
+ )
+
+
+class BuilderProfileModel(BaseSettingsModel):
+ task_types: list[str] = Field(
+ default_factory=list,
+ title="Task types",
+ enum_resolver=task_types_enum
+ )
+ tasks: list[str] = Field(
+ default_factory=list,
+ title="Task names"
+ )
+ current_context: list[BuilderProfileItemModel] = Field(
+ default_factory=list,
+ title="Current context"
+ )
+ linked_assets: list[BuilderProfileItemModel] = Field(
+ default_factory=list,
+ title="Linked assets/shots"
+ )
+
+
+class WorkfileBuilderModel(BaseSettingsModel):
+ """[deprecated] use Template Workfile Build Settings instead.
+ """
+ create_first_version: bool = Field(
+ title="Create first workfile")
+ custom_templates: list[CustomTemplateModel] = Field(
+ default_factory=list,
+ title="Custom templates"
+ )
+ builder_on_start: bool = Field(
+ default=False,
+ title="Run Builder at first workfile"
+ )
+ profiles: list[BuilderProfileModel] = Field(
+ default_factory=list,
+ title="Builder profiles"
+ )
+
+
+DEFAULT_WORKFILE_BUILDER_SETTINGS = {
+ "create_first_version": False,
+ "custom_templates": [],
+ "builder_on_start": False,
+ "profiles": []
+}
diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py
new file mode 100644
index 0000000000..ae7362549b
--- /dev/null
+++ b/server_addon/nuke/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.3"
diff --git a/server_addon/openpype/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml
new file mode 100644
index 0000000000..6d5ac92ca7
--- /dev/null
+++ b/server_addon/openpype/client/pyproject.toml
@@ -0,0 +1,25 @@
+[project]
+name="openpype"
+description="OpenPype addon for AYON server."
+
+[tool.poetry.dependencies]
+python = ">=3.9.1,<3.10"
+aiohttp_json_rpc = "*" # TVPaint server
+aiohttp-middlewares = "^2.0.0"
+wsrpc_aiohttp = "^3.1.1" # websocket server
+clique = "1.6.*"
+shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"}
+gazu = "^0.9.3"
+google-api-python-client = "^1.12.8" # sync server google support (should be separate?)
+jsonschema = "^2.6.0"
+pymongo = "^3.11.2"
+log4mongo = "^1.7"
+pathlib2= "^2.3.5" # deadline submit publish job only (single place, maybe not needed?)
+pyblish-base = "^1.8.11" +pynput = "^1.7.2" # Timers manager - TODO replace +"Qt.py" = "^1.3.3" +qtawesome = "0.7.3" +speedcopy = "^2.1" +slack-sdk = "^3.6.0" +pysftp = "^0.2.9" +dropbox = "^11.20.0" diff --git a/server_addon/openpype/server/__init__.py b/server_addon/openpype/server/__init__.py new file mode 100644 index 0000000000..df24c73c76 --- /dev/null +++ b/server_addon/openpype/server/__init__.py @@ -0,0 +1,9 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ + + +class OpenPypeAddon(BaseServerAddon): + name = "openpype" + title = "OpenPype" + version = __version__ diff --git a/server_addon/photoshop/LICENSE b/server_addon/photoshop/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/photoshop/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/photoshop/README.md b/server_addon/photoshop/README.md new file mode 100644 index 0000000000..2d1e1c745c --- /dev/null +++ b/server_addon/photoshop/README.md @@ -0,0 +1,4 @@ +Photoshop Addon +=============== + +Integration with Adobe Photoshop. diff --git a/server_addon/photoshop/server/__init__.py b/server_addon/photoshop/server/__init__.py new file mode 100644 index 0000000000..3a45f7a809 --- /dev/null +++ b/server_addon/photoshop/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import PhotoshopSettings, DEFAULT_PHOTOSHOP_SETTING +from .version import __version__ + + +class Photoshop(BaseServerAddon): + name = "photoshop" + title = "Photoshop" + version = __version__ + + settings_model = PhotoshopSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_PHOTOSHOP_SETTING) diff --git a/server_addon/photoshop/server/settings/__init__.py b/server_addon/photoshop/server/settings/__init__.py new file mode 100644 index 0000000000..9ae5764362 --- /dev/null +++ b/server_addon/photoshop/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + PhotoshopSettings, + DEFAULT_PHOTOSHOP_SETTING, +) + + +__all__ = ( + "PhotoshopSettings", + "DEFAULT_PHOTOSHOP_SETTING", +) diff --git a/server_addon/photoshop/server/settings/creator_plugins.py b/server_addon/photoshop/server/settings/creator_plugins.py new file mode 100644 index 0000000000..2fe63a7e3a --- /dev/null +++ b/server_addon/photoshop/server/settings/creator_plugins.py @@ -0,0 +1,79 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateImagePluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + mark_for_review: bool = Field(False, title="Review by default") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AutoImageCreatorPluginModel(BaseSettingsModel): + enabled: bool = Field(False, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + mark_for_review: bool = Field(False, title="Review by default") + default_variant: str = Field("", title="Default Variant") + + +class CreateReviewPlugin(BaseSettingsModel):
+ enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + default_variant: str = Field("", title="Default Variant") + + +class CreateWorkfilePlugin(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + default_variant: str = Field("", title="Default Variant") + + +class PhotoshopCreatorPlugins(BaseSettingsModel): + ImageCreator: CreateImagePluginModel = Field( + title="Create Image", + default_factory=CreateImagePluginModel, + ) + AutoImageCreator: AutoImageCreatorPluginModel = Field( + title="Create Flatten Image", + default_factory=AutoImageCreatorPluginModel, + ) + ReviewCreator: CreateReviewPlugin = Field( + title="Create Review", + default_factory=CreateReviewPlugin, + ) + WorkfileCreator: CreateWorkfilePlugin = Field( + title="Create Workfile", + default_factory=CreateWorkfilePlugin, + ) + + +DEFAULT_CREATE_SETTINGS = { + "ImageCreator": { + "enabled": True, + "active_on_create": True, + "mark_for_review": False, + "default_variants": [ + "Main" + ] + }, + "AutoImageCreator": { + "enabled": False, + "active_on_create": True, + "mark_for_review": False, + "default_variant": "" + }, + "ReviewCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "" + }, + "WorkfileCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "Main" + } +} diff --git a/server_addon/photoshop/server/settings/imageio.py b/server_addon/photoshop/server/settings/imageio.py new file mode 100644 index 0000000000..56b7f2fa32 --- /dev/null +++ b/server_addon/photoshop/server/settings/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class PhotoshopImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/photoshop/server/settings/main.py b/server_addon/photoshop/server/settings/main.py new file mode
100644 index 0000000000..ae7705b3db --- /dev/null +++ b/server_addon/photoshop/server/settings/main.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import PhotoshopImageIOModel +from .creator_plugins import PhotoshopCreatorPlugins, DEFAULT_CREATE_SETTINGS +from .publish_plugins import PhotoshopPublishPlugins, DEFAULT_PUBLISH_SETTINGS +from .workfile_builder import WorkfileBuilderPlugin + + +class PhotoshopSettings(BaseSettingsModel): + """Photoshop Project Settings.""" + + imageio: PhotoshopImageIOModel = Field( + default_factory=PhotoshopImageIOModel, + title="OCIO config" + ) + + create: PhotoshopCreatorPlugins = Field( + default_factory=PhotoshopCreatorPlugins, + title="Creator plugins" + ) + + publish: PhotoshopPublishPlugins = Field( + default_factory=PhotoshopPublishPlugins, + title="Publish plugins" + ) + + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + + +DEFAULT_PHOTOSHOP_SETTING = { + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/photoshop/server/settings/publish_plugins.py b/server_addon/photoshop/server/settings/publish_plugins.py new file mode 100644 index 0000000000..6bc72b4072 --- /dev/null +++ b/server_addon/photoshop/server/settings/publish_plugins.py @@ -0,0 +1,221 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +create_flatten_image_enum = [ + {"value": "flatten_with_images", "label": "Flatten with images"}, + {"value": "flatten_only", "label": "Flatten only"}, + {"value": "no", "label": "No"}, +] + + +color_code_enum = [ + {"value": "red", "label": "Red"}, + {"value": "orange", "label": "Orange"}, + {"value": "yellowColor", "label": "Yellow"}, + {"value": "grain", "label": "Green"}, + {"value": "blue", "label": "Blue"}, + {"value": "violet", "label": "Violet"}, + {"value": "gray", "label": "Gray"}, +] + + +class ColorCodeMappings(BaseSettingsModel): + color_code: list[str] = Field( + title="Color codes for layers", + default_factory=list, + enum_resolver=lambda: color_code_enum, + ) + + layer_name_regex: list[str] = Field( + default_factory=list, + title="Layer name regex" + ) + + product_type: str = Field( + "", + title="Resulting product type" + ) + + product_name_template: str = Field( + "", + title="Product name template" + ) + + +class ExtractedOptions(BaseSettingsModel): + tags: list[str] = Field( + title="Tags", + default_factory=list + ) + + +class CollectColorCodedInstancesPlugin(BaseSettingsModel): + """Set color for publishable layers, set its resulting product type + and template for product name. \n Can create flatten image from published + instances.
+ (Applicable only for remote publishing!)""" + + enabled: bool = Field(True, title="Enabled") + create_flatten_image: str = Field( + "", + title="Create flatten image", + enum_resolver=lambda: create_flatten_image_enum, + ) + + flatten_product_type_template: str = Field( + "", + title="Product template for flatten image" + ) + + color_code_mapping: list[ColorCodeMappings] = Field( + title="Color code mappings", + default_factory=list, + ) + + +class CollectReviewPlugin(BaseSettingsModel): + """Should review product be created""" + enabled: bool = Field(True, title="Enabled") + + +class CollectVersionPlugin(BaseSettingsModel): + """Synchronize version for image and review instances by workfile version""" # noqa + enabled: bool = Field(True, title="Enabled") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check that workfile contains latest version of loaded items""" # noqa + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateNamingPlugin(BaseSettingsModel): + """Validate naming of products and layers""" # noqa + invalid_chars: str = Field( + '', + title="Regex pattern of invalid characters" + ) + + replace_char: str = Field( + '', + title="Replacement character" + ) + + +class ExtractImagePlugin(BaseSettingsModel): + """Currently only jpg and png are supported""" + formats: list[str] = Field( + title="Extract Formats", + default_factory=list, + ) + + +class ExtractReviewPlugin(BaseSettingsModel): + make_image_sequence: bool = Field( + False, + title="Make an image sequence instead of flatten image" + ) + + max_downscale_size: int = Field( + 8192, + title="Maximum size of sources for review", + description="FFMpeg can only handle limited resolution for creation of review and/or thumbnail", # noqa + gt=300, # greater than + le=16384, # less or equal + ) + + jpg_options: ExtractedOptions = Field( + title="Extracted jpg Options", + default_factory=ExtractedOptions + ) + + mov_options: ExtractedOptions = Field( + title="Extracted mov Options", + default_factory=ExtractedOptions + ) + + +class PhotoshopPublishPlugins(BaseSettingsModel): + CollectColorCodedInstances: CollectColorCodedInstancesPlugin = Field( + title="Collect Color Coded Instances", + default_factory=CollectColorCodedInstancesPlugin, + ) + CollectReview: CollectReviewPlugin = Field( + title="Collect Review", + default_factory=CollectReviewPlugin, + ) + + CollectVersion: CollectVersionPlugin = Field( + title="Collect Version", + default_factory=CollectVersionPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateNaming: ValidateNamingPlugin = Field( + title="Validate naming of products and layers", + default_factory=ValidateNamingPlugin, + ) + + ExtractImage: ExtractImagePlugin = Field( + title="Extract Image", + default_factory=ExtractImagePlugin, + ) + + ExtractReview: ExtractReviewPlugin = Field( + title="Extract Review", + default_factory=ExtractReviewPlugin, + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectColorCodedInstances": { + "create_flatten_image": "no", + "flatten_product_type_template": "", + "color_code_mapping": [] + }, + "CollectReview": { + "enabled": True + }, + "CollectVersion": { + "enabled": False + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNaming": { + "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,;]", + "replace_char":
"_" + }, + "ExtractImage": { + "formats": [ + "png", + "jpg" + ] + }, + "ExtractReview": { + "make_image_sequence": False, + "max_downscale_size": 8192, + "jpg_options": { + "tags": [ + "review", + "ftrackreview" + ] + }, + "mov_options": { + "tags": [ + "review", + "ftrackreview" + ] + } + } +} diff --git a/server_addon/photoshop/server/settings/workfile_builder.py b/server_addon/photoshop/server/settings/workfile_builder.py new file mode 100644 index 0000000000..ec2ee136ad --- /dev/null +++ b/server_addon/photoshop/server/settings/workfile_builder.py @@ -0,0 +1,41 @@ +from pydantic import Field +from pathlib import Path + +from ayon_server.settings import BaseSettingsModel + + +class PathsTemplate(BaseSettingsModel): + windows: Path = Field( + '', + title="Windows" + ) + darwin: Path = Field( + '', + title="MacOS" + ) + linux: Path = Field( + '', + title="Linux" + ) + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: PathsTemplate = Field( + default_factory=PathsTemplate + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=CustomBuilderTemplate + ) diff --git a/server_addon/photoshop/server/version.py b/server_addon/photoshop/server/version.py new file mode 100644 index 0000000000..d4b9e2d7f3 --- /dev/null +++ b/server_addon/photoshop/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.0" diff --git a/server_addon/resolve/server/__init__.py b/server_addon/resolve/server/__init__.py new file mode 100644 index 0000000000..a84180d0f5 --- /dev/null +++ b/server_addon/resolve/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ResolveSettings, DEFAULT_VALUES + + +class ResolveAddon(BaseServerAddon): + name = "resolve" + title = "DaVinci Resolve" + version = __version__ + settings_model: Type[ResolveSettings] = ResolveSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/resolve/server/imageio.py b/server_addon/resolve/server/imageio.py new file mode 100644 index 0000000000..c2bfcd40d0 --- /dev/null +++ b/server_addon/resolve/server/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + 
+class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class ResolveImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/resolve/server/settings.py b/server_addon/resolve/server/settings.py new file mode 100644 index 0000000000..326f6bea1e --- /dev/null +++ b/server_addon/resolve/server/settings.py @@ -0,0 +1,114 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import ResolveImageIOModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPluginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +class ResolveSettings(BaseSettingsModel): + launch_openpype_menu_on_start: bool = Field( + False, title="Launch OpenPype menu on start of Resolve" + ) + imageio: ResolveImageIOModel = Field( + default_factory=ResolveImageIOModel, + title="Color Management (ImageIO)" + ) + create: CreatorPluginsModel = Field( + default_factory=CreatorPluginsModel, + title="Creator plugins", + ) + + +DEFAULT_VALUES = { + "launch_openpype_menu_on_start": False, + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/resolve/server/version.py b/server_addon/resolve/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/resolve/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff
--git a/server_addon/royal_render/server/__init__.py b/server_addon/royal_render/server/__init__.py new file mode 100644 index 0000000000..c5f0aafa00 --- /dev/null +++ b/server_addon/royal_render/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import RoyalRenderSettings, DEFAULT_VALUES + + +class RoyalRenderAddon(BaseServerAddon): + name = "royalrender" + version = __version__ + title = "Royal Render" + settings_model: Type[RoyalRenderSettings] = RoyalRenderSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/royal_render/server/settings.py b/server_addon/royal_render/server/settings.py new file mode 100644 index 0000000000..677d7e2671 --- /dev/null +++ b/server_addon/royal_render/server/settings.py @@ -0,0 +1,70 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomPath(MultiplatformPathModel): + _layout = "expanded" + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + value: CustomPath = Field( + default_factory=CustomPath + ) + + +class CollectSequencesFromJobModel(BaseSettingsModel): + review: bool = Field(True, title="Generate reviews from sequences") + + +class PublishPluginsModel(BaseSettingsModel): + CollectSequencesFromJob: CollectSequencesFromJobModel = Field( + default_factory=CollectSequencesFromJobModel, + title="Collect Sequences from the Job" + ) + + +class RoyalRenderSettings(BaseSettingsModel): + enabled: bool = True + # WARNING/TODO this needs change + # - both system and project settings contained 'rr_path' + # where project settings did choose one of rr_path from system settings + # that is not possible in AYON + rr_paths: list[ServerListSubmodel] = Field( + default_factory=list, + title="Royal Render Root Paths", + scope=["studio"], + ) + # This was 'rr_paths' in project settings and should be enum of + # 'rr_paths' from system settings, but that's not possible in AYON + selected_rr_paths: list[str] = Field( + default_factory=list, + title="Selected Royal Render Paths", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "rr_paths": [ + { + "name": "default", + "value": { + "windows": "", + "darwin": "", + "linux": "" + } + } + ], + "selected_rr_paths": ["default"], + "publish": { + "CollectSequencesFromJob": { + "review": True + } + } +} diff --git a/server_addon/royal_render/server/version.py b/server_addon/royal_render/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/royal_render/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/timers_manager/server/__init__.py b/server_addon/timers_manager/server/__init__.py new file mode 100644 index 0000000000..29f9d47370 --- /dev/null +++ b/server_addon/timers_manager/server/__init__.py @@ -0,0 +1,13 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TimersManagerSettings + + +class TimersManagerAddon(BaseServerAddon): + name = "timers_manager" + version = __version__ + title = "Timers Manager" + settings_model: Type[TimersManagerSettings] = TimersManagerSettings diff --git 
a/server_addon/timers_manager/server/settings.py b/server_addon/timers_manager/server/settings.py new file mode 100644 index 0000000000..a5c5721a57 --- /dev/null +++ b/server_addon/timers_manager/server/settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TimersManagerSettings(BaseSettingsModel): + auto_stop: bool = Field( + True, + title="Auto stop timer", + scope=["studio"], + ) + full_time: int = Field( + 15, + title="Max idle time", + scope=["studio"], + ) + message_time: float = Field( + 0.5, + title="When dialog will show", + scope=["studio"], + ) + disregard_publishing: bool = Field( + False, + title="Disregard publishing", + scope=["studio"], + ) diff --git a/server_addon/timers_manager/server/version.py b/server_addon/timers_manager/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/timers_manager/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/traypublisher/server/LICENSE b/server_addon/traypublisher/server/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/traypublisher/server/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/traypublisher/server/README.md b/server_addon/traypublisher/server/README.md new file mode 100644 index 0000000000..c0029bc782 --- /dev/null +++ b/server_addon/traypublisher/server/README.md @@ -0,0 +1,4 @@ +TrayPublisher Addon +=============== + +TrayPublisher addon for AYON server.
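Each of the server addons in this changeset follows the same pattern: a pydantic settings model paired with a plain default dict that is pushed through the model in `get_default_settings`, so defaults are validated exactly like user overrides. A minimal sketch of that round trip using plain pydantic (the model and field names below are illustrative, not part of any addon):

```python
from pydantic import BaseModel, Field


class ExampleSettings(BaseModel):
    # Illustrative stand-in for a BaseSettingsModel subclass.
    enabled: bool = Field(True, title="Enabled")
    default_variants: list[str] = Field(
        default_factory=list, title="Default Variants"
    )


DEFAULT_VALUES = {"enabled": True, "default_variants": ["Main"]}

# Validating the defaults through the model catches typos and type
# mismatches (e.g. a string where a list is expected) up front,
# before the server ever stores or serves them.
settings = ExampleSettings(**DEFAULT_VALUES)
print(settings.dict())
```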
diff --git a/server_addon/traypublisher/server/__init__.py b/server_addon/traypublisher/server/__init__.py new file mode 100644 index 0000000000..e6f079609f --- /dev/null +++ b/server_addon/traypublisher/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TraypublisherSettings, DEFAULT_TRAYPUBLISHER_SETTING + + +class Traypublisher(BaseServerAddon): + name = "traypublisher" + title = "TrayPublisher" + version = __version__ + + settings_model = TraypublisherSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_TRAYPUBLISHER_SETTING) diff --git a/server_addon/traypublisher/server/settings/__init__.py b/server_addon/traypublisher/server/settings/__init__.py new file mode 100644 index 0000000000..bcf8beffa7 --- /dev/null +++ b/server_addon/traypublisher/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + TraypublisherSettings, + DEFAULT_TRAYPUBLISHER_SETTING, +) + + +__all__ = ( + "TraypublisherSettings", + "DEFAULT_TRAYPUBLISHER_SETTING", +) diff --git a/server_addon/traypublisher/server/settings/creator_plugins.py b/server_addon/traypublisher/server/settings/creator_plugins.py new file mode 100644 index 0000000000..345cb92e63 --- /dev/null +++ b/server_addon/traypublisher/server/settings/creator_plugins.py @@ -0,0 +1,46 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class BatchMovieCreatorPlugin(BaseSettingsModel): + """Allows publishing multiple video files in one go.
The name of the matching + asset is parsed from the file names ('asset.mov', 'asset_v001.mov', + 'my_asset_to_publish.mov')""" + + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + + default_tasks: list[str] = Field( + title="Default tasks", + default_factory=list + ) + + extensions: list[str] = Field( + title="Extensions", + default_factory=list + ) + + +class TrayPublisherCreatePluginsModel(BaseSettingsModel): + BatchMovieCreator: BatchMovieCreatorPlugin = Field( + title="Batch Movie Creator", + default_factory=BatchMovieCreatorPlugin + ) + + +DEFAULT_CREATORS = { + "BatchMovieCreator": { + "default_variants": [ + "Main" + ], + "default_tasks": [ + "Compositing" + ], + "extensions": [ + ".mov" + ] + }, +} diff --git a/server_addon/traypublisher/server/settings/editorial_creators.py b/server_addon/traypublisher/server/settings/editorial_creators.py new file mode 100644 index 0000000000..4111f22576 --- /dev/null +++ b/server_addon/traypublisher/server/settings/editorial_creators.py @@ -0,0 +1,181 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class ClipNameTokenizerItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field("#TODO", title="Tokenizer name") + regex: str = Field("", title="Tokenizer regex") + + +class ShotAddTasksItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field('', title="Key") + task_type: list[str] = Field( + title="Task type", + default_factory=list, + enum_resolver=task_types_enum) + + +class ShotRenameSubmodel(BaseSettingsModel): + enabled: bool = True + shot_rename_template: str = Field( + "", + title="Shot rename template" + ) + + +parent_type_enum = [ + {"value": "Project", "label": "Project"}, + {"value": "Folder", "label": "Folder"}, + {"value": "Episode", "label": "Episode"}, + {"value": "Sequence", "label": "Sequence"}, +] + + +class TokenToParentConvertorItem(BaseSettingsModel): + # TODO - was 'type' must be renamed in code to `parent_type` + parent_type: str = Field( + "Project", + enum_resolver=lambda: parent_type_enum + ) + name: str = Field( + "", + title="Parent token name", + description="Unique name used in `Parent path template`" + ) + value: str = Field( + "", + title="Parent token value", + description="Template where any text, Anatomy keys and Tokens could be used" # noqa + ) + + +class ShotHierarchySubmodel(BaseSettingsModel): + enabled: bool = True + parents_path: str = Field( + "", + title="Parents path template", + description="Using keys from \"Token to parent convertor\" or tokens directly" # noqa + ) + parents: list[TokenToParentConvertorItem] = Field( + default_factory=list, + title="Token to parent convertor" + ) + + +output_file_type = [ + {"value": ".mp4", "label": "MP4"}, + {"value": ".mov", "label": "MOV"}, + {"value": ".wav", "label": "WAV"} +] + + +class ProductTypePresetItem(BaseSettingsModel): + product_type: str = Field("", title="Product type") + # TODO add placeholder '< Inherited >' + variant: str = Field("", title="Variant") + review: bool = Field(True, title="Review") + output_file_type: str = Field( + ".mp4", + enum_resolver=lambda: output_file_type + ) + + +class EditorialSimpleCreatorPlugin(BaseSettingsModel): + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + clip_name_tokenizer: list[ClipNameTokenizerItem] = Field( + default_factory=list, + description=( + "Using regex expressions to create tokens. \nThose can be used" + " later in \"Shot rename\" creator \nor \"Shot hierarchy\"." + "\n\nTokens should be decorated with \"_\" on each side" + ) + ) + shot_rename: ShotRenameSubmodel = Field( + title="Shot Rename", + default_factory=ShotRenameSubmodel + ) + shot_hierarchy: ShotHierarchySubmodel = Field( + title="Shot Hierarchy", + default_factory=ShotHierarchySubmodel + ) + shot_add_tasks: list[ShotAddTasksItem] = Field( + title="Add tasks to shot", + default_factory=list + ) + product_type_presets: list[ProductTypePresetItem] = Field( + default_factory=list + ) + + +class TraypublisherEditorialCreatorPlugins(BaseSettingsModel): + editorial_simple: EditorialSimpleCreatorPlugin = Field( + title="Editorial simple creator", + default_factory=EditorialSimpleCreatorPlugin, + ) + + +DEFAULT_EDITORIAL_CREATORS = { + "editorial_simple": { + "default_variants": [ + "Main" + ], + "clip_name_tokenizer": [ + {"name": "_sequence_", "regex": "(sc\\d{3})"}, + {"name": "_shot_", "regex": "(sh\\d{3})"} + ], + "shot_rename": { + "enabled": True, + "shot_rename_template": "{project[code]}_{_sequence_}_{_shot_}" + }, + "shot_hierarchy": { + "enabled": True, + "parents_path": "{project}/{folder}/{sequence}", + "parents": [ + { + "parent_type": "Project", + "name": "project", + "value": "{project[name]}" + }, + { + "parent_type": "Folder", + "name": "folder", + "value": "shots" + }, + { + "parent_type": "Sequence", + "name": "sequence", + "value": "{_sequence_}" + } + ] + }, + "shot_add_tasks": [], + "product_type_presets": [ + { + "product_type": "review", + "variant": "Reference", + "review": True, + "output_file_type": ".mp4" + }, + { + "product_type": "plate", + "variant": "", + "review": False, + "output_file_type": ".mov" + }, + { + "product_type": "audio", + "variant": "", + "review": False, + "output_file_type": ".wav" + } + ] + } +} diff --git a/server_addon/traypublisher/server/settings/imageio.py b/server_addon/traypublisher/server/settings/imageio.py new file mode 100644 index 0000000000..3df0d2f2fb --- /dev/null +++ b/server_addon/traypublisher/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TrayPublisherImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + )
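The `validate_unique_outputs` validator above delegates to `ensure_unique_names` so two file rules cannot share a name. A self-contained sketch of the same check follows; the helper below is a stand-in for `ayon_server.settings.validators.ensure_unique_names`, whose exact behavior may differ:

```python
from pydantic import BaseModel, validator


def ensure_unique_names(items):
    # Stand-in helper: reject duplicate 'name' attributes among items.
    names = [item.name for item in items]
    if len(names) != len(set(names)):
        raise ValueError("Rule names must be unique")


class Rule(BaseModel):
    name: str = ""
    pattern: str = ""


class Rules(BaseModel):
    rules: list[Rule] = []

    @validator("rules")
    def validate_unique_outputs(cls, value):
        # Runs after each Rule dict is coerced into a Rule model.
        ensure_unique_names(value)
        return value


Rules(rules=[{"name": "exr"}, {"name": "png"}])  # passes
Rules(rules=[{"name": "exr"}, {"name": "exr"}])  # raises ValidationError
```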
diff --git a/server_addon/traypublisher/server/settings/main.py b/server_addon/traypublisher/server/settings/main.py new file mode 100644 index 0000000000..fad96bef2f --- /dev/null +++ b/server_addon/traypublisher/server/settings/main.py @@ -0,0 +1,52 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import TrayPublisherImageIOModel +from .simple_creators import ( + SimpleCreatorPlugin, + DEFAULT_SIMPLE_CREATORS, +) +from .editorial_creators import ( + TraypublisherEditorialCreatorPlugins, + DEFAULT_EDITORIAL_CREATORS, +) +from .creator_plugins import ( + TrayPublisherCreatePluginsModel, + DEFAULT_CREATORS, +) +from .publish_plugins import ( + TrayPublisherPublishPlugins, + DEFAULT_PUBLISH_PLUGINS, +) + + +class TraypublisherSettings(BaseSettingsModel): + """Traypublisher Project Settings.""" + imageio: TrayPublisherImageIOModel = Field( + default_factory=TrayPublisherImageIOModel, + title="Color Management (ImageIO)" + ) + simple_creators: list[SimpleCreatorPlugin] = Field( + title="Simple Create Plugins", + default_factory=list, + ) + editorial_creators: TraypublisherEditorialCreatorPlugins = Field( + title="Editorial Creators", + default_factory=TraypublisherEditorialCreatorPlugins, + ) + create: TrayPublisherCreatePluginsModel = Field( + title="Create", + default_factory=TrayPublisherCreatePluginsModel + ) + publish: TrayPublisherPublishPlugins = Field( + title="Publish Plugins", + default_factory=TrayPublisherPublishPlugins + ) + + +DEFAULT_TRAYPUBLISHER_SETTING = { + "simple_creators": DEFAULT_SIMPLE_CREATORS, + "editorial_creators": DEFAULT_EDITORIAL_CREATORS, + "create": DEFAULT_CREATORS, + "publish": DEFAULT_PUBLISH_PLUGINS, +} diff --git a/server_addon/traypublisher/server/settings/publish_plugins.py b/server_addon/traypublisher/server/settings/publish_plugins.py new file mode 100644 index 0000000000..8c844f29f2 --- /dev/null +++ b/server_addon/traypublisher/server/settings/publish_plugins.py @@ -0,0 +1,50 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ValidatePluginModel(BaseSettingsModel): + _isGroup = True + enabled: bool = True + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateFrameRangeModel(ValidatePluginModel):
+ """Validate that the frame range of published items matches the folder attributes.""" + + +class TrayPublisherPublishPlugins(BaseSettingsModel): + CollectFrameDataFromAssetEntity: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Collect Frame Data From Folder Entity", + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + title="Validate Frame Range", + default_factory=ValidateFrameRangeModel, + ) + ValidateExistingVersion: ValidatePluginModel = Field( + title="Validate Existing Version", + default_factory=ValidatePluginModel, + ) + + +DEFAULT_PUBLISH_PLUGINS = { + "CollectFrameDataFromAssetEntity": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateExistingVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/traypublisher/server/settings/simple_creators.py b/server_addon/traypublisher/server/settings/simple_creators.py new file mode 100644 index 0000000000..8335b9d34e --- /dev/null +++ b/server_addon/traypublisher/server/settings/simple_creators.py @@ -0,0 +1,309 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class SimpleCreatorPlugin(BaseSettingsModel): + _layout = "expanded" + product_type: str = Field("", title="Product type") + # TODO add placeholder + identifier: str = Field("", title="Identifier") + label: str = Field("", title="Label") + icon: str = Field("", title="Icon") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + description: str = Field( + "", + title="Description", + widget="textarea" + ) + detailed_description: str = Field( + "", + title="Detailed Description", + widget="textarea" + ) + allow_sequences: bool = Field( + False, + title="Allow sequences" + ) + allow_multiple_items: bool = Field( + False, + title="Allow multiple items" + ) + allow_version_control: bool = Field( + False, + title="Allow version control" + ) + extensions: list[str] = Field( + default_factory=list, + title="Extensions" + ) + + +DEFAULT_SIMPLE_CREATORS = [ + { + "product_type": "workfile", + "identifier": "", + "label": "Workfile", + "icon": "fa.file", + "default_variants": [ + "Main" + ], + "description": "Backup of a working scene", + "detailed_description": "Workfiles are full scenes from any application that are directly edited by artists. They represent a state of work on a task at a given point and are usually not directly referenced into other scenes.", + "allow_sequences": False, + "allow_multiple_items": False, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".nk", + ".hrox", + ".hip", + ".hiplc", + ".hipnc", + ".blend", + ".scn", + ".tvpp", + ".comp", + ".zip", + ".prproj", + ".drp", + ".psd", + ".psb", + ".aep" + ] + }, + { + "product_type": "model", + "identifier": "", + "label": "Model", + "icon": "fa.cubes", + "default_variants": [ + "Main", + "Proxy", + "Sculpt" + ], + "description": "Clean models", + "detailed_description": "Models should only contain geometry data, without any extras like cameras, locators or bones.\n\nKeep in mind that models published from tray publisher are not validated for correctness.
", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".obj", + ".abc", + ".fbx", + ".bgeo", + ".bgeogz", + ".bgeosc", + ".usd", + ".blend" + ] + }, + { + "product_type": "pointcache", + "identifier": "", + "label": "Pointcache", + "icon": "fa.gears", + "default_variants": [ + "Main" + ], + "description": "Geometry Caches", + "detailed_description": "Alembic or bgeo cache of animated data", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".bgeo", + ".bgeogz", + ".bgeosc" + ] + }, + { + "product_type": "plate", + "identifier": "", + "label": "Plate", + "icon": "mdi.camera-image", + "default_variants": [ + "Main", + "BG", + "Animatic", + "Reference", + "Offline" + ], + "description": "Footage Plates", + "detailed_description": "Any type of image seqeuence coming from outside of the studio. Usually camera footage, but could also be animatics used for reference.", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "render", + "identifier": "", + "label": "Render", + "icon": "mdi.folder-multiple-image", + "default_variants": [], + "description": "Rendered images or video", + "detailed_description": "Sequence or single file renders", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".jpeg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "camera", + "identifier": "", + "label": "Camera", + "icon": "fa.video-camera", + "default_variants": [], + "description": "3d Camera", + "detailed_description": "Ideally this should be only camera itself with baked animation, however, it can technically also include helper geometry.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".ma", + ".hip", + ".blend", + ".fbx", + ".usd" + ] + }, + { + "product_type": "image", + "identifier": "", + "label": "Image", + "icon": "fa.image", + "default_variants": [ + "Reference", + "Texture", + "Concept", + "Background" + ], + "description": "Single image", + "detailed_description": "Any image data can be published as image product type. References, textures, concept art, matte paints. 
diff --git a/server_addon/traypublisher/server/version.py b/server_addon/traypublisher/server/version.py
new file mode 100644
index 0000000000..df0c92f1e2
--- /dev/null
+++ b/server_addon/traypublisher/server/version.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+"""Package declaring addon version."""
+__version__ = "0.1.2"
diff --git a/server_addon/tvpaint/server/__init__.py b/server_addon/tvpaint/server/__init__.py
new file mode 100644
index 0000000000..033d7d3792
--- /dev/null
+++ b/server_addon/tvpaint/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import TvpaintSettings, DEFAULT_VALUES
+
+
+class TvpaintAddon(BaseServerAddon):
+    name = "tvpaint"
+    title = "TVPaint"
+    version = __version__
+    settings_model: Type[TvpaintSettings] = TvpaintSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
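All of the addon packages in this PR share the `get_default_settings` pattern shown above: a plain dict of defaults is fed through the settings model and pydantic fills in whatever the dict omits. A minimal sketch under the same stand-in assumption (`BaseModel` instead of `BaseSettingsModel`):

```python
# Sketch of the get_default_settings pattern, minus the async/server
# context. BaseModel stands in for BaseSettingsModel (an assumption).
from pydantic import BaseModel, Field


class ExampleSettings(BaseModel):
    stop_timer_on_application_exit: bool = Field(False)
    default_variant: str = Field("Main")


DEFAULT_VALUES = {"stop_timer_on_application_exit": True}

# Fields missing from DEFAULT_VALUES keep their model defaults.
settings = ExampleSettings(**DEFAULT_VALUES)
print(settings.dict())
# {'stop_timer_on_application_exit': True, 'default_variant': 'Main'}
```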
diff --git a/server_addon/tvpaint/server/settings/__init__.py b/server_addon/tvpaint/server/settings/__init__.py
new file mode 100644
index 0000000000..abee32e897
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    TvpaintSettings,
+    DEFAULT_VALUES,
+)
+
+
+__all__ = (
+    "TvpaintSettings",
+    "DEFAULT_VALUES",
+)
diff --git a/server_addon/tvpaint/server/settings/create_plugins.py b/server_addon/tvpaint/server/settings/create_plugins.py
new file mode 100644
index 0000000000..349bfdd288
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/create_plugins.py
@@ -0,0 +1,133 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateWorkfileModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateReviewModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderSceneModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderLayerModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderPassModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class AutoDetectCreateRenderModel(BaseSettingsModel):
+    """The creator tries to auto-detect Render Layers and Render Passes in the scene.
+
+    For Render Layers the group name is used as the variant, and for Render
+    Passes the TVPaint layer name is used.
+
+    Groups can be renamed according to their order of use in the scene. The
+    renaming template may contain the '{group_index}' formatting key, which
+    is filled with the position index of the group.
+    - Template: 'L{group_index}'
+    - Group offset: '10'
+    - Group padding: '3'
+
+    Would create group names "L010", "L020", ...
+    """
+
+    enabled: bool = Field(True)
+    allow_group_rename: bool = Field(title="Allow group rename")
+    group_name_template: str = Field(title="Group name template")
+    group_idx_offset: int = Field(1, title="Group index Offset", ge=1)
+    group_idx_padding: int = Field(4, title="Group index Padding", ge=1)
+
+
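The renaming example in the docstring above can be reproduced with plain string formatting. This is one plausible reading of the rule (the exact indexing lives in the TVPaint host code, not in this settings file, so treat it as an assumption):

```python
# One plausible reading of the renaming rule: the n-th group used in the
# scene gets index n * offset, zero-padded before formatting.
template = "L{group_index}"
group_idx_offset = 10
group_idx_padding = 3

for position in range(1, 4):  # first three groups used in the scene
    index = position * group_idx_offset
    name = template.format(group_index=str(index).zfill(group_idx_padding))
    print(name)
# L010
# L020
# L030
```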
+ """ + + enabled: bool = Field(True) + allow_group_rename: bool = Field(title="Allow group rename") + group_name_template: str = Field(title="Group name template") + group_idx_offset: int = Field(1, title="Group index Offset", ge=1) + group_idx_padding: int = Field(4, title="Group index Padding", ge=1) + + +class CreatePluginsModel(BaseSettingsModel): + create_workfile: CreateWorkfileModel = Field( + default_factory=CreateWorkfileModel, + title="Create Workfile" + ) + create_review: CreateReviewModel = Field( + default_factory=CreateReviewModel, + title="Create Review" + ) + create_render_scene: CreateRenderSceneModel = Field( + default_factory=CreateReviewModel, + title="Create Render Scene" + ) + create_render_layer: CreateRenderLayerModel= Field( + default_factory=CreateRenderLayerModel, + title="Create Render Layer" + ) + create_render_pass: CreateRenderPassModel = Field( + default_factory=CreateRenderPassModel, + title="Create Render Pass" + ) + auto_detect_render: AutoDetectCreateRenderModel = Field( + default_factory=AutoDetectCreateRenderModel, + title="Auto-Detect Create Render", + ) + + +DEFAULT_CREATE_SETTINGS = { + "create_workfile": { + "enabled": True, + "default_variant": "Main", + "default_variants": [] + }, + "create_review": { + "enabled": True, + "active_on_create": True, + "default_variant": "Main", + "default_variants": [] + }, + "create_render_scene": { + "enabled": True, + "active_on_create": False, + "mark_for_review": True, + "default_pass_name": "beauty", + "default_variant": "Main", + "default_variants": [] + }, + "create_render_layer": { + "mark_for_review": False, + "default_pass_name": "beauty", + "default_variant": "Main", + "default_variants": [] + }, + "create_render_pass": { + "mark_for_review": False, + "default_variant": "Main", + "default_variants": [] + }, + "auto_detect_render": { + "enabled": False, + "allow_group_rename": True, + "group_name_template": "L{group_index}", + "group_idx_offset": 10, + "group_idx_padding": 3 + } +} diff --git a/server_addon/tvpaint/server/settings/filters.py b/server_addon/tvpaint/server/settings/filters.py new file mode 100644 index 0000000000..009febae06 --- /dev/null +++ b/server_addon/tvpaint/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class FiltersSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field( + "", + title="Textarea", + widget="textarea", + ) + + +class PublishFiltersModel(BaseSettingsModel): + env_search_replace_values: list[FiltersSubmodel] = Field( + default_factory=list + ) diff --git a/server_addon/tvpaint/server/settings/imageio.py b/server_addon/tvpaint/server/settings/imageio.py new file mode 100644 index 0000000000..50f8b7eef4 --- /dev/null +++ b/server_addon/tvpaint/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class 
ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TVPaintImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/tvpaint/server/settings/main.py b/server_addon/tvpaint/server/settings/main.py new file mode 100644 index 0000000000..4cd6ac4b1a --- /dev/null +++ b/server_addon/tvpaint/server/settings/main.py @@ -0,0 +1,90 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .imageio import TVPaintImageIOModel +from .workfile_builder import WorkfileBuilderPlugin +from .create_plugins import CreatePluginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import ( + PublishPluginsModel, + LoadPluginsModel, + DEFAULT_PUBLISH_SETTINGS, +) + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TvpaintSettings(BaseSettingsModel): + imageio: TVPaintImageIOModel = Field( + default_factory=TVPaintImageIOModel, + title="Color Management (ImageIO)" + ) + stop_timer_on_application_exit: bool = Field( + title="Stop timer on application exit") + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Create plugins" + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins") + load: LoadPluginsModel = Field( + default_factory=LoadPluginsModel, + title="Load plugins") + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "stop_timer_on_application_exit": False, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": { + "LoadImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + }, + "ImportImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + } + }, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "filters": [] +} diff --git a/server_addon/tvpaint/server/settings/publish_plugins.py b/server_addon/tvpaint/server/settings/publish_plugins.py new file mode 100644 index 0000000000..76c7eaac01 --- /dev/null +++ b/server_addon/tvpaint/server/settings/publish_plugins.py @@ -0,0 +1,132 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class 
CollectRenderInstancesModel(BaseSettingsModel):
+    ignore_render_pass_transparency: bool = Field(
+        title="Ignore Render Pass opacity"
+    )
+
+
+class ExtractSequenceModel(BaseSettingsModel):
+    """Review BG color is used for the whole scene review and for thumbnails."""
+    # TODO Use alpha color
+    review_bg: ColorRGBA_uint8 = Field(
+        (255, 255, 255, 1.0),
+        title="Review BG color")
+
+
+class ValidatePluginModel(BaseSettingsModel):
+    enabled: bool = True
+    optional: bool = Field(True, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+def compression_enum():
+    return [
+        {"value": "ZIP", "label": "ZIP"},
+        {"value": "ZIPS", "label": "ZIPS"},
+        {"value": "DWAA", "label": "DWAA"},
+        {"value": "DWAB", "label": "DWAB"},
+        {"value": "PIZ", "label": "PIZ"},
+        {"value": "RLE", "label": "RLE"},
+        {"value": "PXR24", "label": "PXR24"},
+        {"value": "B44", "label": "B44"},
+        {"value": "B44A", "label": "B44A"},
+        {"value": "none", "label": "None"}
+    ]
+
+
+class ExtractConvertToEXRModel(BaseSettingsModel):
+    """WARNING: This plugin does not work on macOS (it uses the OIIO tool)."""
+    enabled: bool = False
+    replace_pngs: bool = True
+
+    exr_compression: str = Field(
+        "ZIP",
+        enum_resolver=compression_enum,
+        title="EXR Compression"
+    )
+
+
+class LoadImageDefaultModel(BaseSettingsModel):
+    _layout = "expanded"
+    stretch: bool = Field(title="Stretch")
+    timestretch: bool = Field(title="TimeStretch")
+    preload: bool = Field(title="Preload")
+
+
+class LoadImageModel(BaseSettingsModel):
+    defaults: LoadImageDefaultModel = Field(
+        default_factory=LoadImageDefaultModel
+    )
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectRenderInstances: CollectRenderInstancesModel = Field(
+        default_factory=CollectRenderInstancesModel,
+        title="Collect Render Instances")
+    ExtractSequence: ExtractSequenceModel = Field(
+        default_factory=ExtractSequenceModel,
+        title="Extract Sequence")
+    ValidateProjectSettings: ValidatePluginModel = Field(
+        default_factory=ValidatePluginModel,
+        title="Validate Project Settings")
+    ValidateMarks: ValidatePluginModel = Field(
+        default_factory=ValidatePluginModel,
+        title="Validate MarkIn/Out")
+    ValidateStartFrame: ValidatePluginModel = Field(
+        default_factory=ValidatePluginModel,
+        title="Validate Scene Start Frame")
+    ValidateAssetName: ValidatePluginModel = Field(
+        default_factory=ValidatePluginModel,
+        title="Validate Folder Name")
+    ExtractConvertToEXR: ExtractConvertToEXRModel = Field(
+        default_factory=ExtractConvertToEXRModel,
+        title="Extract Convert To EXR")
+
+
+class LoadPluginsModel(BaseSettingsModel):
+    LoadImage: LoadImageModel = Field(
+        default_factory=LoadImageModel,
+        title="Load Image")
+    ImportImage: LoadImageModel = Field(
+        default_factory=LoadImageModel,
+        title="Import Image")
+
+
+DEFAULT_PUBLISH_SETTINGS = {
+    "CollectRenderInstances": {
+        "ignore_render_pass_transparency": False
+    },
+    "ExtractSequence": {
+        "review_bg": [255, 255, 255, 1.0]
+    },
+    "ValidateProjectSettings": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateMarks": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateStartFrame": {
+        "enabled": False,
+        "optional": True,
+        "active": True
+    },
+    "ValidateAssetName": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ExtractConvertToEXR": {
+        "enabled": False,
+        "replace_pngs": True,
+        "exr_compression": "ZIP"
+    }
+}
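Values such as `exr_compression` above are constrained to the `value` keys returned by the `enum_resolver` callable. A hand-rolled illustration of that check (the real validation happens in the server settings layer, not in plugin code):

```python
# Hand-rolled illustration of the enum constraint on exr_compression.
def compression_enum():
    return [
        {"value": "ZIP", "label": "ZIP"},
        {"value": "DWAA", "label": "DWAA"},
        {"value": "none", "label": "None"},
    ]


def validate_compression(value):
    allowed = {item["value"] for item in compression_enum()}
    if value not in allowed:
        raise ValueError("Unknown EXR compression: {!r}".format(value))
    return value


print(validate_compression("ZIP"))  # ZIP
```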
diff --git a/server_addon/tvpaint/server/settings/workfile_builder.py b/server_addon/tvpaint/server/settings/workfile_builder.py
new file mode 100644
index 0000000000..e0aba5da7e
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/workfile_builder.py
@@ -0,0 +1,30 @@
+from pydantic import Field
+
+from ayon_server.settings import (
+    BaseSettingsModel,
+    MultiplatformPathModel,
+    task_types_enum,
+)
+
+
+class CustomBuilderTemplate(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    template_path: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel
+    )
+
+
+class WorkfileBuilderPlugin(BaseSettingsModel):
+    _title = "Workfile Builder"
+    create_first_version: bool = Field(
+        False,
+        title="Create first workfile"
+    )
+
+    custom_templates: list[CustomBuilderTemplate] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/tvpaint/server/version.py b/server_addon/tvpaint/server/version.py
new file mode 100644
index 0000000000..3dc1f76bc6
--- /dev/null
+++ b/server_addon/tvpaint/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.0"
diff --git a/server_addon/unreal/server/__init__.py b/server_addon/unreal/server/__init__.py
new file mode 100644
index 0000000000..a5f3e9597d
--- /dev/null
+++ b/server_addon/unreal/server/__init__.py
@@ -0,0 +1,19 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import UnrealSettings, DEFAULT_VALUES
+
+
+class UnrealAddon(BaseServerAddon):
+    name = "unreal"
+    title = "Unreal"
+    version = __version__
+    settings_model: Type[UnrealSettings] = UnrealSettings
+    frontend_scopes = {}
+    services = {}
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/unreal/server/imageio.py b/server_addon/unreal/server/imageio.py
new file mode 100644
index 0000000000..dde042ba47
--- /dev/null
+++ b/server_addon/unreal/server/imageio.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class UnrealImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/unreal/server/settings.py b/server_addon/unreal/server/settings.py
new file mode 100644
index 0000000000..479e041e25
--- /dev/null
+++ b/server_addon/unreal/server/settings.py
@@ -0,0 +1,64 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from
.imageio import UnrealImageIOModel + + +class ProjectSetup(BaseSettingsModel): + dev_mode: bool = Field( + False, + title="Dev mode" + ) + + +def _render_format_enum(): + return [ + {"value": "png", "label": "PNG"}, + {"value": "exr", "label": "EXR"}, + {"value": "jpg", "label": "JPG"}, + {"value": "bmp", "label": "BMP"} + ] + + +class UnrealSettings(BaseSettingsModel): + imageio: UnrealImageIOModel = Field( + default_factory=UnrealImageIOModel, + title="Color Management (ImageIO)" + ) + level_sequences_for_layouts: bool = Field( + False, + title="Generate level sequences when loading layouts" + ) + delete_unmatched_assets: bool = Field( + False, + title="Delete assets that are not matched" + ) + render_config_path: str = Field( + "", + title="Render Config Path" + ) + preroll_frames: int = Field( + 0, + title="Pre-roll frames" + ) + render_format: str = Field( + "png", + title="Render format", + enum_resolver=_render_format_enum + ) + project_setup: ProjectSetup = Field( + default_factory=ProjectSetup, + title="Project Setup", + ) + + +DEFAULT_VALUES = { + "level_sequences_for_layouts": False, + "delete_unmatched_assets": False, + "render_config_path": "", + "preroll_frames": 0, + "render_format": "png", + "project_setup": { + "dev_mode": False + } +} diff --git a/server_addon/unreal/server/version.py b/server_addon/unreal/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/unreal/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/setup.cfg b/setup.cfg index 10cca3eb3f..ead9b25164 100644 --- a/setup.cfg +++ b/setup.cfg @@ -28,4 +28,11 @@ omit = /tests directory = ./coverage [tool:pytest] -norecursedirs = repos/* openpype/modules/ftrack/* \ No newline at end of file +norecursedirs = openpype/modules/ftrack/* + +[isort] +line_length = 79 +multi_line_output = 3 +include_trailing_comma = True +force_grid_wrap = 0 +combine_as_imports = True diff --git a/setup.py b/setup.py index ab6e22bccc..6179de1d34 100644 --- a/setup.py +++ b/setup.py @@ -1,7 +1,6 @@ # -*- coding: utf-8 -*- """Setup info for building OpenPype 3.0.""" import os -import sys import re import platform import distutils.spawn @@ -89,7 +88,6 @@ install_requires = [ "keyring", "clique", "jsonschema", - "opentimelineio", "pathlib2", "pkg_resources", "PIL", @@ -126,7 +124,6 @@ bin_includes = [ include_files = [ "igniter", "openpype", - "schema", "LICENSE", "README.md" ] @@ -158,11 +155,20 @@ bdist_mac_options = dict( ) executables = [ - Executable("start.py", base=base, - target_name="openpype_gui", icon=icon_path.as_posix()), - Executable("start.py", base=None, - target_name="openpype_console", icon=icon_path.as_posix()) + Executable( + "start.py", + base=base, + target_name="openpype_gui", + icon=icon_path.as_posix() + ), + Executable( + "start.py", + base=None, + target_name="openpype_console", + icon=icon_path.as_posix() + ), ] + if IS_LINUX: executables.append( Executable( diff --git a/start.py b/start.py index 91e5c29a53..3b020c76c0 100644 --- a/start.py +++ b/start.py @@ -133,6 +133,10 @@ else: vendor_python_path = os.path.join(OPENPYPE_ROOT, "vendor", "python") sys.path.insert(0, vendor_python_path) +# Add common package to sys path +# - common contains common code for bootstraping and OpenPype processes +sys.path.insert(0, os.path.join(OPENPYPE_ROOT, "common")) + import blessed # noqa: E402 import certifi # noqa: E402 @@ -140,8 +144,8 @@ import certifi # noqa: E402 if sys.__stdout__: term = blessed.Terminal() - def _print(message: str): - if silent_mode: + def 
_print(message: str, force=False): + if silent_mode and not force: return if message.startswith("!!! "): print(f'{term.orangered2("!!! ")}{message[4:]}') @@ -197,6 +201,15 @@ if "--headless" in sys.argv: elif os.getenv("OPENPYPE_HEADLESS_MODE") != "1": os.environ.pop("OPENPYPE_HEADLESS_MODE", None) +# Set builtin ocio root +os.environ["BUILTIN_OCIO_ROOT"] = os.path.join( + OPENPYPE_ROOT, + "vendor", + "bin", + "ocioconfig", + "OpenColorIOConfigs" +) + # Enabled logging debug mode when "--debug" is passed if "--verbose" in sys.argv: expected_values = ( @@ -255,6 +268,7 @@ from igniter import BootstrapRepos # noqa: E402 from igniter.tools import ( get_openpype_global_settings, get_openpype_path_from_settings, + get_local_openpype_path_from_settings, validate_mongo_connection, OpenPypeVersionNotFound, OpenPypeVersionIncompatible @@ -348,8 +362,15 @@ def run_disk_mapping_commands(settings): mappings = disk_mapping.get(low_platform) or [] for source, destination in mappings: - destination = destination.rstrip('/') - source = source.rstrip('/') + if low_platform == "windows": + destination = destination.replace("/", "\\").rstrip("\\") + source = source.replace("/", "\\").rstrip("\\") + # Add slash after ':' ('G:' -> 'G:\') + if destination.endswith(":"): + destination += "\\" + else: + destination = destination.rstrip("/") + source = source.rstrip("/") if low_platform == "darwin": scr = f'do shell script "ln -s {source} {destination}" with administrator privileges' # noqa @@ -507,8 +528,8 @@ def _process_arguments() -> tuple: not use_version_value or not use_version_value.startswith("=") ): - _print("!!! Please use option --use-version like:") - _print(" --use-version=3.0.0") + _print("!!! Please use option --use-version like:", True) + _print(" --use-version=3.0.0", True) sys.exit(1) version_str = use_version_value[1:] @@ -525,14 +546,14 @@ def _process_arguments() -> tuple: break if use_version is None: - _print("!!! Requested version isn't in correct format.") + _print("!!! Requested version isn't in correct format.", True) _print((" Use --list-versions to find out" - " proper version string.")) + " proper version string."), True) sys.exit(1) if arg == "--validate-version": - _print("!!! Please use option --validate-version like:") - _print(" --validate-version=3.0.0") + _print("!!! Please use option --validate-version like:", True) + _print(" --validate-version=3.0.0", True) sys.exit(1) if arg.startswith("--validate-version="): @@ -543,9 +564,9 @@ def _process_arguments() -> tuple: sys.argv.remove(arg) commands.append("validate") else: - _print("!!! Requested version isn't in correct format.") + _print("!!! Requested version isn't in correct format.", True) _print((" Use --list-versions to find out" - " proper version string.")) + " proper version string."), True) sys.exit(1) if "--list-versions" in sys.argv: @@ -556,7 +577,7 @@ def _process_arguments() -> tuple: # this is helper to run igniter before anything else if "igniter" in sys.argv: if os.getenv("OPENPYPE_HEADLESS_MODE") == "1": - _print("!!! Cannot open Igniter dialog in headless mode.") + _print("!!! Cannot open Igniter dialog in headless mode.", True) sys.exit(1) return_code = igniter.open_dialog() @@ -606,9 +627,9 @@ def _determine_mongodb() -> str: if not openpype_mongo: _print("*** No DB connection string specified.") if os.getenv("OPENPYPE_HEADLESS_MODE") == "1": - _print("!!! Cannot open Igniter dialog in headless mode.") - _print( - "!!! Please use `OPENPYPE_MONGO` to specify server address.") + _print("!!! 
Cannot open Igniter dialog in headless mode.", True) + _print(("!!! Please use `OPENPYPE_MONGO` to specify " + "server address."), True) sys.exit(1) _print("--- launching setup UI ...") @@ -783,7 +804,7 @@ def _find_frozen_openpype(use_version: str = None, try: version_path = bootstrap.extract_openpype(openpype_version) except OSError as e: - _print("!!! failed: {}".format(str(e))) + _print("!!! failed: {}".format(str(e)), True) sys.exit(1) else: # cleanup zip after extraction @@ -899,7 +920,7 @@ def _boot_validate_versions(use_version, local_version): v: OpenPypeVersion found = [v for v in openpype_versions if str(v) == use_version] if not found: - _print(f"!!! Version [ {use_version} ] not found.") + _print(f"!!! Version [ {use_version} ] not found.", True) list_versions(openpype_versions, local_version) sys.exit(1) @@ -908,7 +929,8 @@ def _boot_validate_versions(use_version, local_version): use_version, openpype_versions ) valid, message = bootstrap.validate_openpype_version(version_path) - _print(f'{">>> " if valid else "!!! "}{message}') + _print(f'{">>> " if valid else "!!! "}{message}', not valid) + return valid def _boot_print_versions(openpype_root): @@ -935,7 +957,7 @@ def _boot_print_versions(openpype_root): def _boot_handle_missing_version(local_version, message): - _print(message) + _print(message, True) if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": openpype_versions = bootstrap.find_openpype( include_zips=True) @@ -983,7 +1005,7 @@ def boot(): openpype_mongo = _determine_mongodb() except RuntimeError as e: # without mongodb url we are done for. - _print(f"!!! {e}") + _print(f"!!! {e}", True) sys.exit(1) os.environ["OPENPYPE_MONGO"] = openpype_mongo @@ -1018,14 +1040,18 @@ def boot(): # find its versions there and bootstrap them. openpype_path = get_openpype_path_from_settings(global_settings) + # Check if local versions should be installed in custom folder and not in + # user app data + data_dir = get_local_openpype_path_from_settings(global_settings) + bootstrap.set_data_dir(data_dir) if getattr(sys, 'frozen', False): local_version = bootstrap.get_version(Path(OPENPYPE_ROOT)) else: local_version = OpenPypeVersion.get_installed_version_str() if "validate" in commands: - _boot_validate_versions(use_version, local_version) - sys.exit(1) + valid = _boot_validate_versions(use_version, local_version) + sys.exit(0 if valid else 1) if not openpype_path: _print("*** Cannot get OpenPype path from database.") @@ -1035,7 +1061,7 @@ def boot(): if "print_versions" in commands: _boot_print_versions(OPENPYPE_ROOT) - sys.exit(1) + sys.exit(0) # ------------------------------------------------------------------------ # Find OpenPype versions @@ -1052,13 +1078,13 @@ def boot(): except RuntimeError as e: # no version to run - _print(f"!!! {e}") + _print(f"!!! {e}", True) sys.exit(1) # validate version - _print(f">>> Validating version [ {str(version_path)} ]") + _print(f">>> Validating version in frozen [ {str(version_path)} ]") result = bootstrap.validate_openpype_version(version_path) if not result[0]: - _print(f"!!! Invalid version: {result[1]}") + _print(f"!!! Invalid version: {result[1]}", True) sys.exit(1) _print("--- version is valid") else: @@ -1126,7 +1152,7 @@ def boot(): cli.main(obj={}, prog_name="openpype") except Exception: # noqa exc_info = sys.exc_info() - _print("!!! OpenPype crashed:") + _print("!!! 
OpenPype crashed:", True) traceback.print_exception(*exc_info) sys.exit(1) diff --git a/tests/conftest.py b/tests/conftest.py index 4f7c17244b..6e82c9917d 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -29,6 +29,11 @@ def pytest_addoption(parser): help="True - only setup test, do not run any tests" ) + parser.addoption( + "--mongo_url", action="store", default=None, + help="Provide url of the Mongo database." + ) + @pytest.fixture(scope="module") def test_data_folder(request): @@ -55,6 +60,11 @@ def setup_only(request): return request.config.getoption("--setup_only") +@pytest.fixture(scope="module") +def mongo_url(request): + return request.config.getoption("--mongo_url") + + @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_runtest_makereport(item, call): # execute all other hooks to obtain the report object diff --git a/tests/integration/hosts/maya/input/startup/userSetup.py b/tests/integration/hosts/maya/input/startup/userSetup.py new file mode 100644 index 0000000000..eb6e2411b5 --- /dev/null +++ b/tests/integration/hosts/maya/input/startup/userSetup.py @@ -0,0 +1,28 @@ +import logging +import sys + +from maya import cmds + +import pyblish.util + + +def setup_pyblish_logging(): + log = logging.getLogger("pyblish") + handler = logging.StreamHandler(sys.stdout) + formatter = logging.Formatter( + "pyblish (%(levelname)s) (line: %(lineno)d) %(name)s:" + "\n%(message)s" + ) + handler.setFormatter(formatter) + log.addHandler(handler) + + +def _run_publish_test_deferred(): + try: + setup_pyblish_logging() + pyblish.util.publish() + finally: + cmds.quit(force=True) + + +cmds.evalDeferred("_run_publish_test_deferred()", lowestPriority=True) diff --git a/tests/integration/hosts/maya/lib.py b/tests/integration/hosts/maya/lib.py index e7480e25fa..f27d516605 100644 --- a/tests/integration/hosts/maya/lib.py +++ b/tests/integration/hosts/maya/lib.py @@ -33,16 +33,16 @@ class MayaHostFixtures(HostFixtures): yield dest_path @pytest.fixture(scope="module") - def startup_scripts(self, monkeypatch_session, download_test_data): + def startup_scripts(self, monkeypatch_session): """Points Maya to userSetup file from input data""" - startup_path = os.path.join(download_test_data, - "input", - "startup") + startup_path = os.path.join( + os.path.dirname(__file__), "input", "startup" + ) original_pythonpath = os.environ.get("PYTHONPATH") - monkeypatch_session.setenv("PYTHONPATH", - "{}{}{}".format(startup_path, - os.pathsep, - original_pythonpath)) + monkeypatch_session.setenv( + "PYTHONPATH", + "{}{}{}".format(startup_path, os.pathsep, original_pythonpath) + ) @pytest.fixture(scope="module") def skip_compare_folders(self): diff --git a/tests/integration/hosts/maya/test_deadline_publish_in_maya.py b/tests/integration/hosts/maya/test_deadline_publish_in_maya.py index c5bf526f52..9332177944 100644 --- a/tests/integration/hosts/maya/test_deadline_publish_in_maya.py +++ b/tests/integration/hosts/maya/test_deadline_publish_in_maya.py @@ -32,7 +32,7 @@ class TestDeadlinePublishInMaya(MayaDeadlinePublishTestClass): # keep empty to locate latest installed variant or explicit APP_VARIANT = "" - TIMEOUT = 120 # publish timeout + TIMEOUT = 180 # publish timeout def test_db_asserts(self, dbcon, publish_finished): """Host and input data dependent expected results in DB.""" diff --git a/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py new file mode 100644 index 0000000000..57e2f78973 --- /dev/null +++ 
b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py
@@ -0,0 +1,106 @@
+import logging
+
+from tests.lib.assert_classes import DBAssert
+from tests.integration.hosts.nuke.lib import NukeDeadlinePublishTestClass
+
+log = logging.getLogger("test_publish_in_nuke")
+
+
+class TestDeadlinePublishInNukePrerender(NukeDeadlinePublishTestClass):
+    """Basic test case for publishing in Nuke and Deadline for prerender
+
+    It is different from `test_deadline_publish_in_nuke` as that one is for
+    the `render` family, so this test expects different subset names.
+
+    Uses generic TestCase to prepare fixtures for test data, testing DBs,
+    env vars.
+
+    !!!
+    It expects a path in the WriteNode starting with 'c:/projects'; the test
+    replaces it with the correct value in a temp folder.
+    Access the file path by selecting the WriteNode group, pressing
+    CTRL+Enter and updating the file input.
+    !!!
+
+    Opens Nuke, runs publish on the prepared workfile.
+
+    Then checks the content of the DB (if subset, version and
+    representations were created).
+    Checks the tmp folder if all expected files were published.
+
+    How to run:
+    (in cmd with activated {OPENPYPE_ROOT}/.venv)
+    {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py
+    runtests ../tests/integration/hosts/nuke  # noqa: E501
+
+    To check log/errors from the launched app's publish process, keep
+    PERSIST set to True and check the `test_openpype.logs` collection.
+    """
+    TEST_FILES = [
+        ("1aQaKo3cF-fvbTfvODIRFMxgherjbJ4Ql",
+         "test_nuke_deadline_publish_in_nuke_prerender.zip", "")
+    ]
+
+    APP_GROUP = "nuke"
+
+    TIMEOUT = 180  # publish timeout
+
+    # could be overwritten by command line arguments
+    # keep empty to locate latest installed variant or explicit
+    APP_VARIANT = ""
+    PERSIST = False  # True - keep test_db, test_openpype, outputted test files
+    TEST_DATA_FOLDER = None
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        failures = []
+
+        failures.append(DBAssert.count_of_types(dbcon, "version", 2))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
+
+        # prerender has only the default subset format `{family}{variant}`,
+        # Key01 is the used variant
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="prerenderKey01"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="workfileTest_task"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 2))
+
+        additional_args = {"context.subset": "workfileTest_task",
+                           "context.ext": "nk"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "context.ext": "exr"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        # prerender doesn't have review creation enabled by default
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "thumbnail"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "h264_mov"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        assert not any(failures)
+
+
+if __name__ == "__main__":
+    test_case = TestDeadlinePublishInNukePrerender()
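The subset names asserted above follow the default `{family}{variant}` template mentioned in the comment. A toy sketch (the helper below is hypothetical, for illustration only):

```python
# Hypothetical helper showing the default subset naming the test relies
# on: plain "{family}{variant}" with no task or asset tokens.
def build_subset_name(family, variant, template="{family}{variant}"):
    return template.format(family=family, variant=variant)


assert build_subset_name("prerender", "Key01") == "prerenderKey01"
assert build_subset_name("render", "Main") == "renderMain"
```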
diff --git a/tests/integration/hosts/nuke/test_publish_in_nuke.py b/tests/integration/hosts/nuke/test_publish_in_nuke.py
index bfd84e4fd5..b7bb8716c0 100644
--- a/tests/integration/hosts/nuke/test_publish_in_nuke.py
+++ b/tests/integration/hosts/nuke/test_publish_in_nuke.py
@@ -68,7 +68,7 @@ class TestPublishInNuke(NukeLocalPublishTestClass):
             name="workfileTest_task"))
 
         failures.append(
-            DBAssert.count_of_types(dbcon, "representation", 4))
+            DBAssert.count_of_types(dbcon, "representation", 3))
 
         additional_args = {"context.subset": "workfileTest_task",
                            "context.ext": "nk"}
@@ -85,7 +85,7 @@ class TestPublishInNuke(NukeLocalPublishTestClass):
         additional_args = {"context.subset": "renderTest_taskMain",
                            "name": "thumbnail"}
         failures.append(
-            DBAssert.count_of_types(dbcon, "representation", 1,
+            DBAssert.count_of_types(dbcon, "representation", 0,
                                     additional_args=additional_args))
 
         additional_args = {"context.subset": "renderTest_taskMain",
diff --git a/common/openpype_common/distribution/file_handler.py b/tests/lib/file_handler.py
similarity index 66%
rename from common/openpype_common/distribution/file_handler.py
rename to tests/lib/file_handler.py
index e649f143e9..07f6962c98 100644
--- a/common/openpype_common/distribution/file_handler.py
+++ b/tests/lib/file_handler.py
@@ -9,21 +9,23 @@ import hashlib
 import tarfile
 import zipfile
 
+import requests
 
-USER_AGENT = "openpype"
+USER_AGENT = "AYON-launcher"
 
 
 class RemoteFileHandler:
     """Download file from url, might be GDrive shareable link"""
-    IMPLEMENTED_ZIP_FORMATS = ['zip', 'tar', 'tgz',
-                               'tar.gz', 'tar.xz', 'tar.bz2']
+    IMPLEMENTED_ZIP_FORMATS = {
+        "zip", "tar", "tgz", "tar.gz", "tar.xz", "tar.bz2"
+    }
 
     @staticmethod
     def calculate_md5(fpath, chunk_size=10000):
         md5 = hashlib.md5()
-        with open(fpath, 'rb') as f:
-            for chunk in iter(lambda: f.read(chunk_size), b''):
+        with open(fpath, "rb") as f:
+            for chunk in iter(lambda: f.read(chunk_size), b""):
                 md5.update(chunk)
         return md5.hexdigest()
 
@@ -45,7 +47,7 @@ class RemoteFileHandler:
         h = hashlib.sha256()
         b = bytearray(128 * 1024)
         mv = memoryview(b)
-        with open(fpath, 'rb', buffering=0) as f:
+        with open(fpath, "rb", buffering=0) as f:
             for n in iter(lambda: f.readinto(mv), 0):
                 h.update(mv[:n])
         return h.hexdigest()
@@ -62,27 +64,32 @@ class RemoteFileHandler:
             return True
         if not hash_type:
             raise ValueError("Provide hash type, md5 or sha256")
-        if hash_type == 'md5':
+        if hash_type == "md5":
             return RemoteFileHandler.check_md5(fpath, hash_value)
         if hash_type == "sha256":
             return RemoteFileHandler.check_sha256(fpath, hash_value)
 
     @staticmethod
     def download_url(
-        url, root, filename=None,
-        sha256=None, max_redirect_hops=3
+        url,
+        root,
+        filename=None,
+        max_redirect_hops=3,
+        headers=None
     ):
-        """Download a file from a url and place it in root.
+        """Download a file from url and place it in root.
+
         Args:
             url (str): URL to download file from
             root (str): Directory to place downloaded file in
             filename (str, optional): Name to save the file under.
                 If None, use the basename of the URL
-            sha256 (str, optional): sha256 checksum of the download.
-                If None, do not check
-            max_redirect_hops (int, optional): Maximum number of redirect
+            max_redirect_hops (Optional[int]): Maximum number of redirect
                 hops allowed
+            headers (Optional[dict[str, str]]): Additional required headers
+                - Authentication etc.
""" + root = os.path.expanduser(root) if not filename: filename = os.path.basename(url) @@ -90,55 +97,44 @@ class RemoteFileHandler: os.makedirs(root, exist_ok=True) - # check if file is already present locally - if RemoteFileHandler.check_integrity(fpath, - sha256, hash_type="sha256"): - print('Using downloaded and verified file: ' + fpath) - return - # expand redirect chain if needed - url = RemoteFileHandler._get_redirect_url(url, - max_hops=max_redirect_hops) + url = RemoteFileHandler._get_redirect_url( + url, max_hops=max_redirect_hops, headers=headers) # check if file is located on Google Drive file_id = RemoteFileHandler._get_google_drive_file_id(url) if file_id is not None: return RemoteFileHandler.download_file_from_google_drive( - file_id, root, filename, sha256) + file_id, root, filename) # download the file try: - print('Downloading ' + url + ' to ' + fpath) - RemoteFileHandler._urlretrieve(url, fpath) - except (urllib.error.URLError, IOError) as e: - if url[:5] == 'https': - url = url.replace('https:', 'http:') - print('Failed download. Trying https -> http instead.' - ' Downloading ' + url + ' to ' + fpath) - RemoteFileHandler._urlretrieve(url, fpath) - else: - raise e + print(f"Downloading {url} to {fpath}") + RemoteFileHandler._urlretrieve(url, fpath, headers=headers) + except (urllib.error.URLError, IOError) as exc: + if url[:5] != "https": + raise exc - # check integrity of downloaded file - if not RemoteFileHandler.check_integrity(fpath, - sha256, hash_type="sha256"): - raise RuntimeError("File not found or corrupted.") + url = url.replace("https:", "http:") + print(( + "Failed download. Trying https -> http instead." + f" Downloading {url} to {fpath}" + )) + RemoteFileHandler._urlretrieve(url, fpath, headers=headers) @staticmethod - def download_file_from_google_drive(file_id, root, - filename=None, - sha256=None): + def download_file_from_google_drive( + file_id, root, filename=None + ): """Download a Google Drive file from and place it in root. Args: file_id (str): id of file to be downloaded root (str): Directory to place downloaded file in filename (str, optional): Name to save the file under. If None, use the id of the file. - sha256 (str, optional): sha256 checksum of the download. 
- If None, do not check """ # Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url # noqa - import requests + url = "https://docs.google.com/uc?export=download" root = os.path.expanduser(root) @@ -148,17 +144,16 @@ class RemoteFileHandler: os.makedirs(root, exist_ok=True) - if os.path.isfile(fpath) and RemoteFileHandler.check_integrity( - fpath, sha256, hash_type="sha256"): - print('Using downloaded and verified file: ' + fpath) + if os.path.isfile(fpath) and RemoteFileHandler.check_integrity(fpath): + print(f"Using downloaded and verified file: {fpath}") else: session = requests.Session() - response = session.get(url, params={'id': file_id}, stream=True) + response = session.get(url, params={"id": file_id}, stream=True) token = RemoteFileHandler._get_confirm_token(response) if token: - params = {'id': file_id, 'confirm': token} + params = {"id": file_id, "confirm": token} response = session.get(url, params=params, stream=True) response_content_generator = response.iter_content(32768) @@ -186,28 +181,28 @@ class RemoteFileHandler: destination_path = os.path.dirname(path) _, archive_type = os.path.splitext(path) - archive_type = archive_type.lstrip('.') + archive_type = archive_type.lstrip(".") - if archive_type in ['zip']: - print("Unzipping {}->{}".format(path, destination_path)) + if archive_type in ["zip"]: + print(f"Unzipping {path}->{destination_path}") zip_file = zipfile.ZipFile(path) zip_file.extractall(destination_path) zip_file.close() elif archive_type in [ - 'tar', 'tgz', 'tar.gz', 'tar.xz', 'tar.bz2' + "tar", "tgz", "tar.gz", "tar.xz", "tar.bz2" ]: - print("Unzipping {}->{}".format(path, destination_path)) - if archive_type == 'tar': - tar_type = 'r:' - elif archive_type.endswith('xz'): - tar_type = 'r:xz' - elif archive_type.endswith('gz'): - tar_type = 'r:gz' - elif archive_type.endswith('bz2'): - tar_type = 'r:bz2' + print(f"Unzipping {path}->{destination_path}") + if archive_type == "tar": + tar_type = "r:" + elif archive_type.endswith("xz"): + tar_type = "r:xz" + elif archive_type.endswith("gz"): + tar_type = "r:gz" + elif archive_type.endswith("bz2"): + tar_type = "r:bz2" else: - tar_type = 'r:*' + tar_type = "r:*" try: tar_file = tarfile.open(path, tar_type) except tarfile.ReadError: @@ -216,29 +211,35 @@ class RemoteFileHandler: tar_file.close() @staticmethod - def _urlretrieve(url, filename, chunk_size): + def _urlretrieve(url, filename, chunk_size=None, headers=None): + final_headers = {"User-Agent": USER_AGENT} + if headers: + final_headers.update(headers) + + chunk_size = chunk_size or 8192 with open(filename, "wb") as fh: with urllib.request.urlopen( - urllib.request.Request(url, - headers={"User-Agent": USER_AGENT})) \ - as response: + urllib.request.Request(url, headers=final_headers) + ) as response: for chunk in iter(lambda: response.read(chunk_size), ""): if not chunk: break fh.write(chunk) @staticmethod - def _get_redirect_url(url, max_hops): + def _get_redirect_url(url, max_hops, headers=None): initial_url = url - headers = {"Method": "HEAD", "User-Agent": USER_AGENT} - + final_headers = {"Method": "HEAD", "User-Agent": USER_AGENT} + if headers: + final_headers.update(headers) for _ in range(max_hops + 1): with urllib.request.urlopen( - urllib.request.Request(url, headers=headers)) as response: + urllib.request.Request(url, headers=final_headers) + ) as response: if response.url == url or response.url is None: return url - url = response.url + return response.url else: raise RecursionError( f"Request to 
{initial_url} exceeded {max_hops} redirects. "
@@ -248,7 +249,7 @@ def _get_confirm_token(response):
         for key, value in response.cookies.items():
-            if key.startswith('download_warning'):
+            if key.startswith("download_warning"):
                 return value
 
         # handle antivirus warning for big zips
diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py
index 300024dc98..277b332e19 100644
--- a/tests/lib/testing_classes.py
+++ b/tests/lib/testing_classes.py
@@ -12,7 +12,7 @@ import requests
 import re
 
 from tests.lib.db_handler import DBHandler
-from common.openpype_common.distribution.file_handler import RemoteFileHandler
+from tests.lib.file_handler import RemoteFileHandler
 from openpype.modules import ModulesManager
 from openpype.settings import get_project_settings
 
@@ -105,7 +105,7 @@ class ModuleUnitTest(BaseTest):
         yield path
 
     @pytest.fixture(scope="module")
-    def env_var(self, monkeypatch_session, download_test_data):
+    def env_var(self, monkeypatch_session, download_test_data, mongo_url):
         """Sets temporary env vars from json file."""
         env_url = os.path.join(download_test_data, "input",
                                "env_vars", "env_var.json")
@@ -129,6 +129,9 @@ class ModuleUnitTest(BaseTest):
             monkeypatch_session.setenv(key, str(value))
 
         #reset connection to openpype DB with new env var
+        if mongo_url:
+            monkeypatch_session.setenv("OPENPYPE_MONGO", mongo_url)
+
         import openpype.settings.lib as sett_lib
         sett_lib._SETTINGS_HANDLER = None
         sett_lib._LOCAL_SETTINGS_HANDLER = None
@@ -147,10 +150,9 @@ class ModuleUnitTest(BaseTest):
 
     @pytest.fixture(scope="module")
     def db_setup(self, download_test_data, env_var, monkeypatch_session,
-                 request):
+                 request, mongo_url):
         """Restore prepared MongoDB dumps into selected DB."""
         backup_dir = os.path.join(download_test_data, "input", "dumps")
-        uri = os.environ.get("OPENPYPE_MONGO")
+        uri = mongo_url or os.environ.get("OPENPYPE_MONGO")
         db_handler = DBHandler(uri)
 
         db_handler.setup_from_dump(self.TEST_DB_NAME, backup_dir,
diff --git a/tests/unit/openpype/default_modules/royal_render/test_rr_job.py b/tests/unit/openpype/default_modules/royal_render/test_rr_job.py
deleted file mode 100644
index ab8b1bfd50..0000000000
--- a/tests/unit/openpype/default_modules/royal_render/test_rr_job.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Test suite for User Settings."""
-# import pytest
-# from openpype.modules import ModulesManager
-
-
-def test_rr_job():
-    # manager = ModulesManager()
-    # rr_module = manager.modules_by_name["royalrender"]
-    ...
diff --git a/tests/unit/openpype/hosts/photoshop/test_lib.py b/tests/unit/openpype/hosts/photoshop/test_lib.py
new file mode 100644
index 0000000000..ad4feb42ae
--- /dev/null
+++ b/tests/unit/openpype/hosts/photoshop/test_lib.py
@@ -0,0 +1,25 @@
+import pytest
+
+from openpype.hosts.photoshop.lib import clean_subset_name
+
+"""
+Tests cleanup of the unused layer placeholder ({layer}) from a subset name.
+Layer differentiation might be desired in the subset name, but in some cases
+it might not be used (in `auto_image` - only a single image without layer
+differentiation, a single image instance created without toggled use of
+subset name, etc.)
+""" + + +def test_no_layer_placeholder(): + clean_subset = clean_subset_name("imageMain") + assert "imageMain" == clean_subset + + +@pytest.mark.parametrize("subset_name", + ["imageMain{Layer}", + "imageMain_{layer}", # trailing _ + "image{Layer}Main", + "image{LAYER}Main"]) +def test_not_used_layer_placeholder(subset_name): + clean_subset = clean_subset_name(subset_name) + assert "imageMain" == clean_subset diff --git a/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py b/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py index 17e47c9f64..f472b8052a 100644 --- a/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py +++ b/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py @@ -19,7 +19,7 @@ import logging from pyblish.api import Instance as PyblishInstance from tests.lib.testing_classes import BaseTest -from openpype.plugins.publish.validate_sequence_frames import ( +from openpype.hosts.unreal.plugins.publish.validate_sequence_frames import ( ValidateSequenceFrames ) @@ -38,7 +38,13 @@ class TestValidateSequenceFrames(BaseTest): data = { "frameStart": 1001, "frameEnd": 1002, - "representations": [] + "representations": [], + "assetEntity": { + "data": { + "clipIn": 1001, + "clipOut": 1002, + } + } } yield Instance @@ -58,6 +64,7 @@ class TestValidateSequenceFrames(BaseTest): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1001 + instance.data["assetEntity"]["data"]["clipOut"] = 1001 plugin.process(instance) @@ -84,49 +91,11 @@ class TestValidateSequenceFrames(BaseTest): plugin.process(instance) - @pytest.mark.parametrize("files", - [["Main_beauty.1001.v001.exr", - "Main_beauty.1002.v001.exr"]]) - def test_validate_sequence_frames_wrong_name(self, instance, - plugin, files): - # tests for names with number inside, caused clique failure before - representations = [ - { - "ext": "exr", - "files": files, - } - ] - instance.data["representations"] = representations - - with pytest.raises(AssertionError) as excinfo: - plugin.process(instance) - assert ("Must detect single collection" in - str(excinfo.value)) - - @pytest.mark.parametrize("files", - [["Main_beauty.v001.1001.ass.gz", - "Main_beauty.v001.1002.ass.gz"]]) - def test_validate_sequence_frames_possible_wrong_name( - self, instance, plugin, files): - # currently pattern fails on extensions with dots - representations = [ - { - "files": files, - } - ] - instance.data["representations"] = representations - - with pytest.raises(AssertionError) as excinfo: - plugin.process(instance) - assert ("Must not have remainder" in - str(excinfo.value)) - @pytest.mark.parametrize("files", [["Main_beauty.v001.1001.ass.gz", "Main_beauty.v001.1002.ass.gz"]]) def test_validate_sequence_frames__correct_ext( self, instance, plugin, files): - # currently pattern fails on extensions with dots representations = [ { "ext": "ass.gz", @@ -147,6 +116,7 @@ class TestValidateSequenceFrames(BaseTest): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 plugin.process(instance) @@ -160,6 +130,7 @@ class TestValidateSequenceFrames(BaseTest): ] instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 with pytest.raises(ValueError) as excinfo: plugin.process(instance) @@ -175,6 +146,7 @@ class TestValidateSequenceFrames(BaseTest): ] 
instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 with pytest.raises(AssertionError) as excinfo: plugin.process(instance) @@ -195,6 +167,7 @@ class TestValidateSequenceFrames(BaseTest): instance.data["slate"] = True instance.data["representations"] = representations instance.data["frameEnd"] = 1003 + instance.data["assetEntity"]["data"]["clipOut"] = 1003 plugin.process(instance) diff --git a/tests/unit/openpype/lib/test_delivery.py b/tests/unit/openpype/lib/test_delivery.py index 04a71655e3..f1e435f3f8 100644 --- a/tests/unit/openpype/lib/test_delivery.py +++ b/tests/unit/openpype/lib/test_delivery.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- """Test suite for delivery functions.""" -from openpype.lib.delivery import collect_frames +from openpype.lib import collect_frames def test_collect_frames_multi_sequence(): @@ -153,4 +153,3 @@ def test_collect_frames_single_file(): print(ret) assert ret == expected, "Not matching" - diff --git a/tests/unit/openpype/lib/test_event_system.py b/tests/unit/openpype/lib/test_event_system.py new file mode 100644 index 0000000000..aa3f929065 --- /dev/null +++ b/tests/unit/openpype/lib/test_event_system.py @@ -0,0 +1,83 @@ +from openpype.lib.events import EventSystem, QueuedEventSystem + + +def test_default_event_system(): + output = [] + expected_output = [3, 2, 1] + event_system = EventSystem() + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + assert output == expected_output, ( + "Callbacks were not called in correct order") + + +def test_base_event_system_queue(): + output = [] + expected_output = [1, 2, 3] + event_system = QueuedEventSystem() + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + assert output == expected_output, ( + "Callbacks were not called in correct order") + + +def test_manual_event_system_queue(): + output = [] + expected_output = [1, 2, 3] + event_system = QueuedEventSystem(auto_execute=False) + + def callback_1(): + event_system.emit("topic.2", {}, None) + output.append(1) + + def callback_2(): + event_system.emit("topic.3", {}, None) + output.append(2) + + def callback_3(): + output.append(3) + + event_system.add_callback("topic.1", callback_1) + event_system.add_callback("topic.2", callback_2) + event_system.add_callback("topic.3", callback_3) + + event_system.emit("topic.1", {}, None) + + while True: + if event_system.process_next_event() is None: + break + + assert output == expected_output, ( + "Callbacks were not called in correct order") diff --git a/tests/unit/openpype/modules/sync_server/test_site_operations.py b/tests/unit/openpype/modules/sync_server/test_site_operations.py index 6a861100a4..c4a83e33a6 100644 --- a/tests/unit/openpype/modules/sync_server/test_site_operations.py +++ 
b/tests/unit/openpype/modules/sync_server/test_site_operations.py @@ -12,16 +12,19 @@ removes temporary databases (?) """ import pytest +from bson.objectid import ObjectId from tests.lib.testing_classes import ModuleUnitTest -from bson.objectid import ObjectId + +from openpype.modules.sync_server.utils import SiteAlreadyPresentError + class TestSiteOperation(ModuleUnitTest): REPRESENTATION_ID = "60e578d0c987036c6a7b741d" - TEST_FILES = [("1eCwPljuJeOI8A3aisfOIBKKjcmIycTEt", + TEST_FILES = [("1FHE70Hi7y05LLT_1O3Y6jGxwZGXKV9zX", "test_site_operations.zip", '')] @pytest.fixture(scope="module") @@ -71,7 +74,7 @@ class TestSiteOperation(ModuleUnitTest): @pytest.mark.usefixtures("setup_sync_server_module") def test_add_site_again(self, dbcon, setup_sync_server_module): """Depends on test_add_site, must throw exception.""" - with pytest.raises(ValueError): + with pytest.raises(SiteAlreadyPresentError): setup_sync_server_module.add_site(self.TEST_PROJECT_NAME, self.REPRESENTATION_ID, site_name='test_site') diff --git a/tests/unit/openpype/pipeline/test_colorspace.py b/tests/unit/openpype/pipeline/test_colorspace.py index c22acee2d4..85faa8ff5d 100644 --- a/tests/unit/openpype/pipeline/test_colorspace.py +++ b/tests/unit/openpype/pipeline/test_colorspace.py @@ -26,12 +26,12 @@ class TestPipelineColorspace(TestPipeline): Example: cd to OpenPype repo root dir - poetry run python ./start.py runtests ../tests/unit/openpype/pipeline - """ + poetry run python ./start.py runtests /tests/unit/openpype/pipeline/test_colorspace.py + """ # noqa: E501 TEST_FILES = [ ( - "1Lf-mFxev7xiwZCWfImlRcw7Fj8XgNQMh", + "1csqimz8bbNcNgxtEXklLz6GRv91D3KgA", "test_pipeline_colorspace.zip", "" ) @@ -132,14 +132,14 @@ class TestPipelineColorspace(TestPipeline): path_1 = "renderCompMain_ACES2065-1.####.exr" expected_1 = "ACES2065-1" ret_1 = colorspace.parse_colorspace_from_filepath( - path_1, "nuke", "test_project", project_settings=project_settings + path_1, config_path=config_path_asset ) assert ret_1 == expected_1, f"Not matching colorspace {expected_1}" path_2 = "renderCompMain_BMDFilm_WideGamut_Gen5.mov" expected_2 = "BMDFilm WideGamut Gen5" ret_2 = colorspace.parse_colorspace_from_filepath( - path_2, "nuke", "test_project", project_settings=project_settings + path_2, config_path=config_path_asset ) assert ret_2 == expected_2, f"Not matching colorspace {expected_2}" @@ -185,5 +185,70 @@ class TestPipelineColorspace(TestPipeline): assert expected_hiero == hiero_file_rules, ( f"Not matching file rules {expected_hiero}") + def test_get_imageio_colorspace_from_filepath_p3(self, project_settings): + """Test Colorspace from filepath with python 3 compatibility mode + + Also test ocio v2 file rules + """ + nuke_filepath = "renderCompMain_baking_h264.mp4" + hiero_filepath = "prerenderCompMain.mp4" + + expected_nuke = "Camera Rec.709" + expected_hiero = "Gamma 2.2 Rec.709 - Texture" + + nuke_colorspace = colorspace.get_colorspace_name_from_filepath( + nuke_filepath, + "nuke", + "test_project", + project_settings=project_settings + ) + assert expected_nuke == nuke_colorspace, ( + f"Not matching colorspace {expected_nuke}") + + hiero_colorspace = colorspace.get_colorspace_name_from_filepath( + hiero_filepath, + "hiero", + "test_project", + project_settings=project_settings + ) + assert expected_hiero == hiero_colorspace, ( + f"Not matching colorspace {expected_hiero}") + + def test_get_imageio_colorspace_from_filepath_python2mode( + self, project_settings): + """Test Colorspace from filepath with python 2 compatibility mode + + Also 
test ocio v2 file rules
+        """
+        nuke_filepath = "renderCompMain_baking_h264.mp4"
+        hiero_filepath = "prerenderCompMain.mp4"
+
+        expected_nuke = "Camera Rec.709"
+        expected_hiero = "Gamma 2.2 Rec.709 - Texture"
+
+        # switch to python 2 compatibility mode
+        colorspace.CachedData.has_compatible_ocio_package = False
+
+        nuke_colorspace = colorspace.get_colorspace_name_from_filepath(
+            nuke_filepath,
+            "nuke",
+            "test_project",
+            project_settings=project_settings
+        )
+        assert expected_nuke == nuke_colorspace, (
+            f"Not matching colorspace {expected_nuke}")
+
+        hiero_colorspace = colorspace.get_colorspace_name_from_filepath(
+            hiero_filepath,
+            "hiero",
+            "test_project",
+            project_settings=project_settings
+        )
+        assert expected_hiero == hiero_colorspace, (
+            f"Not matching colorspace {expected_hiero}")
+
+        # return to python 3 compatibility mode
+        colorspace.CachedData.has_compatible_ocio_package = None
+
 test_case = TestPipelineColorspace()
diff --git a/tools/docker_build.ps1 b/tools/docker_build.ps1
new file mode 100644
index 0000000000..392165288c
--- /dev/null
+++ b/tools/docker_build.ps1
@@ -0,0 +1,98 @@
+$current_dir = Get-Location
+$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
+$repo_root = (Get-Item $script_dir).parent.FullName
+
+$env:PSModulePath = $env:PSModulePath + ";$($repo_root)\tools\modules\powershell"
+
+function Exit-WithCode($exitcode) {
+   # Only exit this host process if it's a child of another PowerShell parent process...
+   $parentPID = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$PID" | Select-Object -Property ParentProcessId).ParentProcessId
+   $parentProcName = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$parentPID" | Select-Object -Property Name).Name
+   if ('powershell.exe' -eq $parentProcName) { $host.SetShouldExit($exitcode) }
+
+   exit $exitcode
+}
+
+function Restore-Cwd() {
+    $tmp_current_dir = Get-Location
+    if ("$tmp_current_dir" -ne "$current_dir") {
+        Write-Color -Text ">>> ", "Restoring current directory" -Color Green, Gray
+        Set-Location -Path $current_dir
+    }
+}
+
+function Get-Container {
+    if (-not (Test-Path -PathType Leaf -Path "$($repo_root)\build\docker-image.id")) {
+        Write-Color -Text "!!! ", "Docker command failed, cannot find image id." -Color Red, Yellow
+        Restore-Cwd
+        Exit-WithCode 1
+    }
+    $id = Get-Content "$($repo_root)\build\docker-image.id"
+    Write-Color -Text ">>> ", "Creating container from image id ", "[", $id, "]" -Color Green, Gray, White, Cyan, White
+    $cid = docker create $id bash
+    if ($LASTEXITCODE -ne 0) {
+        Write-Color -Text "!!! ", "Cannot create container." -Color Red, Yellow
+        Restore-Cwd
+        Exit-WithCode 1
+    }
+    return $cid
+}
+
+function Change-Cwd() {
+    Set-Location -Path $repo_root
+}
+
+function New-DockerBuild {
+    $version_file = Get-Content -Path "$($repo_root)\openpype\version.py"
+    $result = [regex]::Matches($version_file, '__version__ = "(?<version>\d+\.\d+.\d+.*)"')
+    $openpype_version = $result[0].Groups['version'].Value
+    $startTime = [int][double]::Parse((Get-Date -UFormat %s))
+    Write-Color -Text ">>> ", "Building OpenPype using Docker ..." -Color Green, Gray, White
+    $variant = $args[0]
+    if ($variant.Length -eq 0) {
+        $dockerfile = "$($repo_root)\Dockerfile"
+    } else {
+        $dockerfile = "$( $repo_root )\Dockerfile.$variant"
+    }
+    if (-not (Test-Path -PathType Leaf -Path $dockerfile)) {
+        Write-Color -Text "!!! ", "Dockerfile for specified platform ", "[", $variant, "]", "doesn't exist." -Color Red, Yellow, Cyan, White, Cyan, Yellow
+        Restore-Cwd
+        Exit-WithCode 1
+    }
+    Write-Color -Text ">>> ", "Using Dockerfile for ", "[ ", $variant, " ]" -Color Green, Gray, White, Cyan, White
+
+    $build_dir = "$($repo_root)\build"
+    if (-not(Test-Path $build_dir)) {
+        New-Item -ItemType Directory -Path $build_dir
+    }
+    Write-Color -Text "--- ", "Cleaning build directory ..." -Color Yellow, Gray
+    try {
+        Remove-Item -Recurse -Force "$($build_dir)\*"
+    } catch {
+        Write-Color -Text "!!! ", "Cannot clean build directory, possibly because process is using it." -Color Red, Gray
+        Write-Color -Text $_.Exception.Message -Color Red
+        Exit-WithCode 1
+    }
+
+    Write-Color -Text ">>> ", "Running Docker build ..." -Color Green, Gray, White
+    docker build --pull --iidfile $repo_root/build/docker-image.id --build-arg BUILD_DATE=$(Get-Date -UFormat %Y-%m-%dT%H:%M:%SZ) --build-arg VERSION=$openpype_version -t pypeclub/openpype:$openpype_version -f $dockerfile .
+    if ($LASTEXITCODE -ne 0) {
+        Write-Color -Text "!!! ", "Docker command failed.", $LASTEXITCODE -Color Red, Yellow, Red
+        Restore-Cwd
+        Exit-WithCode 1
+    }
+    Write-Color -Text ">>> ", "Copying build from container ..." -Color Green, Gray, White
+    $cid = Get-Container
+
+    docker cp "$($cid):/opt/openpype/build/exe.linux-x86_64-3.9" "$($repo_root)/build"
+    docker cp "$($cid):/opt/openpype/build/build.log" "$($repo_root)/build"
+
+    $endTime = [int][double]::Parse((Get-Date -UFormat %s))
+    try {
+        New-BurntToastNotification -AppLogo "$repo_root/openpype/resources/icons/openpype_icon.png" -Text "OpenPype build complete!", "All done in $( $endTime - $startTime ) secs. You will find OpenPype and build log in build directory."
+    } catch {}
+    Write-Color -Text "*** ", "All done in ", $($endTime - $startTime), " secs. You will find OpenPype and build log in ", "'.\build'", " directory."
-Color Green, Gray, White, Gray, White, Gray +} + +Change-Cwd +New-DockerBuild $ARGS diff --git a/tools/fetch_thirdparty_libs.py b/tools/fetch_thirdparty_libs.py index 70257caa46..c2dc4636d0 100644 --- a/tools/fetch_thirdparty_libs.py +++ b/tools/fetch_thirdparty_libs.py @@ -67,40 +67,45 @@ def _print(msg: str, message_type: int = 0) -> None: print(f"{header}{msg}") -def install_qtbinding(pyproject, openpype_root, platform_name): - _print("Handling Qt binding framework ...") - qtbinding_def = pyproject["openpype"]["qtbinding"][platform_name] - package = qtbinding_def["package"] - version = qtbinding_def.get("version") - - qtbinding_arg = None +def _pip_install(openpype_root, package, version=None): + arg = None if package and version: - qtbinding_arg = f"{package}=={version}" + arg = f"{package}=={version}" elif package: - qtbinding_arg = package + arg = package - if not qtbinding_arg: - _print("Didn't find Qt binding to install") + if not arg: + _print("Couldn't find package to install") sys.exit(1) - _print(f"We'll install {qtbinding_arg}") + _print(f"We'll install {arg}") python_vendor_dir = openpype_root / "vendor" / "python" try: subprocess.run( [ sys.executable, - "-m", "pip", "install", "--upgrade", qtbinding_arg, + "-m", "pip", "install", "--upgrade", arg, "-t", str(python_vendor_dir) ], check=True, stdout=subprocess.DEVNULL ) except subprocess.CalledProcessError as e: - _print("Error during PySide2 installation.", 1) + _print(f"Error during {package} installation.", 1) _print(str(e), 1) sys.exit(1) + +def install_qtbinding(pyproject, openpype_root, platform_name): + _print("Handling Qt binding framework ...") + qtbinding_def = pyproject["openpype"]["qtbinding"][platform_name] + package = qtbinding_def["package"] + version = qtbinding_def.get("version") + _pip_install(openpype_root, package, version) + + python_vendor_dir = openpype_root / "vendor" / "python" + # Remove libraries for QtSql which don't have available libraries # by default and Postgre library would require to modify rpath of # dependency @@ -112,6 +117,13 @@ def install_qtbinding(pyproject, openpype_root, platform_name): os.remove(str(filepath)) +def install_runtime_dependencies(pyproject, openpype_root): + _print("Installing Runtime Dependencies ...") + runtime_deps = pyproject["openpype"]["runtime-deps"] + for package, version in runtime_deps.items(): + _pip_install(openpype_root, package, version) + + def install_thirdparty(pyproject, openpype_root, platform_name): _print("Processing third-party dependencies ...") try: @@ -221,6 +233,7 @@ def main(): pyproject = toml.load(openpype_root / "pyproject.toml") platform_name = platform.system().lower() install_qtbinding(pyproject, openpype_root, platform_name) + install_runtime_dependencies(pyproject, openpype_root) install_thirdparty(pyproject, openpype_root, platform_name) end_time = time.time_ns() total_time = (end_time - start_time) / 1000000000 diff --git a/website/docs/admin_hosts_aftereffects.md b/website/docs/admin_hosts_aftereffects.md index 974428fe06..72fdb32faf 100644 --- a/website/docs/admin_hosts_aftereffects.md +++ b/website/docs/admin_hosts_aftereffects.md @@ -18,6 +18,10 @@ Location: Settings > Project > AfterEffects ## Publish plugins +### Collect Review + +Enable/disable creation of auto instance of review. 
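For context on the `install_runtime_dependencies` helper added in `tools/fetch_thirdparty_libs.py` above: it assumes `pyproject.toml` carries a table under `openpype` / `runtime-deps` that maps package names to versions, which `_pip_install()` turns into `package==version` arguments. A minimal sketch of that data flow — the package entries printed here are whatever the repository's `pyproject.toml` actually contains, nothing is hard-coded:

```python
# Minimal sketch of the data shape install_runtime_dependencies() consumes.
# Requires the repo's pyproject.toml next to the script; "toml" is already a
# dependency of fetch_thirdparty_libs.py.
import toml

pyproject = toml.load("pyproject.toml")
runtime_deps = pyproject["openpype"]["runtime-deps"]

for package, version in runtime_deps.items():
    # _pip_install() expands this into
    # "python -m pip install --upgrade package==version -t vendor/python".
    print(f"{package}=={version}")
```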
+
 ### Validate Scene Settings
 
 #### Skip Resolution Check for Tasks
@@ -28,6 +32,10 @@ Set regex pattern(s) to look for in a Task name to skip resolution check against
 
 Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd` check against values from DB.
 
+### ValidateContainers
+
+By default this validator looks for loaded items with a version lower than the latest. This validator is context wide, so it must be disabled via the Context button.
+
 ### AfterEffects Submit to Deadline
 
 * `Use Published scene` - Set to True (green) when Deadline should take published scene as a source instead of uploaded local one.
diff --git a/website/docs/admin_hosts_houdini.md b/website/docs/admin_hosts_houdini.md
index 64c54db591..18c390e07f 100644
--- a/website/docs/admin_hosts_houdini.md
+++ b/website/docs/admin_hosts_houdini.md
@@ -3,9 +3,36 @@ id: admin_hosts_houdini
 title: Houdini
 sidebar_label: Houdini
 ---
+## General Settings
+### Houdini Vars
+
+Allows admins to define a list of vars (e.g. JOB) with (dynamic) values that will be updated on context changes, e.g. when switching to another asset or task.
+
+Using template keys is supported, but capitalization variants of formatting keys are not, e.g. `{Asset}` and `{ASSET}` won't work.
+
+:::note
+If the `Treat as directory` toggle is activated, OpenPype will treat the given value as a folder path.
+
+If the folder does not exist on a context change, it will be created by this feature, so the path will always point to an existing folder.
+:::
+
+Disabling the `Update Houdini vars on context change` feature leaves all Houdini vars unmanaged, so no updates will occur on context changes.
+
+> If `$JOB` is present in the Houdini var list and has an empty value, OpenPype will set its value to `$HIP`.
+
+:::note
+For consistency reasons we always force all vars to be uppercase, e.g. `myvar` will be `MYVAR`.
+:::
+
+![update-houdini-vars-context-change](assets/houdini/update-houdini-vars-context-change.png)
+(An illustrative sketch of this behaviour follows a little further below.)
+
 ## Shelves Manager
 You can add your custom shelf set into Houdini by setting your shelf sets, shelves and tools in **Houdini -> Shelves Manager**.
 ![Custom menu definition](assets/houdini-admin_shelvesmanager.png)
-The Shelf Set Path is used to load a .shelf file to generate your shelf set. If the path is specified, you don't have to set the shelves and tools.
\ No newline at end of file
+The Shelf Set Path is used to load a .shelf file to generate your shelf set. If the path is specified, you don't have to set the shelves and tools.
diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md
index 700822843f..93acf316c2 100644
--- a/website/docs/admin_hosts_maya.md
+++ b/website/docs/admin_hosts_maya.md
@@ -113,7 +113,8 @@ This is useful to fix some specific renderer glitches and advanced hacking of Ma
 #### Namespace and Group Name
 
 Here you can create your own custom naming for the reference loader.
-The custom naming is split into two parts: namespace and group name. If you don't set the namespace or the group name, an error will occur.
+The custom naming is split into two parts: namespace and group name. If you don't set the namespace, an error will occur.
+The group name can be left empty, in which case no wrapping group will be created for the loaded item.
 
 Here are the different variables you can use:

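Returning to the Houdini vars feature described in `admin_hosts_houdini.md` above: the following is a hedged sketch — not OpenPype's actual implementation — of what updating one managed var on a context change amounts to. The `update_context_var` function and its parameters are invented for illustration; only `hou.getenv`/`hou.putenv` are real Houdini API calls.

```python
import os
import hou  # only available inside a running Houdini session


def update_context_var(name, template, context, treat_as_directory=False):
    """Illustrative only: resolve a template value and push it into Houdini."""
    # Vars are always forced to uppercase, e.g. "myvar" -> "MYVAR".
    name = name.upper()
    # Template keys must match exactly; {Asset}/{ASSET} variants won't work.
    value = template.format(**context)
    if name == "JOB" and not value:
        # An empty JOB falls back to $HIP.
        value = hou.getenv("HIP") or ""
    if treat_as_directory and value and not os.path.isdir(value):
        # "Treat as directory" keeps the var pointing at an existing folder.
        os.makedirs(value)
    hou.putenv(name, value)


# Hypothetical usage on a context change:
# update_context_var(
#     "JOB", "{root}/{project}/{asset}/work",
#     {"root": "/proj", "project": "demo", "asset": "hero"},
#     treat_as_directory=True)
```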
diff --git a/website/docs/admin_hosts_photoshop.md b/website/docs/admin_hosts_photoshop.md
index de684f01d2..d79789760e 100644
--- a/website/docs/admin_hosts_photoshop.md
+++ b/website/docs/admin_hosts_photoshop.md
@@ -33,7 +33,6 @@ Provides list of [variants](artist_concepts.md#variant) that will be shown to an
 Provides simplified publishing process. It will create single `image` instance for artist automatically. This instance will produce flatten image from all visible layers in a workfile.
 
-- Subset template for flatten image - provide template for subset name for this instance (example `imageBeauty`)
 - Review - should be separate review created for this instance
 
 ### Create Review
@@ -111,11 +110,11 @@ Set Byte limit for review file. Applicable if gigantic `image` instances are pro
 
 #### Extract jpg Options
 
-Handles tags for produced `.jpg` representation. `Create review` and `Add review to Ftrack` are defaults. 
+Handles tags for produced `.jpg` representation. `Create review` and `Add review to Ftrack` are defaults.
 
 #### Extract mov Options
 
-Handles tags for produced `.mov` representation. `Create review` and `Add review to Ftrack` are defaults. 
+Handles tags for produced `.mov` representation. `Create review` and `Add review to Ftrack` are defaults.
 
 ### Workfile Builder
 
@@ -124,4 +123,4 @@ Allows to open prepared workfile for an artist when no workfile exists. Useful t
 Could be configured per `Task type`, eg. `composition` task type could use different `.psd` template file than `art` task. Workfile template must be accessible for all artists.
 
-(Currently not handled by [SiteSync](module_site_sync.md))
\ No newline at end of file
+(Currently not handled by [SiteSync](module_site_sync.md))
diff --git a/website/docs/admin_hosts_resolve.md b/website/docs/admin_hosts_resolve.md
index 09e7df1d9f..8bb8440f78 100644
--- a/website/docs/admin_hosts_resolve.md
+++ b/website/docs/admin_hosts_resolve.md
@@ -4,100 +4,38 @@ title: DaVinci Resolve Setup
 sidebar_label: DaVinci Resolve
 ---
 
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+:::warning
+Only Resolve Studio is supported due to Python API limitation in Resolve (free).
+:::
 
 ## Resolve requirements
 
 Due to the way Resolve handles Python and Python scripts, there are a few required steps that need to be done on any machine that will be using OpenPype with Resolve.
 
-### Installing Resolve's own python 3.6 interpreter.
-Resolve uses a hardcoded method to look for the python executable path. All of tho following paths are defined automatically by Python msi installer. We are using Python 3.6.2.
-
+## Basic setup
-
+- Supported version is up to v18
+- Install Python 3.6.2 (latest tested v17) or up to 3.9.13 (latest tested on v18)
+- pip install PySide2:
+  - Python 3.9.*: open terminal and go to python.exe directory, then `python -m pip install PySide2`
+- pip install OpenTimelineIO:
+  - Python 3.9.*: open terminal and go to python.exe directory, then `python -m pip install OpenTimelineIO`
+  - Python 3.6: open terminal and go to python.exe directory, then `python -m pip install git+https://github.com/PixarAnimationStudios/OpenTimelineIO.git@5aa24fbe89d615448876948fe4b4900455c9a3e8` and move built files from `./Lib/site-packages/opentimelineio/cxx-libs/bin and lib` to `./Lib/site-packages/opentimelineio/`. I built it on a Win10 machine with Visual Studio Community 2019 and
+  ![image](https://user-images.githubusercontent.com/40640033/102792588-ffcb1c80-43a8-11eb-9c6b-bf2114ed578e.png) with CMake installed in PATH.
+- make sure Resolve Fusion (Fusion Tab/menu/Fusion/Fusion Settings) is set to Python 3.6
+  ![image](https://user-images.githubusercontent.com/40640033/102631545-280b0f00-414e-11eb-89fc-98ac268d209d.png)
+- Open OpenPype **Tray/Admin/Studio settings** > `applications/resolve/environment` and add Python3 path to `RESOLVE_PYTHON3_HOME` platform related.
-
-
+## Editorial setup
-
+This is how it looks on my testing project timeline
+![image](https://user-images.githubusercontent.com/40640033/102637638-96ec6600-4156-11eb-9656-6e8e3ce4baf8.png)
+Notice I had renamed tracks to `main` (holding metadata markers) and `review`, used for generating review data with ffmpeg conversion to a jpg sequence.
-
-
-`%LOCALAPPDATA%\Programs\Python\Python36`
-
-
-
-`/opt/Python/3.6/bin`
-
-
-
-`~/Library/Python/3.6/bin`
-
-
-
-
-
-### Installing PySide2 into python 3.6 for correct gui work
-
-OpenPype is using its own window widget inside Resolve, for that reason PySide2 has to be installed into the python 3.6 (as explained above).
-
-
-
-
-
-paste to any terminal of your choice
-
-```bash
-%LOCALAPPDATA%\Programs\Python\Python36\python.exe -m pip install PySide2
-```
-
-
-
-paste to any terminal of your choice
-
-```bash
-/opt/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
-paste to any terminal of your choice
-
-```bash
-~/Library/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
- -### Set Resolve's Fusion settings for Python 3.6 interpereter - -
- - -As it is shown in below picture you have to go to Fusion Tab and then in Fusion menu find Fusion Settings. Go to Fusion/Script and find Default Python Version and switch to Python 3.6 - -
- -
- -![Create menu](assets/resolve_fusion_tab.png) -![Create menu](assets/resolve_fusion_menu.png) -![Create menu](assets/resolve_fusion_script_settings.png) - -
-
-
\ No newline at end of file
+1. you need to start the OpenPype menu from Resolve/EditTab/Menu/Workspace/Scripts/Comp/**__OpenPype_Menu__**
+2. then select any clips in the `main` track and change their color to `Chocolate`
+3. in the OpenPype Menu select `Create`
+4. in the Creator select `Create Publishable Clip [New]` (temporary name)
+5. set `Rename clips` to True, Master Track to `main` and Use review track to `review` as in the picture
+   ![image](https://user-images.githubusercontent.com/40640033/102643773-0d419600-4160-11eb-919e-9c2be0aecab8.png)
+6. after you hit `ok` all clips are colored to `pink` and marked with an OpenPype metadata tag
+7. hit `Publish` in the OpenPype menu and see that everything has been collected correctly. That is the last step for now as the rest is work in progress. Next steps will follow.
diff --git a/website/docs/admin_openpype_commands.md b/website/docs/admin_openpype_commands.md
index 131b6c0a51..a149d78aa2 100644
--- a/website/docs/admin_openpype_commands.md
+++ b/website/docs/admin_openpype_commands.md
@@ -40,7 +40,6 @@ For more information [see here](admin_use.md#run-openpype).
 | module | Run command line arguments for modules. | |
 | repack-version | Tool to re-create version zip. | [📑](#repack-version-arguments) |
 | tray | Launch OpenPype Tray. | [📑](#tray-arguments)
-| launch | Launch application in Pype environment. | [📑](#launch-arguments) |
 | publish | Pype takes JSON from provided path and use it to publish data in it. | [📑](#publish-arguments) |
 | extractenvironments | Extract environment variables for entered context to a json file. | [📑](#extractenvironments-arguments) |
 | run | Execute given python script within OpenPype environment. | [📑](#run-arguments) |
@@ -54,26 +53,6 @@ For more information [see here](admin_use.md#run-openpype).
 ```shell
 openpype_console tray
 ```
---
-
-### `launch` arguments {#launch-arguments}
-
-| Argument | Description |
-| --- | --- |
-| `--app` | Application name - this should be the key for application from Settings. |
-| `--project` | Project name (default taken from `AVALON_PROJECT` if set) |
-| `--asset` | Asset name (default taken from `AVALON_ASSET` if set) |
-| `--task` | Task name (default taken from `AVALON_TASK` is set) |
-| `--tools` | *Optional: Additional tools to add* |
-| `--user` | *Optional: User on behalf to run* |
-| `--ftrack-server` / `-fs` | *Optional: Ftrack server URL* |
-| `--ftrack-user` / `-fu` | *Optional: Ftrack user* |
-| `--ftrack-key` / `-fk` | *Optional: Ftrack API key* |
-
-For example to run Python interactive console in Pype context:
-```shell
-pype launch --app python --project my_project --asset my_asset --task my_task
-```
---
 
 ### `publish` arguments {#publish-arguments}
diff --git a/website/docs/admin_settings_local.md b/website/docs/admin_settings_local.md
index b254beb53b..8935b29fb5 100644
--- a/website/docs/admin_settings_local.md
+++ b/website/docs/admin_settings_local.md
@@ -16,13 +16,19 @@ OpenPype stores some of its settings and configuration in local file system. Th
 
 ## Categories
 
 ### OpenPype Mongo URL
+The **Mongo URL** is the database URL given by your Studio. More details [here](artist_getting_started.md#mongodb).
 
 ### General
+**OpenPype Username** : enter your username (if not provided, it uses the computer session username by default). This username is used to sign your actions in **OpenPype**, for example the "author" on a publish.
+**Admin permissions** : When enabled you do not need to enter a password (if defined in Studio Settings) to access the **Admin** section.
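As an illustration for the Mongo URL category above, a typical MongoDB connection string has the following shape — the host, port and credentials below are placeholders, not real values (the `OPENPYPE_MONGO` name mirrors the environment variable OpenPype reads, shown here purely for illustration):

```python
# Placeholder connection string; use the URL provided by your studio.
OPENPYPE_MONGO = "mongodb://openpype-user:password@mongo.studio.local:27017"
```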
 ### Experimental tools
+Future versions of existing tools, or new ones.
 
+### Environments
+Local replacement of each software's environment data, plus additional internal data necessary for it to load correctly.
 
 ### Applications
+Local override of software executable paths for each version. More details [here](admin_settings_system.md#applications).
 
 ### Project Settings
-
-
+The **Project Settings** allow you to determine the root folder. More details [here](module_site_sync.md#local-settings).
diff --git a/website/docs/admin_settings_system.md b/website/docs/admin_settings_system.md
index d61713ccd5..8abcefd24d 100644
--- a/website/docs/admin_settings_system.md
+++ b/website/docs/admin_settings_system.md
@@ -102,6 +102,10 @@ workstation that should be submitting render jobs to muster via OpenPype.
 
 **`templates mapping`** - you can customize Muster templates to match your existing setup here.
 
+### Royal Render
+
+**`Royal Render Root Paths`** - multi-platform paths to the Royal Render installation.
+
 ### Clockify
 
 **`Workspace Name`** - name of the clockify workspace where you would like to be sending all the timelogs.
diff --git a/website/docs/artist_hosts_houdini.md b/website/docs/artist_hosts_houdini.md
index 0471765365..940d5ac351 100644
--- a/website/docs/artist_hosts_houdini.md
+++ b/website/docs/artist_hosts_houdini.md
@@ -132,3 +132,25 @@ switch versions between different hda types.
 When you load hda, it will install its type in your hip file and add published
 version as its definition file. When you switch version via Scene Manager, it will
 add its definition and set it as preferred.
+
+## Publishing and loading BGEO caches
+
+There is simple support for publishing and loading **BGEO** files in all supported compression variants.
+
+### Creating BGEO instances
+
+Select your SOP node to be exported as BGEO. If your selection is at the object level, OpenPype will try to find an `output` node inside; the one with the lowest index will be used:
+
+![BGEO output node](assets/houdini_bgeo_output_node.png)
+
+Then you can open the Publisher and select **BGEO PointCache** in Create:
+
+![BGEO Publisher](assets/houdini_bgeo-publisher.png)
+
+You can select the compression type and whether the current selection should be connected to the ROP's SOP path parameter. Publishing will produce a sequence of files based on your timeline settings.
+
+### Loading BGEO
+
+Select your published BGEO subsets in the Loader, right-click and load them in:
+
+![BGEO Publisher](assets/houdini_bgeo-loading.png)
diff --git a/website/docs/artist_publish.md b/website/docs/artist_publish.md
index 321eb5c56a..b1be2e629e 100644
--- a/website/docs/artist_publish.md
+++ b/website/docs/artist_publish.md
@@ -33,39 +33,41 @@ The Instances are categorized into ‘families’ based on what type of data the
 
 Following family definitions and requirements are OpenPype defaults and what we consider good industry practice, but most of the requirements can be easily altered to suit the studio or project needs.
 
 Here's a list of supported families
 
-| Family | Comment | Example Subsets |
-| ----------------------- | ------------------------------------------------ | ------------------------- |
-| [Model](#model) | Cleaned geo without materials | main, proxy, broken |
-| [Look](#look) | Package of shaders, assignments and textures | main, wet, dirty |
-| [Rig](#rig) | Characters or props with animation controls | main, deform, sim |
-| [Assembly](#assembly) | A complex model made from multiple other models. | main, deform, sim |
-| [Layout](#layout) | Simple representation of the environment | main, |
-| [Setdress](#setdress) | Environment containing only referenced assets | main, |
-| [Camera](#camera) | May contain trackers or proxy geo | main, tracked, anim |
-| [Animation](#animation) | Animation exported from a rig. | characterA, vehicleB |
-| [Cache](#cache) | Arbitrary animated geometry or fx cache | rest, ROM , pose01 |
-| MayaAscii | Maya publishes that don't fit other categories | |
-| [Render](#render) | Rendered frames from CG or Comp | |
-| RenderSetup | Scene render settings, AOVs and layers | |
-| Plate | Ingested, transcode, conformed footage | raw, graded, imageplane |
-| Write | Nuke write nodes for rendering | |
-| Image | Any non-plate image to be used by artists | Reference, ConceptArt |
-| LayeredImage | Software agnostic layered image with metadata | Reference, ConceptArt |
-| Review | Reviewable video or image. | |
-| Matchmove | Matchmoved camera, potentially with geometry | main |
-| Workfile | Backup of the workfile with all its content | uses the task name |
-| Nukenodes | Any collection of nuke nodes | maskSetup, usefulBackdrop |
-| Yeticache | Cached out yeti fur setup | |
-| YetiRig | Yeti groom ready to be applied to geometry cache | main, destroyed |
-| VrayProxy | Vray proxy geometry for rendering | |
-| VrayScene | Vray full scene export | |
-| ArnodldStandin | All arnold .ass archives for rendering | main, wet, dirty |
-| LUT | | |
-| Nukenodes | | |
-| Gizmo | | |
-| Nukenodes | | |
-| Harmony.template | | |
-| Harmony.palette | | |
+| Family                  | Comment                                                            | Example Subsets           |
+|-------------------------|--------------------------------------------------------------------|---------------------------|
+| [Model](#model)         | Cleaned geo without materials                                      | main, proxy, broken       |
+| [Look](#look)           | Package of shaders, assignments and textures                       | main, wet, dirty          |
+| [Rig](#rig)             | Characters or props with animation controls                        | main, deform, sim         |
+| [Assembly](#assembly)   | A complex model made from multiple other models.                   | main, deform, sim         |
+| [Layout](#layout)       | Simple representation of the environment                           | main,                     |
+| [Setdress](#setdress)   | Environment containing only referenced assets                      | main,                     |
+| [Camera](#camera)       | May contain trackers or proxy geo; only a single camera expected.  | main, tracked, anim       |
+| [Animation](#animation) | Animation exported from a rig.                                     | characterA, vehicleB      |
+| [Cache](#cache)         | Arbitrary animated geometry or fx cache                            | rest, ROM , pose01        |
+| MayaAscii               | Maya publishes that don't fit other categories                     |                           |
+| [Render](#render)       | Rendered frames from CG or Comp                                    |                           |
+| RenderSetup             | Scene render settings, AOVs and layers                             |                           |
+| Plate                   | Ingested, transcode, conformed footage                             | raw, graded, imageplane   |
+| Write                   | Nuke write nodes for rendering                                     |                           |
+| Image                   | Any non-plate image to be used by artists                          | Reference, ConceptArt     |
+| LayeredImage            | Software agnostic layered image with metadata                      | Reference, ConceptArt     |
+| Review                  | Reviewable video or image.                                         |                           |
+| Matchmove               | Matchmoved camera, potentially with geometry; allows multiple cameras even with planes. | main |
+| Workfile                | Backup of the workfile with all its content                        | uses the task name        |
+| Nukenodes               | Any collection of nuke nodes                                       | maskSetup, usefulBackdrop |
+| Yeticache               | Cached out yeti fur setup                                          |                           |
+| YetiRig                 | Yeti groom ready to be applied to geometry cache                   | main, destroyed           |
+| VrayProxy               | Vray proxy geometry for rendering                                  |                           |
+| VrayScene               | Vray full scene export                                             |                           |
+| ArnoldStandin           | All arnold .ass archives for rendering                             | main, wet, dirty          |
+| LUT                     |                                                                    |                           |
+| Nukenodes               |                                                                    |                           |
+| Gizmo                   |                                                                    |                           |
+| Nukenodes               |                                                                    |                           |
+| Harmony.template        |                                                                    |                           |
+| Harmony.palette         |                                                                    |                           |
@@ -161,7 +163,7 @@ Example Representations:
 
 ### Animation
 
 Published result of an animation created with a rig. Animation can be extracted
-as animation curves, cached out geometry or even fully animated rig with all the controllers. 
+as animation curves, cached out geometry or even fully animated rig with all the controllers.
 Animation cache is usually defined by a rigger in the rig file of a character or by FX TD in
 the effects rig, to ensure consistency of outputs.
diff --git a/website/docs/assets/admin_hosts_photoshop_settings.png b/website/docs/assets/admin_hosts_photoshop_settings.png
index aaa6ecbed7..9478fbedf7 100644
Binary files a/website/docs/assets/admin_hosts_photoshop_settings.png and b/website/docs/assets/admin_hosts_photoshop_settings.png differ
diff --git a/website/docs/assets/houdini/update-houdini-vars-context-change.png b/website/docs/assets/houdini/update-houdini-vars-context-change.png
new file mode 100644
index 0000000000..74ac8d86c9
Binary files /dev/null and b/website/docs/assets/houdini/update-houdini-vars-context-change.png differ
diff --git a/website/docs/assets/houdini_bgeo-loading.png b/website/docs/assets/houdini_bgeo-loading.png
new file mode 100644
index 0000000000..e8aad66f43
Binary files /dev/null and b/website/docs/assets/houdini_bgeo-loading.png differ
diff --git a/website/docs/assets/houdini_bgeo-publisher.png b/website/docs/assets/houdini_bgeo-publisher.png
new file mode 100644
index 0000000000..5c3534077f
Binary files /dev/null and b/website/docs/assets/houdini_bgeo-publisher.png differ
diff --git a/website/docs/assets/houdini_bgeo_output_node.png b/website/docs/assets/houdini_bgeo_output_node.png
new file mode 100644
index 0000000000..160f0a259b
Binary files /dev/null and b/website/docs/assets/houdini_bgeo_output_node.png differ
diff --git a/website/docs/assets/settings/settings_local.png b/website/docs/assets/settings/settings_local.png
index d2cf1c920d..725c332747 100644
Binary files a/website/docs/assets/settings/settings_local.png and b/website/docs/assets/settings/settings_local.png differ
diff --git a/website/docs/dev_colorspace.md b/website/docs/dev_colorspace.md
index c4b8e74d73..cb07cb18a0 100644
--- a/website/docs/dev_colorspace.md
+++ b/website/docs/dev_colorspace.md
@@ -80,7 +80,7 @@ from openpype.pipeline.colorspace import (
 class YourLoader(api.Loader):
     def load(self, context, name=None, namespace=None, options=None):
-        path = self.fname
+        path = self.filepath_from_context(context)
         colorspace_data = context["representation"]["data"].get("colorspaceData", {})
         colorspace = (
             colorspace_data.get("colorspace")
diff --git a/website/docs/module_royalrender.md b/website/docs/module_royalrender.md
new file mode 100644
index 0000000000..2b75fbefef
--- /dev/null
+++ b/website/docs/module_royalrender.md
@@ -0,0 +1,37 @@
+---
+id: module_royalrender
+title: Royal Render Administration
+sidebar_label: Royal Render
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+## Preparation
+
+For [Royal Render](https://www.royalrender.de/) support you need to set a few things up in both OpenPype and Royal Render itself.
+
+1. Deploy OpenPype executable to all nodes of Royal Render farm. See [Install & Run](admin_use.md)
+
+2. Enable Royal Render Module in the [OpenPype Admin Settings](admin_settings_system.md#royal-render).
+
+3. Point OpenPype to your Royal Render installation in the [OpenPype Admin Settings](admin_settings_system.md#royal-render).
+
+4. Install our custom plugin and scripts to your RR repository. It should be as simple as copying content of `openpype/modules/royalrender/rr_root` to `path/to/your/royalrender/repository`.
+
+
+## Configuration
+
+OpenPype integration for Royal Render consists of pointing RR to the location of the OpenPype executable. That is done by copying `_install_paths/OpenPype.cfg` to
+the RR root folder. This file contains reasonable defaults. They can be changed in this file or via the Render apps in `rrControl`.
+
+
+## Debugging
+
+The current implementation uses a dynamically built '.xml' file which is stored in a temporary folder accessible by RR. In case of unforeseeable issues it might make sense to
+take this OpenPype-built file and run it via the `*__rrServerConsole` executable from the command line.
+
+## Known issues
+
+Currently, environment values set in OpenPype are not propagated into render jobs on RR. For now it is the studio's responsibility to synchronize environment variables from OpenPype with all render nodes.
diff --git a/website/docs/project_settings/settings_project_global.md b/website/docs/project_settings/settings_project_global.md
index 5ddf247d98..27aa60a464 100644
--- a/website/docs/project_settings/settings_project_global.md
+++ b/website/docs/project_settings/settings_project_global.md
@@ -189,10 +189,10 @@ A profile may generate multiple outputs from a single input. Each output must de
 - Profile filtering defines which group of output definitions is used but output definitions may require more specific filters on their own.
 - They may filter by subset name (regex can be used) or publish families. Publish families are more complex as are based on knowing code base.
 - Filtering by custom tags -> this is used for targeting to output definitions from other extractors using settings (at this moment only Nuke bake extractor can target using custom tags).
-  - Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewDataMov/outputs/baking/add_custom_tags`
+  - Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewIntermediates/outputs/baking/add_custom_tags`
 - Filtering by input length. Input may be video, sequence or single image. It is possible that `.mp4` should be created only when input is video or sequence and to create review `.png` when input is single frame. In some cases the output should be created even if it's single frame or multi frame input.
-
+
 ### Extract Burnin
 
 Plugin is responsible for adding burnins into review representations.
@@ -226,13 +226,13 @@ A burnin profile may set multiple burnin outputs from one input. The burnin's na
 | **Bottom Centered** | Bottom center content. | str | "{username}" |
 | **Bottom Right** | Bottom right corner content. | str | "{frame_start}-{current_frame}-{frame_end}" |
 
-Each burnin profile can be configured with additional family filtering and can 
-add additional tags to the burnin representation, these can be configured under 
+Each burnin profile can be configured with additional family filtering and can
+add additional tags to the burnin representation, these can be configured under
 the profile's **Additional filtering** section.
 
 :::note Filename suffix
-The filename suffix is appended to filename of the source representation. For 
-example, if the source representation has suffix **"h264"** and the burnin 
+The filename suffix is appended to filename of the source representation. For
+example, if the source representation has suffix **"h264"** and the burnin
 suffix is **"client"** then the final suffix is **"h264_client"**.
 :::
 
@@ -343,6 +343,10 @@ One of the key advantages of this feature is that it allows users to choose the
 
 In some cases, these DCCs (Nuke, Houdini, Maya) automatically add a rendering path during the creation stage, which is then used in publishing. Creators and extractors of such DCCs need to use these profiles to fill paths in DCC's nodes to use this functionality.
 
+:::note
+Maya's setting `project_settings/maya/RenderSettings/default_render_image_folder` is overwritten by the custom staging dir.
+:::
+
 The custom staging folder uses a path template configured in `project_anatomy/templates/others` with `transient` being a default example path that could be used. The template requires a 'folder' key for it to be usable as custom staging folder.
 
 ##### Known issues
diff --git a/website/docs/pype2/admin_presets_plugins.md b/website/docs/pype2/admin_presets_plugins.md
index 6a057f4bb4..b5e8a3b8a8 100644
--- a/website/docs/pype2/admin_presets_plugins.md
+++ b/website/docs/pype2/admin_presets_plugins.md
@@ -534,8 +534,7 @@ Plugin responsible for generating thumbnails with colorspace controlled by Nuke.
 }
 ```
 
-### `ExtractReviewDataMov`
-
+### `ExtractReviewIntermediates`
 `viewer_lut_raw` **true** will publish the baked mov file without any colorspace conversion. It will be baked with the workfile workspace. This can happen in case the Viewer input process uses baked screen space luts.
 
 #### baking with controlled colorspace
diff --git a/website/sidebars.js b/website/sidebars.js
index 267cc7f6d7..b885181fb6 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -111,6 +111,7 @@ module.exports = {
                 "module_site_sync",
                 "module_deadline",
                 "module_muster",
+                "module_royalrender",
                 "module_clockify",
                 "module_slack"
             ],
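As a closing illustration for the Royal Render setup described in `module_royalrender.md` above, the deployment step — copying `rr_root` into the RR repository and the `OpenPype.cfg` install-path file to the RR root — might look like the sketch below. All destination paths are placeholders for your own Royal Render locations:

```python
# Hedged sketch of the Royal Render deployment step; adjust the paths to
# your own Royal Render repository and root before running.
import shutil

RR_REPOSITORY = "/mnt/royalrender/repository"  # placeholder
RR_ROOT = "/mnt/royalrender"                   # placeholder

# Copy OpenPype's custom RR plugins and scripts into the RR repository.
shutil.copytree(
    "openpype/modules/royalrender/rr_root",
    RR_REPOSITORY,
    dirs_exist_ok=True,  # requires Python 3.8+
)

# Point RR at the OpenPype executable via the bundled config defaults.
shutil.copy(
    "openpype/modules/royalrender/rr_root/_install_paths/OpenPype.cfg",
    RR_ROOT,
)
```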